I’d linked to a preprint paper [PDF] on arXiv a couple of days ago that summarized the search for supersymmetry (Susy) in the first run of the Large Hadron Collider (LHC). I’d written to one of the paper’s authors, Pascal Pralavorio at CERN, seeking some insight into his summary, but unfortunately he couldn’t reply before I published the post. He replied this morning, and I’ve summed up his points below.
Pascal says physicists trained their detectors to look for “the simplest extension of the Standard Model” built on supersymmetric principles, called the Minimal Supersymmetric Standard Model (MSSM) and formulated in the early 1980s. This meant they were looking for a total of 35 particles. In the first run, the LHC operated at two different energies: first at 7 TeV (with an integrated luminosity of 5 fb⁻¹), then at 8 TeV (at 20 fb⁻¹; explainer here). The data was garnered from both the ATLAS and CMS detectors.
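For a rough sense of what those dataset sizes mean: the expected number of events of a given type is just its production cross-section multiplied by the integrated luminosity. The sketch below shows the arithmetic; the 1 pb cross-section is a made-up placeholder, not a number from the paper.

```python
# Rough sketch: expected event count = cross-section x integrated luminosity.
# The 1 pb cross-section is a made-up placeholder, not taken from the paper.

FB_PER_PB = 1000.0  # 1 picobarn = 1,000 femtobarns

def expected_events(cross_section_pb: float, integrated_lumi_fb: float) -> float:
    """N = sigma * L_int, with sigma in picobarns and L_int in inverse femtobarns."""
    return cross_section_pb * FB_PER_PB * integrated_lumi_fb

print(expected_events(1.0, 5.0))   # ~5,000 events in the 7 TeV dataset (5 fb^-1)
print(expected_events(1.0, 20.0))  # ~20,000 events in the 8 TeV dataset (20 fb^-1)
```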
In all, they found nothing. As Pascal puts it: “When you find nothing, you don’t know if you are close or far from it!”
His paper has an interesting chart that summarizes the results of the search for Susy in Run 1. It is actually a superimposition of two charts: one shows the different Standard Model processes (particle production, particle decays, etc.) at different energies (200–1,600 GeV); the other shows the Susy processes thought to occur at those energies.
The cross-section plotted in the chart is the probability of a given event type occurring in a proton-proton collision, so what you can read off the plot is a ratio of probabilities. For example, stop-stop* production (the top quark’s Susy partner particle and anti-particle, respectively) at a mass of 400 GeV is 10¹⁰ (10 billion) times less probable than inclusive di-jet events (a Standard Model process). “In other words,” Pascal says, it is “very hard to find” a Susy process while Standard Model processes are on, but it is possible for highly trained particle physicists to get there.
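To get a feel for what a suppression of 10¹⁰ does to your statistics, here is a back-of-the-envelope sketch; the di-jet count below is a hypothetical placeholder, and the only input taken from the text is the ratio itself.

```python
# Back-of-the-envelope: if stop-stop* production is 10^10 times rarer than
# inclusive di-jet production, how many stop pairs accompany a given
# (hypothetical) number of di-jet events?

DIJET_EVENTS = 1e12   # placeholder figure, not a measured number
SUPPRESSION = 1e10    # the ratio quoted above

stop_pairs = DIJET_EVENTS / SUPPRESSION
print(f"~{stop_pairs:.0f} stop pairs for every {DIJET_EVENTS:.0e} di-jet events")
# -> ~100 stop pairs for every 1e+12 di-jet events
```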
Of course, none of this means physicists aren’t open to the possibility of there being a theory (and corresponding particles out there) that even Susy mightn’t be able to explain. The most popular among such ideas posits a possible extra spatial dimension on top of the three that we already know. “We will of course continue to look for it and for supersymmetry in the second run,” Pascal says.
As a big week for physics comes up (a July 4 update by CERN on the search for the Higgs boson, followed by ICHEP 2012 in Melbourne), I feel really anxious as a small-time proto-journalist and particle-physics enthusiast. If CERN announces evidence that rules out the existence of such a thing as the Higgs particle, not much will be lost apart from years of theoretical groundwork set in place for the post-Higgs universe. Physicists working within the Standard Model will, to invoke the snowclone, scramble back to their blackboards and come up with another hypothesis that explains mass-generation in quantum-mechanical terms.
For me… I don’t know what it means. Sure, I will have to unlearn the Higgs mechanism, which does make a lot of sense, and scour the outpouring of scientific literature that will surely follow to keep track of new directions and, more fascinatingly, new thought. The competing supertheories, loop quantum gravity (LQG) and string theory, will have to have their innards adjusted to make up for the change in the mechanism of mass-generation. Even then, their principal bone of contention will remain unchanged: whether there exists an absolute frame of reference. All the while, the universe will have continued to witness the rise and fall of stars, galaxies and matter.
It is easier to consider the non-existence of the Higgs boson than its proven existence: the post-Higgs world is dark, riddled with problems that are more complex and, unsurprisingly, more philosophical. The two theories that dominated the first half of the previous century, quantum mechanics and special relativity, will still have to be reconciled. While special relativity holds causality and locality close to its heart, quantum mechanics’ tendency to violate the latter made it philosophically disagreeable to Einstein (in an ironic turn, his attempts to expose this “anomaly” quantitatively opened up the very line of inquiry that eventually made the implications of quantum mechanics more acceptable).
The theories’ impudent bickering continues in mathematical terms as well. While one prohibits anything from travelling faster than light, the other permits correlations that appear to act instantaneously across any distance (even if no usable signal can be sent that way). While one keeps every object nailed to one place in space and time, the other allows an object to occupy multiple regions of space at once. While one operates in a universe whose gods don’t play dice, the other can exist at all only if there are unseen powers gambling at every instant. If you ask me, I’d prefer the one with no gods; I also have a strange feeling that that’s not a physics problem.
Speaking of causality, physicists of the Standard Model believe that the four fundamental forces (strong nuclear, weak, gravitational, and electromagnetic) cause everything that happens in this universe. However, they are at a loss to explain why the weak force is about 10³² times stronger than the gravitational force (even the discovery of the Higgs boson won’t fix this, assuming the boson exists). One attempt to explain this anomaly goes by the name of supersymmetry (Susy) which, grafted onto the Standard Model, gives the MSSM. If an entity in the (hypothetical) likeness of the Higgs boson cannot exist, then the MSSM will fall with it.
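My own back-of-the-envelope version of that comparison (not from any of the sources above, and the exponent shifts by an order of magnitude or so depending on how the comparison is set up) pits the dimensionless weak and gravitational couplings of a pair of protons against each other:

```latex
% Order-of-magnitude comparison for two protons (illustrative only):
\alpha_{\text{grav}} = \frac{G_N\, m_p^2}{\hbar c} \approx 6 \times 10^{-39},
\qquad
\alpha_{\text{weak}} \approx \frac{G_F\, (m_p c^2)^2}{(\hbar c)^3} \approx 10^{-5},
\qquad
\frac{\alpha_{\text{weak}}}{\alpha_{\text{grav}}} \sim 10^{33}.
```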
Taunting physicists everywhere all the way through this mesh of intense speculation, Werner Heisenberg’s tragic formulation remains indefatigable. In a universe in which the scale at which physics is born is only hypothetical, and in which energy in its fundamental form is thought to be the result of probabilistic fluctuations in a quantum field, determinism plays a dominant role in fixing the future even as, in some ways, that probabilistic picture contradicts it. The quantum field, counter-intuitively, is antecedent to human intervention: Heisenberg postulated that physical quantities such as position and momentum, or a particle’s spin about different axes, come in conjugate pairs, and that measuring one member of a pair renders the other indeterminable. In other words, one cannot simultaneously know the position and momentum of a particle, or its spins about two different axes.
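In the standard textbook notation (nothing here beyond Heisenberg’s own relations), the trade-off reads:

```latex
% Position and momentum do not commute, so their uncertainties trade off:
[\hat{x}, \hat{p}] = i\hbar
\quad\Longrightarrow\quad
\Delta x\, \Delta p \ge \frac{\hbar}{2};
% the same holds for spin components about different axes:
[\hat{S}_x, \hat{S}_y] = i\hbar\, \hat{S}_z
\quad\Longrightarrow\quad
\Delta S_x\, \Delta S_y \ge \frac{\hbar}{2}\left|\langle \hat{S}_z \rangle\right|.
```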
To me, this seems like a problem of scale: humans are macroscopic in the sense that they manipulate objects using the laws of classical mechanics, not the laws of quantum mechanics. However, that sense of scale breaks down once you know that the dynamics of quantum mechanics affect the entire universe through a principle called the collapse postulate (i.e., collapse of the state vector): if I measure an observable physical property of a system that is in a particular state, I cause the entire system to collapse into a state described by one of the observable’s eigenstates. Further, there are many eigenstates the system could collapse into; which eigenstate is “chosen” is settled only in the act of observation (an awfully close analogue, to my mind, of the anthropic principle).
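Spelled out in the usual notation (again, just the textbook form of the postulate): if the system is in a state |ψ⟩ and the measured observable has eigenstates |φᵢ⟩, then

```latex
% Born rule: probability of obtaining the i-th outcome
P(i) = \left|\langle \varphi_i \mid \psi \rangle\right|^2 ,
% Collapse postulate: the measurement leaves the system in that eigenstate
|\psi\rangle \;\longrightarrow\; |\varphi_i\rangle .
```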
That reminds me. The greatest unsolved question, in my opinion, is whether the universe houses the brain or the brain houses the universe. To be honest, I started writing this post without knowing how it would end: there were multiple eigenstates it could “collapse” into. That it would collapse into this particular one was unknown to me, too, and, in hindsight, there was no way I could have known about any aspect of its destiny. Having said that, given the nature of the universe (and the brain/universe protogenesis problem), and granting deterministic causality and mensural antecedence: if the universe conceived the brain, the brain must inherit the characteristics of the universe, and therefore must not allow for free will.
Now, I’m faintly depressed. And yes, this eigenstate did exist in the possibility-space.
Would a mind’s computing strength be determined by its ability to make sense of counter-intuitive principles (Type I) or by its ability to solve an increasing number of simple problems in a second (Type II)?
Would Type I and Type II strengths translate into the same computing strength?
Does either Type I or Type II metric possess a local inconsistency that prevents its state-function from being continuous at all points?
Does either Type I or Type II metric possess an inconsistency that manifests as probabilistic eigenstates?