When matter and anti-matter meet, they annihilate each other in a “flash” of energy. Usually, this energy is released in the form of high-energy photons, or gamma rays, which are then detected, analysed, and interpreted to reconstruct the other properties of the collision. In nature, however, matter/anti-matter collisions are ultra-rare if not altogether non-existent because anti-matter itself is so scarce.
Such annihilation processes are important not just to supplement our understanding of particle physics but also because they play a central role in the design of hadron colliders. Such colliders use strongly interacting particles (the working definition of hadrons), such as protons and neutrons, and smash them into each other. The target particles, depending on experimental necessities, may be stationary – in which case the collider is said to employ a fixed target – or moving. The Large Hadron Collider (LHC) is the world’s largest and most powerful hadron collider, and it uses moving targets, i.e., both the incident and target hadrons move toward each other.
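To see why head-on beams pay off, compare the centre-of-mass energies of the two configurations (a standard kinematic result, quoted here in natural units, with E the beam energy and m the target’s mass):

$$\sqrt{s}_{\,\text{collider}} = 2E, \qquad \sqrt{s}_{\,\text{fixed target}} \approx \sqrt{2Em} \quad (E \gg m)$$

For two 6.5 TeV proton beams, as at the LHC, the first expression gives 13 TeV; the same 6.5 TeV beam striking a stationary proton would yield only about 0.11 TeV.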
Currently, it is known that a hadronic collision is explicable in terms of the hadrons’ constituent particles, quarks and gluons. Quarks are the proverbial fundamental building blocks of all matter, and gluons are the particles that allow two quarks to “stick” together, behaving like glue. More specifically, gluons mediate the strong force (one of the four fundamental forces of nature): in other words, quarks interact by exchanging gluons.
Parton distribution functions
Before the quark-gluon model was known, a hadronic collision was broken down in terms of hypothetical particles called partons, an idea suggested by Richard Feynman in 1969. At very high energies – such as the ones at which collisions occur at the LHC – the parton model approximates the hadron as a collection of point-like targets, and its content is captured by parton-distribution functions (PDFs), which give the probability of finding a parton carrying a given fraction of the hadron’s momentum. PDFs, in turn, allow for the prediction of the composition of the debris resulting from the collisions. Theoretical calculations pertaining to different collision environments and outcomes are used to derive different PDFs for each process, which physicists then use to design hadron colliders and interpret their results.
(If you can work in FORTRAN, here are some PDFs to work with.)
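For readers who would rather poke at the idea than at FORTRAN, here is a minimal Python sketch of what a PDF looks like. The functional form and its parameters are illustrative assumptions (a generic valence-quark-like shape), not a fitted distribution:

```python
import numpy as np

# Toy parton distribution: x*f(x) = N * x^a * (1 - x)^b.
# A generic valence-quark-like shape; real PDF sets are fitted to
# data and also depend on the momentum scale mu of the collision.
def xf(x, N=2.0, a=0.5, b=3.0):
    return N * x**a * (1.0 - x)**b

x = np.linspace(1e-3, 1.0, 1000)

# Integrating x*f(x) over x gives the fraction of the hadron's
# momentum carried by this parton species.
momentum_fraction = np.trapz(xf(x), x)
print(f"momentum fraction carried by this species: {momentum_fraction:.3f}")
```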
Once the quark-gluon model was in place, it showed no significant deviations from the parton model. At the same time, because quarks have a corresponding anti-matter “form”, anti-quarks, a model had to be developed that could describe quark/anti-quark collisions during the course of a hadronic collision, especially one that could factor in the production of pairs of leptons during such collisions. Such a model was developed by Sidney Drell and Tung-Mow Yan in 1970, and was called the Drell-Yan (DY) process; it is further complemented by a phenomenon called Bjorken scaling (Bsc).
(In Bsc, when the energy of the incoming lepton in a collision is sufficiently high, the cross-section for the collision becomes independent of the momentum transferred. In other words, the lepton, say an electron, at very high energies interacts with a hadron not as if the latter were a single particle but as if it were composed of point-like targets called partons.)
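In symbols (standard notation, supplied here for concreteness): if Q² is the squared four-momentum transferred by the lepton and p·q is the product of the hadron’s and the exchanged photon’s four-momenta, the scaling variable and the scaling statement read

$$x = \frac{Q^2}{2\,p\cdot q}, \qquad F_2(x, Q^2) \;\xrightarrow{\;Q^2 \to \infty\;}\; F_2(x)$$

i.e., at high energies the structure function depends on the dimensionless ratio x alone, not on Q² itself.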
In a DY process, a quark from one hadron collides with an anti-quark from another hadron, and the two annihilate each other to produce a virtual photon (γ*). The γ* then decays to form a dilepton pair which, if treated as a single entity instead of as two paired particles, can be said to have an (invariant) mass M.
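Schematically, with M read off from the leptons’ four-momenta (standard notation, added here for concreteness):

$$q\,\bar{q} \;\to\; \gamma^{*} \;\to\; \ell^{+}\ell^{-}, \qquad M^{2} = \left(p_{\ell^{+}} + p_{\ell^{-}}\right)^{2}$$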
Now, if M is large, then Heisenberg’s uncertainty principle tells us that the time of interaction between the quark/anti-quark pair must have been short, essentially limiting its interaction with any other partons in the colliding hadrons. Meanwhile, on a timescale that is long in comparison to that of the annihilation, the other, spectator, partons rearrange themselves into the resultant hadrons. In most cases, however, it is the dilepton that is detected and momentum-analysed, not the outgoing hadrons. The DY process produces dilepton pairs over a range of masses, and because these masses are very closely spaced, they trace out a band, or continuum, within which a dilepton pair might be produced.
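Since the dilepton is what gets momentum-analysed, the quantity computed event by event is this invariant mass. A minimal Python sketch, assuming four-momenta in GeV and the (+, −, −, −) metric; the two muon four-momenta below are made-up illustrative numbers:

```python
import numpy as np

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system from four-momenta
    given as (E, px, py, pz), using the (+, -, -, -) metric."""
    E, px, py, pz = p1 + p2
    return np.sqrt(E**2 - px**2 - py**2 - pz**2)

# Hypothetical detector-level muon four-momenta (GeV):
mu_plus  = np.array([45.2,  12.1, -30.4,  31.0])
mu_minus = np.array([48.7, -10.8,  29.9, -36.7])

print(f"dilepton mass M = {invariant_mass(mu_plus, mu_minus):.1f} GeV")
```

Binning many such masses over many events is what traces out the DY continuum.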
Quantum chromodynamics and quark-parton transitions
Quark/anti-quark annihilation is of special significance in quantum chromodynamics (QCD), which studies the colour force: the force, mediated by gluons, that acts between quarks and anti-quarks inside hadrons. The strong field that gluons mediate is, in quantum-mechanical terms, called the colour field. Unlike QED (quantum electrodynamics) or classical mechanics, QCD allows for two strange kinds of behaviour from quarks and gluons. The first, called confinement, holds that the force between two interacting quarks does not diminish as they are separated. This doesn’t merely mean that quarks interact strongly at large distances! No, it means that once two quarks have come together, no amount of energy can pull them apart into isolated, free quarks. The second, called asymptotic freedom (AF), holds that quarks and gluons interact weakly at high energies.
(If you think about it, colour confinement implies that gluons can emit gluons, and as the separation between two quarks increases, so does the rate of gluon emission. Conversely, as the separation decreases, or, equivalently, as the relative four-momentum squared increases, the force holding the quarks together decreases monotonically in strength, leading to asymptotic freedom.)
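Asymptotic freedom can be made quantitative with the standard one-loop formula for the running strong coupling. The Python sketch below assumes a representative value of Λ_QCD ≈ 0.2 GeV and five active quark flavours; it is illustrative, not a precision fit:

```python
import numpy as np

def alpha_s(Q, n_f=5, lambda_qcd=0.21):
    """One-loop running strong coupling.

    Q and lambda_qcd are in GeV; n_f is the number of active quark
    flavours. Formula: 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2)).
    """
    return 12.0 * np.pi / ((33.0 - 2.0 * n_f) * np.log(Q**2 / lambda_qcd**2))

# The coupling weakens as the energy scale Q grows: asymptotic freedom.
for Q in (2.0, 10.0, 91.2, 1000.0):  # GeV
    print(f"alpha_s({Q:7.1f} GeV) = {alpha_s(Q):.3f}")
```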
The definitions of both properties are deeply rooted in experimental observation: colour confinement was adopted to explain the consistent failure of free-quark searches, while asymptotic freedom describes how the behaviour of quarks changes between the high- and low-energy scales without any phase-transition line separating the two. Therefore, the DY process seemed well poised to provide some indirect proof of the experimental validity of QCD if some relation could be found between the collision cross-section and the particles’ colour charge, and this is just what was done.
The QCD factorization theorem can be read as:

$$F_i(x, Q^2) \;=\; \sum_a \int_x^1 \frac{d\xi}{\xi}\, f_a(\xi, \mu)\; \hat{\sigma}_i^a\!\left(\frac{x}{\xi}, \frac{Q^2}{\mu^2}, \alpha_s(\mu)\right)$$

Here, α_s(μ) is the effective chromodynamic (quark-gluon-quark) coupling at a factorization scale μ. Further, f_a(x, μ) defines the probability of finding a parton a within a nucleon with the Bjorken scaling variable x at the scale μ. Also, σ̂_i^a is the hard-scattering cross-section of the electroweak vector boson on the parton. The physical implication is that the nucleonic structure function is obtained as the convolution of the function describing the probability of finding a parton inside the nucleon with the hard-scattering cross-section, summed over all parton species.
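The convolution can be evaluated numerically. In the Python sketch below, both the parton density and the hard cross-section are toy functions chosen only to make the integral concrete; neither is a physical fit:

```python
import numpy as np
from scipy.integrate import quad

def f_a(xi):
    """Toy parton density f_a(xi, mu) at some fixed scale mu."""
    return 2.0 * xi**-0.5 * (1.0 - xi)**3

def sigma_hat(z):
    """Toy hard-scattering cross-section, a smooth function of z = x/xi."""
    return 1.0 / (1.0 + z)

def F(x):
    """Factorization convolution for a single parton species:
    F(x) = integral over xi from x to 1 of f_a(xi) * sigma_hat(x/xi) / xi."""
    integrand = lambda xi: f_a(xi) * sigma_hat(x / xi) / xi
    value, _ = quad(integrand, x, 1.0)
    return value

for x in (0.01, 0.1, 0.3):
    print(f"F({x}) = {F(x):.3f}")
```

Summing such terms over all parton species a reproduces the right-hand side of the theorem.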
This scaling behaviour, established within QCD, is what makes predictions about particle phenomenology possible at energies yet to be explored.