Wednesday, 21 May 2014

BICEP2 and Axions. A few comments.

After our paper on Axions and BICEP2 came out (here) we were contacted quite a bit by various media outlets for comment. One article appeared in Nature News. There is another due to appear in Quebec Science tomorrow. All the answering of interview questions made me think quite hard about explaining this business in a manner understandable to the lay person, and I think I got quite good at it. So I've decided to reproduce for you here the transcript of the interview I gave for Quebec Science. Their article only used a few of my quotes in the end, but this here is the whole shebang!

(P.S. Sorry for the weird formatting: I'm not really sure what happened)


Monday, 17 March 2014

B-eautiful tensors

That's what BB said.

Yes, I've been waiting my whole life to make a post title like this.
But seriously, if you haven't been hiding under a rock this morning you will have noticed the internet go crazy for the detection by BICEP2 of tensor modes, the 'smoking gun' of inflation. Even the NYTimes got in on the action.

The detection is parameterised by the tensor-to-scalar ratio "r", the ratio of tensor modes to the usual scalar modes whose spectrum we have characterised well with experiments like WMAP, Planck and ground-based experiments like ACT. This detection is r = 0.2 (+0.07, -0.05), where the two numbers give the upper and lower 68% confidence intervals. This means that the detection is significantly non-zero. Why hello, tensor modes.

The B-mode polarisation spectrum is shown here below, where all the other limits are just that, upper limits. This is pretty awesome if you think about how this fits in with all the efforts of so many.
Figure 1. The BB-mode spectrum from BICEP2 with previous data.


This detection is really exciting, and has implications not only for the specific theory of inflation and the kinds of models it supports - it also allows us to place constraints on other physics. For example, my colleagues David Marsh, Dan Grin, Pedro Ferreira and I wrote a paper investigating what the detection would mean for axion-like particle dark matter. Such a large value of r places a constraint on the energy scale of inflation, H_I, which in the axion model places constraints on the initial misalignment angle - leaving a model that has a high level of fine tuning (fine tuning in physics is generally considered a bad thing: you don't want to have to tweak your model to give you something reasonable, you want that reasonable thing to emerge organically). If we consider very light axions, then this constraint on r tells you about the fraction of the total dark matter that can be made up of these axion-like particles (as a function of their mass).

We show that this new constraint (indicated by the red curve) limits the fraction of the dark matter that can be made up of axions... which helps us rule out parameter space (which is a good thing!) You can read all about it here.

While the claimed detection of B-modes from BICEP2 is awesome and very exciting, it is also important to remain skeptical about possible systematics and issues with the detection. It is a very tough game, and this is such an important result that we need to make sure it passes all the possible tests we can throw at it. I for one am a little worried about leakage between temperature and polarisation in the spectrum. If you look at the cross-correlation between this measurement and the BICEP1 data, it seems that there is excess power on small scales (large multipoles).


Now it bears repeating that the BICEP2 result on r is only based on the scales 30 < ell < 150, but these high-ell issues need to be addressed, as leakage could bias your signal high (make the evidence for tensor modes stronger).
Another thing to worry about is foregrounds. The team have presented reasons why they think foregrounds are not an issue for a signal so large, and it looks like they've done their homework, but I'll spend the next few days digesting the paper in more detail.

Also, this is such a large signal that we need to think about why other experiments have not seen it. In fact, if you consider the figure below from their paper:

you might be worried about a conflict with the results presented by the Planck team last March. First of all, this plot is made by marginalising over the running of the spectral index, so it is beyond the "vanilla" model + tensor modes (the running is an extra parameter, which gives this model two free parameters relative to the base LCDM model without tensor modes at all).

So, the bottom line: I am excited by this (and so should you BB!) but there is more to understand and this result needs to be battle tested and confirmed. Long live the scientific process!!

To BB or not to BB.

Ok, I'm done. Happy BB-day all.

Monday, 17 February 2014

Belief in Quantum Gravity (and Cosmology)

Recently a few discussions have alerted me to the role of belief when it comes to theories of quantum gravity. This comes about essentially because of the huge energy scales involved in quantum gravity: we have no (direct) experimental access to the Planck scale.

Firstly, what is the Planck scale? We need a short lesson on units to get there. The Planck scale is what we assume to be the natural scale in gravity. As a mass scale it is approximately the square root of 1/G, where G is Newton's constant (I normally prefer to include a factor of 8 pi and call this "reduced Planck scale" simply "the" Planck scale, but that is a matter of preference, although, as we are discussing, preference is a driving factor here...). Planck noticed that "natural" units for physics can be established based on a few fundamental constants, that is, we measure things in units of those constants. The first is Planck's constant itself, h (or "h-bar" if you divide it by 2 pi), which measures units of angular momentum (Joule-seconds in SI), and is the fundamental constant associated to quantum mechanics. Next is the speed of light, c, which measures units of speed (duh!) (metres per second in SI), and is the fundamental constant associated to relativity. Finally, then, comes Newton's constant, G, which measures the force of the gravitational field of a body of fixed mass (per unit distance squared from that body, per unit mass of that body, per unit mass of the test particle feeling the force, which all follows from Newton's famous law of gravitation). Newton's constant also appears in Einstein's theory of general relativity, and so is associated to all gravitational physics (it is inserted by hand into general relativity to fix the units and the weak-field limit, but by consistency carries through the rest, in all that spectacularly verified glory).

Here's where the fun starts: we can measure *all* dimensionful quantities in physics in terms of these three constants. Let's focus on the Planck mass. First of all, notice that G involves masses, in particular two masses, and so mass squared (hence why we took the square root above). All the other things it involves can be expressed as appropriate powers of c and h. We can get acceleration using the units of c and part of h (the seconds bit), and we can also use c (via E=mc^2) to turn energies, i.e. the Joules part of h, into masses. That leaves G as just a measure of 1/mass^2, and the mass it measures is the Planck mass.
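If you want to see the number come out of this dimensional analysis, here is a minimal sketch in Python (the constants are approximate SI values; nothing here is specific to any particular theory):

```python
# Compute the Planck mass from the three fundamental constants.
import math

hbar = 1.055e-34   # J s            - quantum mechanics
c    = 2.998e8     # m / s          - relativity
G    = 6.674e-11   # m^3 kg^-1 s^-2 - gravity

m_planck = math.sqrt(hbar * c / G)                    # the Planck mass, in kg
m_planck_reduced = m_planck / math.sqrt(8 * math.pi)  # divide out the 8 pi

GeV = 1.602e-10    # one GeV, in Joules
print(f"Planck mass:         {m_planck:.2e} kg ({m_planck * c**2 / GeV:.1e} GeV)")
print(f"reduced Planck mass: {m_planck_reduced:.2e} kg ({m_planck_reduced * c**2 / GeV:.1e} GeV)")
```

This gives about 2 x 10^-8 kg for the Planck mass (a couple of hundredths of a milligram), or a few millionths of a gram for the reduced Planck mass, consistent with the proton-counting estimate in the next paragraph.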

Now, gravity is a very weak force. What does that mean? It means that for all the fundamental particles we know, if you consider the force between any two of them, the gravitational force is far weaker than any of the other forces (yes, even the Weak force). But, if there were a particle that weighed a Planck mass (which is about 10^18 times the mass of the proton, or roughly a few millionths of a gram, judging roughly from a mole of hydrogen, which contains about 6 x 10^23 protons and weighs a gram) then the strength of the force of gravity between those particles would be equal to the strength of all the other forces.

There is also that sneaky "per unit distance squared from that body", which means if you bring the particles closer together, gravity gets stronger. When you compute that change in force taking account of the appropriate quantum mechanics (the renormalisation group flow) then we find that all the forces not only change in this simple high-school physics way, but also fundamentally, as we go to short distances. The constants of nature "flow" with energy scale (though h and c, and debatably G, do not). This means that, in addition, gravity becomes comparable in strength to the other forces on very short distance scales, in fact at the Planck length (using our units we can change mass into length too). (If you want to read more about all of this, go and read Frank Wilczek's great book "The Lightness of Being")

Normally in computing quantum effects we can ignore gravity because it is so weak, but at the very high energies of the Planck scale, gravity becomes so strong that we cannot ignore it, and this is therefore the scale at which a theory of quantum gravity is needed. At all the energies below the Planck scale gravity was so weak that we could treat it as a "classical background". (It is a common misconception that physicists "cannot treat quantum mechanics and relativity at the same time".  We're actually very good at it: we can do so-called "quantum field theory in curved space-time", but to do this we always treat both halves separately, that is we have "classical space-time")

Okay, so now we are finally there and we can discuss why quantum gravity involves belief. It involves belief because the Planck scale is so very big. It is 10^18 GeV in particle physics units. The rest energy of a proton is about 1 GeV. The LHC runs at about 10^4 GeV. The biggest machine physicists can even think of making in the foreseeable future is about 10^5 GeV, which is still a very long way from the Planck scale. (I read somewhere that a particle accelerator capable of reaching the Planck scale would have to be the size of the solar system and use a large fraction of the sun's total output. I don't know where I read that, or how the maths was done) At these comparatively low energies we don't need to specify our theory of quantum gravity in order to do calculations in normal theories. As long as the quantum theory reduces to general relativity in the right limits, pretty much anything goes (although some things may not: they may "resist embedding", as recently and elegantly discussed in this paper).

The enormity of the Planck scale means we cannot do experiments to test quantum gravity directly. And this means that for the most part whether you think string theory is a better theory than loop quantum gravity, or vice versa, is based on your aesthetic opinion about those theories. The role of aesthetics in physics *is* important, and helps guide us towards new laws (for more on this read/watch Feynman's "Character of Physical Law", or read Weinberg's "Dreams of a Final Theory"). It is precisely that aesthetics that has even got us as far as being able to contemplate quantum gravity, but beliefs about aesthetics diverge at the edges of our knowledge.

I came to think about this recently during a conversation with a colleague. We were discussing what kind of indirect evidence could possibly be considered as for or against a given theory of quantum gravity, where by indirect I mean evidence discovered well below the Planck scale, either in cosmology or in a spectrum of new particles that could be found at a foreseeable collider. I was primarily thinking of whether this evidence could support a complex theory of quantum gravity with many possible solutions, in particular, the "string landscape". Certain solutions and low-energy physics scenarios appear "more likely" (in quotes because of the notorious measure problem: there is a *lot* to discuss here) in the landscape, and I argued that seeing such signals could be indirect evidence for the landscape (I do argue this a lot, and was particularly inspired by Paul Langacker's recent colloquium at PI on this subject, which you can see here). My colleague replied:

"In [theory of quantum gravity] which I believe in, the situation is..."

and we went on to try and interpret (unsatisfactorily, in my opinion) all such results in light of said theory. And so, it has become abundantly clear to me how important our beliefs are in interpreting indirect evidence. I guess this is obvious, but it does get a little worse. Earlier the same day, during a mini-conference, I had discussed this exact topic of indirect evidence pointing to string theory and the landscape. I asked the audience, "if we discovered ultra-light axions in cosmology, would you consider this a good pointer towards string theory and the landscape?". An audience member replied:

"No, I would try and interpret it in light of [theory of cosmology]"

I found this very honest, but depressing. The role of belief is so strong in the far and esoteric reaches of cosmology and quantum gravity that even when faced with a nominal prediction and hypothetical evidence for that prediction, someone cannot be convinced away from their beliefs. I'm not trying to be above all of this. I admit to being in a similar situation myself. I *believe* that the landscape is unavoidable, and that this behooves us to interpret the world in light of it. Why? Because, following Gell-Mann's "anything that isn't forbidden is mandatory" (quoted from that same elegant paper linked to above), the landscape has a much wider space of what is possible, and thus not forbidden, and is therefore an interesting playground that forces us to question all possible assumptions. As a phenomenologist I find this daunting, but I love the challenge of trying to find tell-tale needles in this haystack.

I wonder, even if we could do experiments up at the Planck scale, whether all parties could ever be convinced. If scattering carried a uniquely stringy signature (there are some proposed, but I don't know them well) could this still be "interpreted in light of [theory]"? On the flip-side, and this is more important to me, what types of evidence would I consider as being counter to my own beliefs that might force me to revise them?

Monday, 18 March 2013

'Twas the week before Planckmas...

This week will see cosmologists excitedly waiting for, and celebrating, the upcoming results from ESA's Planck satellite. We've been waiting for this day since the launch of Planck in 2009 (in fact, most people have been waiting for this day since the late 1990s, when the satellite was proposed, initially called COBRAS/SAMBA). This multi-national collaboration already released some data and results a year ago (on subjects such as point sources and clusters detected through their Sunyaev-Zel'dovich signature), but the first large suite of cosmology results will be announced on Thursday the 21st of March 2013, at a large press event.
Here at Princeton Astrophysics, we are having our own Planck Party at 5 am, an event which will no doubt have as much excitement as the pre-dawn Higgs party we had at the Institute for Advanced Study last summer.

So what is all the excitement about?

Until the Planck release, the tightest constraints at multipoles less than 1000 have come from NASA's WMAP satellite, which was recently awarded the Gruber Foundation Cosmology Prize. WMAP operated for nine years and really helped to pin down the cosmological model on the largest scales.


The plot above shows the power on the y-axis as a function of multipole (x-axis). Multipoles are inversely related to angle, that is, large angles correspond to small values of the multipole, while small angles correspond to large values of l.
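As a rough rule of thumb (the standard conversion between multipole and angle, nothing specific to this plot):

```latex
\theta \;\approx\; \frac{180^\circ}{\ell}\,,
\qquad\text{so}\qquad
\ell \simeq 100 \;\leftrightarrow\; \theta \simeq 2^\circ,
\qquad
\ell \simeq 1000 \;\leftrightarrow\; \theta \simeq 0.2^\circ.
```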

On smaller scales (i.e. to the right of this graph) two experiments have dominated the game recently, The Atacama Cosmology Telescope (based in the Chilean desert, and the collaboration I'm a part of) and the South Pole Telescope (no prizes for guessing where this telescope is!)

The gold points are the same as the black points in the top plot, but with a logarithmic scale on the y-axis. From this plot, it is clear to see how ACT and SPT provide all the signal at small scales - the WMAP data points end around l=1000. Combining the data from WMAP with these experiments helps us put tight limits on our cosmological model and on non-standard physics in the early universe.

Planck will improve on this picture by making the error bars much smaller on all scales. On large scales we are looking to see if any of the WMAP anomalies are present, and on intermediate scales (multipoles of 800 - 2000) Planck will also greatly reduce the error bars, where the WMAP error bars are large or unconstrained (see the linear scale plot at the top of the page).

This is particularly interesting for a parameter of recent interest, namely the effective number of relativistic species, or Neff. If we had three neutrino species (which is the standard picture) - Neff would be 3.046 (this number is not exactly three due to electron-positron annihilations in the early universe). It helps to think of the number in terms of extra neutrinos, but what Neff actually measures is if there was any extra (or less) energy from such a relativistic species. It doesn't specify what that species should be, and many authors have proposed some interesting candidates, from sterile neutrinos to `dark radiation'. If there was more relativistic energy when the CMB was formed, this would lead to a few interesting effects, the most obvious being the decrease in amplitude of the small scale Silk damping tail - the intrinsic CMB spectrum which drops in power as l increases. Of course, there are many degeneracies between Neff and other parameters, which is why better data (and independent data) help us tease apart the degeneracy.
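For reference, here is the standard definition that makes the "extra neutrinos" language precise (valid after electron-positron annihilation, with rho_gamma the photon energy density): each unit of Neff adds about 23% of the photon energy density in free-streaming relativistic species.

```latex
\rho_{\rm rad} \;=\; \rho_\gamma \left[\, 1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{\rm eff} \,\right]
\;\approx\; \rho_\gamma \left( 1 + 0.227\, N_{\rm eff} \right).
```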


All three experiments (WMAP, ACT and SPT) recently released their constraints on cosmological parameters including Neff (they are here, here and here).
The three experiments show some mild tension in the best-fit values of Neff (we discuss the consistency between them in a recent paper) - the plot above shows this. In both cases the ACT and SPT data are combined with the latest WMAP9 results. The left-most panel shows the one-dimensional curves for Neff, while the two right panels show error ellipses. Dark ellipses show models which are consistent with the data at 68% confidence, while the lighter ellipses show models consistent at 95% confidence. Any model outside of the ellipses is less than 5% likely to fit the data. The red lines/curves are for WMAP9 and ACT, the green for WMAP9 and SPT, and the black curves/contours show the combination of all three experiments together. While SPT sees a higher value of Neff than 3.046, at Neff = 3.74 +/- 0.47, and ACT a slightly lower value, with Neff = 2.90 +/- 0.53, the combined data are completely consistent with the standard picture: Neff = 3.37 +/- 0.42 (which may dismay or delight you, depending on your camp of interest!).

By improving the constraints on the power at intermediate scales, Planck should tell us more in a few days. This is particularly interesting because while ACT and SPT look at different regions of the sky (on smaller patches), Planck will release results based on the full sky - another independent measurement of the same underlying physics.

[There is a great post by Jester on Résonaances about Neff (posted just before the ACT constraints were released) written for those with a particle physics interest.]

Planck will also measure the weak lensing of the CMB by gravitational structures - an extremely subtle effect which moves power around on the maps of the CMB temperature on arcminute scales, but coherently over degrees. ACT and SPT have measured this deflection - and Planck will improve the errors on this measurement by a great deal on all scales. The deflection power spectrum is a strong probe of structure, and of things which would wash out that structure, such as massive neutrinos.

Another key constraint that will come from Planck is one on the non-Gaussianity of the initial conditions of the universe, which is a strong test of the various inflationary models out there today.

[There is an awesome TEDx talk by Ed Copeland on CMB physics and inflation which provides a nice summary of the link between the CMB and the early universe.]

One way to think of non-Gaussianity is by imagining a distribution with some level of skewness and kurtosis (so, a normal distribution that has been distorted). A simple picture for how to produce a two-dimensional temperature map from the power spectrum above is to generate a Gaussian realisation of the power spectrum - at each angular scale (defined by the multipole), use the power to define the variance in temperature on that scale. However, if the temperature field is non-Gaussian, then the full map is not described by the two-point function, or power spectrum: we need to use higher-order statistics to characterise the initial conditions if they are non-Gaussian! That is why we typically use the bispectrum (the three-point function) and higher-order statistical correlation functions to measure non-Gaussianity.
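To make that picture concrete, here is a toy sketch in Python (flat sky, numpy only, with a made-up power spectrum - nothing CMB-specific). The Gaussian map is completely specified by its power spectrum; adding a local-type quadratic term (the kind of distortion fNL parameterises) leaves a map with a very similar power spectrum but a skewness that only higher-order statistics can see.

```python
# Toy illustration: a Gaussian map is fully described by its power spectrum,
# while a local-type quadratic distortion produces skewness (non-Gaussianity).
import numpy as np

np.random.seed(0)                            # reproducible toy example
n = 256                                      # map is n x n pixels

kx = np.fft.fftfreq(n)
ky = np.fft.fftfreq(n)
k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
k[0, 0] = 1.0                                # avoid dividing by zero at the mean mode

power = k**-3.0                              # a made-up red power spectrum P(k)
power[0, 0] = 0.0                            # no power in the mean

# Draw Gaussian Fourier modes whose variance on each scale is set by P(k)
modes = np.random.normal(size=(n, n)) + 1j * np.random.normal(size=(n, n))
modes *= np.sqrt(power / 2.0)

gaussian_map = np.fft.ifft2(modes).real      # a Gaussian realisation of P(k)
gaussian_map /= gaussian_map.std()           # normalise to unit variance

# Local-type non-Gaussian distortion: add a small quadratic term (what fNL measures)
non_gaussian_map = gaussian_map + 0.3 * (gaussian_map**2 - 1.0)

def skewness(m):
    """Dimensionless third moment of a map."""
    return float(((m - m.mean())**3).mean() / m.std()**3)

print("skewness, Gaussian map    :", skewness(gaussian_map))      # close to zero
print("skewness, non-Gaussian map:", skewness(non_gaussian_map))  # clearly non-zero
```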

The WMAP bound is consistent with zero fNL (the parameter describing the level of non-Gaussianity, a quantity we expect to be vanishingly small in the simplest single-field models of inflation), with -3 < fNL < 77 at 95% confidence. However, Planck should shrink the errors on fNL from about 20 to just a few! If the central value of fNL = 37.2 found by WMAP remains while the errors decrease, we will put some serious pressure on many inflationary models - it is always a theoretical treat to find you aren't living in a `vanilla' universe.

These are only a few of the presents we are expecting on Thursday. Make sure to tune in to hear the results, and enjoy the flurry of papers on the latest cosmological bounds using the temperature of the CMB. For the polarisation measurements, you will still have a little wait before Planck (and ACTPol and SPTPol) entice you with more results - as it is an even more delicate procedure to tease out polarisation from these sensitive instruments.

Until then, we wait to boldly constrain where only a few experiments have constrained before...


Monday, 17 December 2012

The Folklore of the Untestability of String Theory


I recently received an email from an undergraduate after agreeing to give a talk to their society about string cosmology and testing string theory. The undergraduate expressed amazement about being able to test string theory, given the "folklore that it is untestable". I thought this warranted some explanation, obvious to any sensible Bayesian. Here is an extended version of my reply to this student.

I apologise for overusing the word "Bayesian", but I wanted to ram it down the student's throat: you, my discerning readers, may not need so much ramming. I also apologise to any better Bayesians than I for perhaps liberal use of terminology and butchering of concepts.

Firstly, the folklore applies to string theory "as a whole", rather than individual models. I will ignore the obvious point to be made that string theory "as a whole" is a beautiful mathematical framework and testing it is not the point. I will instead approach things as a cosmologist and a Bayesian.

The folklore is too simplistic, in the sense that one can always assume a model and verify its parameters. This is what one always does in a Bayesian philosophy of science, which is manifestly what practicing science actually is. However one is able to construct other models that may have similar consequences. This is often the case in fundamental theory: think of the plethora of models being tested at the LHC! (Although silly particle theorists aren't Bayesians and use silly concepts like the "look elsewhere effect". Tut tut...)

The selection for the models to test, outwith unexpected and contradictory results (lack of concordance, which we all hope and pray for), comes down to a selection based on priors (also Bayesian, except Bayesians choose to recognise them!). These priors are either (arbitrary) prejudice based on intuition, unification etc, or (less arbitrary) priors based on fine tuning and the ability to perform meaningful calculations. String theory and other theories fall into both camps on priors depending on your taste. In my opinion string theory falls into both at once positively and negatively. A Bayesian picks one model and tests it, then compares models to one another.

String theory comes up trumps in (practical) cosmology because it is complete enough to actually give meaningfully testable cosmological models, albeit very many of them. More standard particle theories also give many such models, as can explicitly non-fundamental models. Currently the data cannot tell them apart, therefore a Bayesian accepts both as equally likely modulo the priors. By this I mean that I accept modified gravity as equally likely as a cosmological constant based on the evidence, but my prior is a hard prejudice that it is not the truth.
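To make the Bayesian bookkeeping explicit, here is a cartoon in Python; every number is invented purely for illustration. When the evidences for two models are comparable, the posterior odds are dominated by the prior odds, which is exactly the sense in which model selection with limited data is prior driven.

```python
# Cartoon of Bayesian model comparison: posterior odds = Bayes factor x prior odds.
# All numbers are invented for illustration only.

evidence_A = 1.0e-5   # p(data | model A), e.g. "LCDM with a cosmological constant"
evidence_B = 0.7e-5   # p(data | model B), e.g. "a modified gravity model"

prior_odds_A_over_B = 10.0   # my prejudice in favour of A, made explicit

bayes_factor = evidence_A / evidence_B               # what the data alone say
posterior_odds = bayes_factor * prior_odds_A_over_B  # what I actually conclude

print(f"Bayes factor (A:B)   = {bayes_factor:.2f}")   # data barely prefer A
print(f"posterior odds (A:B) = {posterior_odds:.1f}") # the prior does most of the work
```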

An aside on this point: there are many theories of modified gravity and field theory Lagrangians. As many as low-energy models based on string theory? More? I don't know: what is the measure on theory space? Clearly all these low-energy theories (paradigms?) fall foul of being "untestable" based on the folklore, which I now hope you are starting to agree is fallacious.

The folklore refers to "complete" theories, so the testable other models referred to above in particle and non-fundamental theory (by which I mean low-energy modifications of gravity) are manifestly not "complete". But then, "completeness" is still an arbitrary prior.

The folklore is applied in an ideal world that may not exist. In this ideal world the many additional parameters needed to turn string theory into a model make it unpalatable (although, as I've said, we do measure some of them in the *context* of cosmological models. For example, I can construct a stringy model of inflation and use the scalar amplitude to bound some property of the compactification.). In this ideal world one can do experiments at arbitrarily high energy and across all of space-time and "test" whatever you like. But this is not the world we live in, certainly not in practice, and in fact maybe not even in theory. By "in theory" I mean we cannot do all the experiments required to gain complete knowledge of cosmology. In simple terms this is due to the special-relativity fact that events outside your light cone are inaccessible to you. In more refined terms it is bound up in complex arguments about the existence of observables in quantum gravity in de Sitter space and eternal inflation.

Anyhoo, practically we are always limited by our finitude and fallibility: we do not have access to a perfect world of infinite data at the Planck scale, and so we are Bayesians, comparing the models we have and measuring their parameters. Sometimes these models come from string theory, and we may prefer these models to low-energy field theory models for their being part of a larger framework. Or we may not. Model selection with insufficient data is prior driven. But you do need a theory that gives you models to test, and string theory certainly does that in cosmology!

Wednesday, 27 June 2012

Scalar Fields

Beginning this blog on the "s" theme of its title, I want to talk about scalar fields, or scalars. A scalar field is something that has a value at every point in space: the canonical example is temperature. There is a value, but no direction.

A magnetic field, on the other hand, points from north to south: it has a direction and is known as a vector field. If I look at the Earth the "normal" way up, I see this as going down on the map, but I can look at the Earth the other way up, because there is nothing special about the choice of map makers, and I see the magnetic field going up. Vectors care about how you look at them. They care about the frame of reference. Scalars don't care what direction you look at them from or how fast you are moving: they are Lorentz invariant.

For this reason, scalars are odd things, as they don't transform under the Lorentz symmetry of special relativity. This allows them to have a vacuum expectation value, or vev. A vev means that the scalar field has a value that is the same everywhere in space. This is not something a vector field could have without picking out a preferred direction in space, and violating a form of the cosmological principle. The vev has an energy, and it contributes to the cosmological constant, which affects the expansion rate of our universe.
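Schematically, in the standard bookkeeping (flat universe, V the scalar potential evaluated at the vev), a constant piece of the potential enters the Friedmann equation exactly like a cosmological constant:

```latex
\rho_{\rm vac} \;=\; V(\phi_{\rm vev}),
\qquad
H^2 \;=\; \frac{8\pi G}{3}\left(\rho_{\rm matter} + \rho_{\rm radiation} + \rho_{\rm vac}\right).
```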

Interestingly, this is contrary to the point of this video explaining the Higgs (which I found on Cosmic Variance). The video explains quantum fields as things whose ripples are what we observe: they are everywhere but we only see them when they ripple. However, the Higgs is a scalar, and its vev also contributes to the cosmological constant, which changes when the electroweak symmetry is broken in the early universe. With scalars, we see more than just their ripples.

I mentioned above that temperature is the canonical example to explain what a scalar field is to the non-specialist. However, in relativity we express the temperature in terms of the energy density of the electromagnetic field (temperature is the fourth root of the energy density), and the energy density changes when we move to different frames of reference: it is part of the stress-energy tensor and transforms under Lorentz transformations. Temperature is NOT a scalar in relativity! In cosmology a major number is the temperature of the CMB. How can we speak of this in a way that does not violate the cosmological principle? The truth is that we only measure the temperature of the CMB in the preferred frame of reference that is at rest with respect to the CMB. The CMB defines the preferred frame. Models where we mess with this are called "tilted universes" (see e.g. Liddle and Lyth's book on Inflation).

When we write down fundamental theories we don't want them to care about such arbitrary choices like the maps of the Earth. Scalars fit this bill. If we want our fundamental theories to contain vector fields or other more complicated objects, however, we must "contract" them up to make scalars. Being a scalar is very important and is related to fundamental concepts in physics about symmetries and the action principle.

In group theory, scalars are the singlets of any group - a representation every group shares. They normally don't feel whatever force is associated with the group; for example, neutrinos are singlets under the U(1) of electromagnetism (EM), so they are not charged. But neutrinos are not scalar fields: we name things "scalars" in the sense of "scalar field" by the way the field behaves under the symmetries of General Relativity (GR), the diffeomorphisms. However, because GR sees all energy density, even scalars of the diffeomorphisms source gravity, in a way that scalars of EM do not source EM.

Fundamental scalars are very odd things. By which I mean things that are scalars without us having to "make" them that way, in the way we build scalars in EM to make the Maxwell Lagrangian by contracting the vector fields. The only one that we think exists in the standard model is the Higgs field, and an aversion to fundamental scalars is one philosophical motivation for theories like technicolor that are alternatives to the Higgs. The scalars of technicolor are not fundamental, they are "composite" scalars. Like the pion of the strong force, they are built by combining two fermion fields whose spins add up to zero.

However, one theory that is full of scalar fields is string theory. These scalars describe things to do with the internal space that we lowly 4d mortals can't see (see for example http://arxiv.org/abs/1204.2795). These fields are called moduli. These apparently fundamental scalars in 4d are just vestiges of higher dimensional gravity, and appear to us after the famous phenomenon of Kaluza-Klein compactification. String theory scalars are very important in making string theory reproduce the physics we know, they give us lots to play with in the physics we don't, but they are infamous as the source of the string landscape and the charge of loss of predictivity often brought against the theory. (String theory also furnishes us with many other similar fields called axions, which are "pseudo-scalars" and behave differently under spatial reflections, but they are the subject of a whole other post…)

Another theory that contains lots of scalars is supersymmetry (SUSY). SUSY builds in scalars in what are called chiral multiplets. The chirality (or handedness) of the weak nuclear force is one of the most important facts about our universe and is very constraining for model builders. Incorporating SUSY into a theory that is chiral gives us all the superpartners as scalars or "sparticles" -  and at least as many as there are fermions (quarks and leptons) in the standard model. We know the weak force is chiral: if the world is also supersymmetric, then scalars play a key role.

But why, apart from the possible imminent detection of our first genuine scalar at CERN next week, am I telling you about scalars? I'm a cosmologist, and scalars are everywhere in cosmology theory. Why? Because they are easy to work with. One can often do away with the awkward phase space descriptions one needs to properly describe, for example, photons and neutrinos (the analogues of the Maxwell-Boltzmann distribution in ordinary statistical mechanics). All modified gravity theories are GR plus a scalar or more. (The exceptions are the odd (and oddly named) "Galileons", also the subject of a future post.)

What do we use scalars for in cosmology? Well, scalars can have potentials. Because they are Lorentz invariant, any function of them is, and this is their great utility. They serve us as inflatons, dark energy, and dark matter. However, our greatest theoretical problem, the cosmological constant problem, is intimately related to these very potentials, and the vevs that allow scalar fields to perform the Higgs mechanism. (if you've never read it, I highly recommend Weinberg's 1989 review)
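To see why a single type of field can play so many roles, here are the standard equations for a homogeneous scalar field in an expanding universe (nothing model-specific; H is the Hubble rate):

```latex
\ddot{\phi} + 3H\dot{\phi} + \frac{dV}{d\phi} = 0,
\qquad
\rho_\phi = \tfrac{1}{2}\dot{\phi}^2 + V(\phi),
\qquad
p_\phi = \tfrac{1}{2}\dot{\phi}^2 - V(\phi).
```

When the potential dominates, w = p/rho is close to -1 and the field behaves like an inflaton or dark energy; when the field oscillates rapidly about a quadratic minimum, the time-averaged w is close to zero and it behaves like dark matter, which is how the axion earns its keep.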

Do true scalars exist? Because of the need for a change in, or constant, vacuum energy in any theory of electroweak symmetry breaking (what the Higgs does), inflation, or Dark Energy/modified gravity, one always requires at least an approximate, or composite, scalar degree of freedom: technibaryons or massive gravitons, for example, are described by effective scalar degrees of freedom. One could even argue that string theory, in its true 10 or 11 dimensional form, doesn't have scalars: the effective ones are degrees of freedom of a string. The question of whether we have fundamental scalars is still open, but could come one step closer to being resolved on Wednesday at CERN. Even if particle physicists find the Higgs, knowing whether it is really fundamental will be a whole other game. Although the LHC should be able to distinguish it from many popular technicolor theories, we will probably have to wait longer to find out. (I'm sure Jester will have plenty to say about this)

Finally, even in a strongly coupled theory like technicolor, compositeness is not a well-defined concept, thanks to the hot topic of holography and dualities. In dual theories the "fundamental" fields swap roles, so composites on one side of the duality are fundamental from the other point of view.