Author: David

Wilson: the EFT toolkit

This is the second post in the series about public codes.

Effective Field Theories (EFTs) are powerful tools to simplify the analysis of physics beyond the Standard Model (SM). At energies well below the scale of the new physics, all the information about new particles is contained in the Wilson coefficients (WCs) of dimension-6 operators. The comparison to experiment – in general, a huge number of precision measurements in processes at various energy scales – can then be performed as a function of these Wilson coefficients. Importantly, this only has to be done once and for all. Testing a new model then amounts to only computing the Wilson coefficients and plugging them into the model-independent phenomenology “machinery”.

Matching, running & translation

Even before computing the actual experimental observables, however, one needs to take care of the matching and running of the WCs within the EFT:

  • Since the energy scale of the measured processes is much lower (by assumption) than the scale of the new particles’ masses, one needs to run the WCs by solving their renormalization group (RG) equations to resum large logarithms of this mass hierarchy.
  • Since the new particles are much heavier (by assumption) than the SM bosons, the appropriate EFT is SMEFT, the SM-gauge-invariant dimension-6 extension of the SM. For low-energy processes (like flavour physics), the appropriate EFT is instead the “weak effective theory” (WET) where the heavy SM particles are integrated out and where the only gauge symmetries are QCD and QED. To convert the WCs from SMEFT to WET, one needs to match the two EFTs.

Moreover, it can be convenient to work in different bases within a given EFT, either because a particular basis is more convenient for a given process, or simply because different codes use different conventions. For this, it is necessary to implement translations between pairs of bases.

Wilson: overview

Wilson is a Python package that performs this matching, running & translation in SMEFT and WET. The representation, input & output of WCs is built on the WCxf format described in the last post. It implements:

  • The one-loop RG evolution in SMEFT for all dimension-6 operators. This is directly derived from the DsixTools Mathematica package.
  • The tree-level matching from SMEFT to WET for all dimension-6 operators. This is based on the results of this paper.
  • The one-loop QCD & QED running in WET for all dimension-6 operators. This is based on the results of this paper.
  • Translations between most of the bases defined in WCxf.

Wilson was described in this paper with Jason Aebischer and Jacky Kumar. You can find the source code repository on Github – pull requests welcome!

Simple example

The easiest way to use the code is through the Wilson class and its match_run method. Say you want to explain a deviation from the SM in $R_K$ through a NP effect in the SMEFT Warsaw-basis operator $[O_{lq}^{(1)}]_{2223}$. Then you could do

from wilson import Wilson

# NP contribution to [O_lq^(1)]_2223 in units of 1/GeV^2, defined at 1 TeV
w = Wilson({'lq1_2223': 1e-9},
           scale=1000,
           eft='SMEFT',
           basis='Warsaw')
# run and match down to the b quark mass in the WET "flavio" basis
wc = w.match_run(scale=4.2, eft='WET', basis='flavio')
print(wc['C9_bsmumu'])

The units of the Wilson coefficients are 1/GeV², the units of the scales are GeV, and the EFT and basis names are defined in WCxf. The names of all the Wilson coefficients in the different bases are listed in PDF files that you can download from the WCxf web site, e.g. the one for the WET flavio basis.

In the above example, the SMEFT RG running from the 1 TeV scale down to the EW scale (including the extraction of the Standard Model parameters, properly taking into account their shifts due to dimension-6 contributions), the matching to WET, the translation to the “flavio” basis, and the QCD & QED running down to the b quark mass are all done automatically.

Configuration

The package provides some configuration options that can be set with the set_option (for a single instance) or set_default_option (valid for all future instances) methods of the Wilson class, for instance:

  • smeft_accuracy can be set to leadinglog (rather than the default integrate) to use the less precise but much faster leading-log approximation to the SMEFT RG running
  • smeft_matchingscale can be used to set the scale where SMEFT is matched to WET (by default, the Z mass)
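
As a minimal sketch of how these options are set (the matching scale value below is purely illustrative):

from wilson import Wilson

# valid for all future instances: use the faster leading-log running
Wilson.set_default_option('smeft_accuracy', 'leadinglog')

# valid for a single instance only: match SMEFT to WET at 120 GeV
w = Wilson({'lq1_2223': 1e-9}, scale=1000, eft='SMEFT', basis='Warsaw')
w.set_option('smeft_matchingscale', 120)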

Advanced use of the results

When a large number of non-zero WCs is returned, it is particularly useful to work with them as a pandas.DataFrame. In the above example, this is achieved simply by accessing wc.df. The data frame contains the real and imaginary parts of the WCs in the columns Re and Im. One can now use all the advanced pandas features such as sorting and filtering. For instance, to get all the WCs with a sizable imaginary part, sorted by real part:

df = wc.df
df = df[abs(df['Im']) > 1e-6]  # keep WCs with a sizable imaginary part
df = df.sort_values('Re')      # sort by real part (returns a new DataFrame)

WCxf: the Wilson coefficient exchange format

Here is my first post in the new series on codes, even though WCxf is more about conventions than about code.

First off, it’s true, WCxf is not a very catchy name. On the other hand, it’s definitely googlable and was still available as an organization name on Github!

So what is it, actually? In short, it’s two things:

  • a way to specify and define unambiguous conventions for operator bases in effective field theories (EFTs),
  • a data format to exchange numerical values of Wilson coefficients of these operators (in one of these conventions) between different programs.

WCxf is supported by ten public codes and was described in detail in a paper. Below I give a brief version of how it works and list some caveats about its usage.

Motivation: exchanging BSM WCs

The original motivation was to have an unambiguous way to exchange values of beyond-the-Standard-Model Wilson coefficients between different public codes, for instance those that compute them in specific new physics models, those that perform RG evolution, and those that compute observables. There is already the FLHA format that partly addresses this problem, but we (initially mostly the developers of flavio and EOS) found it not to entirely suit our needs, e.g. because

  • different public codes used different normalizations (which are not fixed by FLHA), such that it is not actually an unambiguous exchange format in practice,
  • the format does not fix a non-redundant basis of operators, which is not inconsistent, but can lead to ambiguities (the famous symmetry factors of 2) unless each code specifies exactly how redundancies are dealt with,
  • the format is limited to the weak effective theory below the electroweak scale and is not meant for SMEFT, which, however, is becoming more and more important,
  • while the FLHA file format is similar to other *LHA formats used in HEP, it is not a format that can be easily parsed by all programming languages (unlike industry standards like YAML or JSON).

But in fact the new format we came up with is not meant as a replacement for FLHA – FLHA has a much wider scope, also allowing the exchange of SM Wilson coefficient values and even hadronic parameters.

Next, let me describe some of the basic concepts of WCxf.

EFTs

At the most basic level, WCxf allows one to define different EFTs. An EFT is defined by having a set of operators that can optionally be split into sectors, that is, groups of operators that close under renormalization. This can be convenient for codes performing RG evolution or basis translation, as this can be done on a sector-by-sector basis.

A new EFT is defined by submitting an EFT file to a public repository. At the moment, the EFTs defined are: SMEFT (the dimension-6 extension of the SM above the electroweak scale), WET (the weak effective theory below the EW scale), as well as WET-4 and WET-3, corresponding to WET with a reduced number of quark flavours (appropriate for low-energy processes below the b or c quark scale).

Bases

A basis in WCxf is defined as a set of non-redundant (but not necessarily complete) operators for a given EFT. For SMEFT, the most well-known basis is the Warsaw basis. On the WCxf web site, you can access PDF files with lists of all operators in a given basis.

Note that the term basis is used here slightly differently than in parts of the literature: it is not sufficient to choose a non-redundant set of operators; one also needs to define the weak basis for the fields and their relation to the mass basis! For WET, this is not so relevant because one is free to work in the mass basis for all fields at all times (even though it might be more convenient to work in the flavour basis for neutrinos), but for SMEFT it is non-trivial: first, because it is not possible to work in the mass basis for both left-handed up-type and down-type quarks; second, because the RG evolution is flavour dependent, such that definitions involving diagonality of mass matrices are only valid at a single scale.

Wilson Coefficients

Once EFTs and bases have been defined, they can be used to exchange numerical values of BSM Wilson coefficients. WCxf fixes the data format, which is programming-language agnostic, and by default provides two different file formats: JSON and YAML. But programs can also exchange data structures directly, without the detour via a file.
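
As an illustration, a minimal Wilson coefficient file in the YAML flavour could look as follows (the numerical value is made up; coefficients not listed are understood to vanish):

eft: WET
basis: flavio
scale: 4.8
values:
  C9_bsmumu:
    Re: -1.0
    Im: 0.0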

Translators & matchers

Since WCxf allows using different bases for a given EFT, it can be used together with basis translators that translate a set of WCs from one basis to another. Since it also allows using different EFTs, it can likewise be used together with matchers that match WCs from a high-energy EFT (e.g. SMEFT) to a lower-energy one (e.g. WET). Such translators and matchers are provided by the wilson package that I will discuss in the next post.

Caveats

After a little more than a year of using WCxf, a few subtleties and caveats have emerged that are worth stressing.

  • As already mentioned above, for an unambiguous exchange format, in SMEFT the weak basis needs to be fixed. In the WCxf Warsaw basis, the convention is that the running down-type quark and charged lepton mass matrices are diagonal at the scale of the exchange. Since the RG evolution is flavour dependent, this means that a code performing this RG evolution must use a slightly different weak basis for input and output in order to adhere to this convention.
  • The bases must be non-redundant also with respect to flavour indices. This is necessary to ensure an unambiguous and consistent data exchange. However, flavour-redundant bases are actually more convenient in some contexts, e.g. matching or RG evolution, leading to symmetry factors with respect to the flavour non-redundant WCxf conventions. This is discussed in detail in appendix A.2 of this paper.

Code

There is a Python utility called wcxf that allows one to manipulate WCxf files on the command line and can be used as a library for Python packages built on the WCxf format. See its documentation for details. For translation, matching, and RG evolution, the wilson package, which will be discussed in the next post, is a more powerful layer on top of wcxf.
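
For instance, reading a WCxf file from Python might look roughly like this (a sketch assuming a file like the YAML example above; check the wcxf documentation for the actual API):

import wcxf

# parse a WCxf file (YAML or JSON)
with open('example.wcxf.yaml') as f:
    wc = wcxf.WC.load(f)

print(wc.eft, wc.basis, wc.scale)
print(wc.dict)  # coefficient values as a Python dict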

New blog series on public codes

Time to resuscitate the blog section of my web site!

Recently, I’ve been involved in the development of a number of public codes for phenomenology. While they are mostly reasonably documented and described in papers, I realized in conversations with colleagues that there is not much awareness about what these various codes can (and cannot) do. Moreover, there can never be enough concrete examples of how to use these tools.

So I just decided to start a new blog series on public codes, with a rough outline being:

  1. WCxf: the Wilson coefficient exchange format
  2. wilson: matching and running in SMEFT and WET
  3. flavio: flavour and other precision observables in the SM and beyond
  4. smelli: a global SMEFT likelihood
  5. parton: PDFs in Python

and, in case it turns out to be fun, I might continue with a series of posts about smaller tools or usage examples.

If there is something you are particularly interested in, let me know below.

Reducing the hassle with BibTeX

BibTeX is great for generating bibliographies, in particular combined with Inspire, but it also has its annoying aspects. This is a typical workflow to generate references for a paper:

  1. Find the texkey of a paper on Inspire and \cite it in the manuscript
  2. Copy & paste the bibtex entry into the .bib file
  3. Correct the LaTeX code in the title (often missing the dollar signs or containing characters like “->”)
  4. After having completed the paper, check whether any of the preprints have been published in the meantime and add the journal reference.

In this list, step 1 is the only one requiring a brain, while steps 2-4 are increasingly annoying. This is why I have written a script that mostly automates these steps, and I want to explain it in this post.

Spare me the details, tell me how to use it

Having completed step 1 above, you can compile your LaTeX document (let’s call it paper.tex) and a paper.aux file will be generated. This is the case even if you don’t have a bibliography file yet (in which case the compilation will fail). After installing my inspiretools script from GitHub, you can execute the following command:

auxtobib paper.aux > bibliography.bib

This command will download all the BibTeX entries from Inspire and save them to the .bib file. Step 2 has been automated! When you add citations to the paper, just rerun the command. It will always fetch all the references anew, so if one of the references gets a journal reference added, your bibliography will be up to date. So step 4 is redundant as well!

What about step 3? Well, you could still do it manually, but all changes will be overwritten when you update the bibliography. The best way would be to change it on Inspire itself! And you can help with that. The code contains a second script that you can invoke as

auxtoxml paper.aux > titles.xml

This will generate an XML file containing all the titles of the references in your bibliography. Correct all the LaTeX errors there and then send the XML file to feedback@inspirehep.net. The file is in the right format for the Inspire staff to quickly update the information in their database. This way, the change will not only persist when you update your references, but you will also have saved your colleagues some time!

How it works

The code uses the pyinspire script by Ian Huston (with some modifications by myself), which queries the Inspire API to fetch entries. Everything is written in Python.

In case you are wondering why I am taking the detour via the .aux file rather than directly extracting the references from the .tex file: I have found this to be more robust since it works with many different citation commands like \cite, \nocite, \autocites, and even with custom macros without the need to use complicated regular expressions.
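
To illustrate why this is robust: with the standard BibTeX workflow, the .aux file contains one \citation{...} line per cited key, no matter which \cite-like macro produced it. A toy version of the extraction step (not the actual inspiretools code) could look like this:

import re

def citation_keys(auxfile):
    """Collect all citation keys from a LaTeX .aux file."""
    keys = set()
    with open(auxfile) as f:
        for line in f:
            m = re.match(r'\\citation\{(.*)\}', line)
            if m:
                keys.update(k.strip() for k in m.group(1).split(','))
    return sorted(keys)

print(citation_keys('paper.aux'))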

Note that the current implementation is quite slow as it fetches each entry separately, which can take some time especially for long papers. In principle this could be sped up by fetching several entries simultaneously. If you want to improve on this, you are welcome to contribute to the repository.

New paper on composite Higgs

Today, a new paper entitled “Direct and indirect signals of natural composite Higgs models” by Christoph Niehoff, Peter Stangl and myself appeared on the preprint archive. Weighing in at 72 pages, it might be a good read on the beach during your well-deserved summer vacation!? Anyway, here is some information about why we made this analysis and what we found.

The idea of the Higgs boson being a composite particle is a compelling and fascinating solution to the electroweak hierarchy problem (also called the Higgs naturalness problem) and, according to many, is among the two most attractive solutions to this problem (the other one being supersymmetry). Many brilliant people have contributed to the construction of elaborate models that address a variety of challenges that arise when formulating a realistic theory implementing the composite Higgs idea. Given the plethora of experimental tests of the Standard Model, from precision electroweak measurements to flavour physics, direct searches for the production of heavy particles and precision measurements of the properties of the Higgs boson, it is not easy to determine whether a given model is viable, or what an experimental exclusion (or discovery!) in one observable implies for other observables.

In the past, many studies have either focused on a limited set of experimental tests – e.g. on Higgs physics, flavour physics or direct searches – or have studied the interplay between different tests in a qualitative way, while being as model-independent as possible. While this approach certainly has its advantages, to really study the correlations between different experimental tests of a new physics model (which is the overarching goal of our research group), one needs to select a specific model and perform a numerical analysis of all experimental constraints on its parameters. This is exactly what we have set out to do.

For several reasons (detailed in the paper), this turned out to be quite challenging on a technical level, and it was only thanks to a local computing cluster that we were able to obtain the results we were interested in. In the end, we think the results we got are interesting enough to justify the efforts. Just to mention two of the most exciting results of our analysis:

  • Some hints of a resonance at 2 TeV seen by ATLAS and CMS in diboson final states can be perfectly accommodated, while being in agreement with all other experimental constraints.
  • Deviations from Standard Model expectations in $B$ physics, in particular in angular observables in $B\to K^*\mu^+\mu^-$ and the branching ratio of $B_s\to\phi\mu^+\mu^-$, can be explained as well. To be honest, this came as a surprise to us! But the most exciting thing about this is that it implies the existence of a neutral spin-1 resonance below 1 TeV which should show up soon in the dijet or $t\bar{t}$ mass distribution at LHC! And if it doesn’t show up, it’s clear that the models studied by us cannot explain these anomalies.

Many more big and small results can be found in the 61 plots and the accompanying text. Of course, having set up the analysis for one particular model (with four different flavour structures), we are now eager to apply this strategy also to other models or scenarios, and we are looking forward to discussing this with the community.

Relative vs. absolute $\chi^2$

At this week’s workshop on $B$ decays in Edinburgh, which I unfortunately was not able to attend, there have apparently been many interesting talks and lots of fruitful discussion. An interesting point was raised on the slides of the talk by N. Mahmoudi regarding the statistical approach taken in my recent paper with Wolfgang Altmannshofer. It concerns the allowed regions for new physics contributions to the Wilson coefficients as obtained from a global fit to experimental data. In our paper, we used the $\chi^2$ function defined as
$$\chi^2(\vec C^{\rm NP})=\left[\vec O_{\rm exp}-\vec O_{\rm th}(\vec C^{\rm NP})\right]^T\left[C_{\rm exp}+C_{\rm th}\right]^{-1}\left[\vec O_{\rm exp}-\vec O_{\rm th}(\vec C^{\rm NP})\right]$$
where $\vec C^{\rm NP}$ are the new physics contributions to the Wilson coefficients, $\vec O_{\rm exp}$ and $\vec O_{\rm th}$ are the measured values and theory predictions of the observables, and $C_\text{exp,th}$ the experimental and theoretical covariance matrices, respectively. The latter encode the experimental and theoretical uncertainties as well as their correlations.
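
In code, evaluating this $\chi^2$ for a given covariance is a few lines; here is a self-contained toy example with made-up numbers:

import numpy as np

O_exp = np.array([0.10, 0.50])      # measured values
O_th = np.array([0.12, 0.40])       # predictions at some C_NP point
cov = np.array([[0.0200, 0.0010],   # C_exp + C_th
                [0.0010, 0.0300]])

d = O_exp - O_th
chi2 = d @ np.linalg.solve(cov, d)  # avoids an explicit matrix inversion
print(chi2)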

Now, when we present plots with constraints on Wilson coefficients such as this one, we proceed as follows.

The $\Delta\chi^2$ method

  • Make a hypothesis about which two Wilson coefficients receive new physics contributions (assuming all others are SM-like),
  • Determine the minimum value of $\chi^2$ under variation of these two coefficients,
  • Plot contours of $\Delta\chi^2=2.3$ and $6$, where $\Delta\chi^2$ is the difference of the $\chi^2$ with respect to the minimum value in the previous item.

The numbers are chosen because $F_{\chi^2}(2.3, 2) \approx 0.68$, $F_{\chi^2}(6, 2) \approx 0.95$, where  $F_{\chi^2}(x, 2)$ is the cumulative distribution function (CDF) of the $\chi^2$ distribution with 2 degrees of freedom. That is, the regions we plot are the 68% and 95% credibility regions for the Wilson coefficients under the hypothesis that new physics resides only in these two coefficients.
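
These numbers are easy to verify, e.g. with scipy:

from scipy.stats import chi2

print(chi2.cdf(2.3, 2))  # ≈ 0.68
print(chi2.cdf(6.0, 2))  # ≈ 0.95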

The “absolute $\chi^2$” method

Now, in the talk mentioned above, an alternative method is proposed, coined “absolute $\chi^2$ method”. In this method, one plots contours of the absolute $\chi^2$ value, rather than $\Delta\chi^2$, again assuming that new physics affects two Wilson coefficients. Conversely, the $\Delta\chi^2$ is claimed to be “NOT appropriate … to claim no physics”. Several plots are shown to demonstrate that the “absolute” method leads to looser constraints on Wilson coefficients $C_9$ and $C_{10}$.

Comparing the two

For a fair comparison, I reproduced the plots of $C_9$ and $C_{10}$ using the numerics of our paper. First, we need to determine the $\chi^2$ values corresponding to the “1 and 2$\sigma$ regions” in the “absolute” method. These can be determined as $F_{\chi^2}^{-1}(\alpha, \nu)$, where $F_{\chi^2}^{-1}(x,\nu)$ is the inverse CDF of the $\chi^2$ distribution with $\nu$ degrees of freedom, $\nu=N-2$ for $N$ observables and 2 Wilson coefficients, and $\alpha=0.68$ or $0.95$.

In our nominal fit described in the paper, we have $N=88$, thus for the “$1\sigma$” region, $F_{\chi^2}^{-1}(0.68, 86) = 91.7$. The minimum (i.e. best-fit) $\chi^2$ when varying $C_9$ and $C_{10}$ is 102.7, while the $\chi^2$ value at the SM point is 116.9. Comparing the two methods,

  • Using $\Delta\chi^2$, the best-fit scenario improves over the SM by 14.2. Assuming everything is normally distributed, this can be roughly translated to a “number of sigmas” by equating the CDFs of the $\chi^2$ distribution for $\nu=2$ with a standard Gaussian, and the result is $3.6\sigma$.
  • Using the “absolute $\chi^2$”, there is a paradox: In the entire plane, the “$1\sigma$” value cannot be attained since $91.7 < 102.7$. The “$2\sigma$” region does contain the best-fit point, but not the SM point.
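
The “absolute” threshold quoted above can again be cross-checked with scipy:

from scipy.stats import chi2

# 68th percentile of the chi^2 distribution with nu = 88 - 2 = 86
print(chi2.ppf(0.68, 86))  # ≈ 91.7, below the minimum chi^2 of 102.7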

Graphically:

[Figure: deltachi2-vs-abschi2-full]

Left, the $\Delta\chi^2$ regions; right, the “absolute $\chi^2$” regions. Note that the similarity of the “$2\sigma$” regions is a numerical coincidence; the “$1\sigma$” region in the right-hand plot is even completely gone.

How is this possible? Well, the reason is very simple. Even if the theoretical model describes nature perfectly, the data have statistical as well as systematic uncertainties, leading to irreducible constant contributions to $\chi^2$. For instance, the two measurements of $F_L$ in $B\to K^*\mu^+\mu^-$ by ATLAS and LHCb at low $q^2$ are not compatible with each other within 1 standard deviation, which gives a constant positive contribution to $\chi^2$ when they are treated as independent.

Using only “sensitive” observables

On the other hand, in the same talk it is also mentioned that one should remove an observable from the fit if it is “relatively insensitive to the variation of the Wilson coefficients”. Could this help to solve the paradox? To determine which observables these are, I computed
$$\delta_i = \left[O_i \left( \vec C^{\rm NP} \right) - O_i \left( \vec 0 \right) \right] / \sqrt{\sigma_{{\rm th},i}^2+\sigma_{{\rm exp},i}^2}$$
i.e. the shift of an observable under variation of the Wilson coefficients, relative to the combined experimental and theoretical uncertainty. Requiring that $|\delta_i|>1$ for the benchmark point $C_9^{\rm NP}=-1.5$ and $C_{10}^{\rm NP}=+1.5$, we are left with just $N=37$ observables.
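
In code, the sensitivity criterion amounts to something like the following sketch (the numbers for the three toy observables are made up):

import numpy as np

O_np = np.array([0.30, 0.71, 1.20])      # predictions at the NP benchmark
O_sm = np.array([0.32, 0.70, 0.90])      # SM predictions
sig_th = np.array([0.02, 0.05, 0.10])    # theory uncertainties
sig_exp = np.array([0.03, 0.08, 0.12])   # experimental uncertainties

delta = (O_np - O_sm) / np.sqrt(sig_th**2 + sig_exp**2)
print(np.abs(delta) > 1)  # True for the 'sensitive' observables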

Let’s repeat the previous game: we get $F_{\chi^2}^{-1}(0.68, 35) = 38.4$, $F_{\chi^2}^{-1}(0.95, 35) = 50.3$, $\chi^2_{\rm min}=55.9$, $\chi^2_{\rm SM}=67.9$. Again, the “$1\sigma$” value cannot be attained, and this time, not even the best-fit point is within the “$2\sigma$” region!

Graphically:

[Figure: deltachi2-vs-abschi2-sensitive]

The left plot, using the “traditional” $\Delta\chi^2$ method, is now a bit looser since fewer observables are included. With the “absolute $\chi^2$”, the regions disappear completely.

Upshot

The problem of the “absolute $\chi^2$” method can be understood with a simple gedanken experiment. Let’s assume experiment X measures one of the “sensitive” observables but, due to a measurement error, is very far off the “true” value, while the other experiments got it right. In the $\Delta\chi^2$ method, this will have no impact, since it will simply shift the $\chi^2$ by a virtually constant amount. In the “absolute $\chi^2$” method instead, this can lead to a drastic shrinking of the “allowed regions”. And in fact this is what happens with the numerics of our paper, which includes a large number of observables.

To summarize, I don’t think the “absolute $\chi^2$” plots can be used to judge the significance of a possible new physics contribution (or of an underestimation of SM uncertainties that mimics new physics).

I would be happy to hear your opinions.

The $B\to K^*\mu^+\mu^-$ anomaly persists

tl;dr The $B\to K^*\mu^+\mu^-$ anomaly is still there, global fit prefers new physics in $C_9$ over SM by $3.7\sigma$, interpretation as hadronic effect not excluded though.

Two years ago the LHCb experiment measured a significant deviation from the Standard Model predictions in one of the angular observables of the $B\to K^*\mu^+\mu^-$ decay (prosaically called $P_5’$). This deviation caused a lot of discussion because it could in principle be a sign of physics beyond the Standard Model, but it has also been speculated that some mundane QCD effect not accounted for in the theoretical predictions for this (and other) observables is responsible for it.

Last year, another tantalizing announcement was made by the same experiment. Apparently, the decay rates of the two modes $B\to K\mu^+\mu^-$ and $B\to Ke^+e^-$ differ by something like 25% (their ratio, called $R_K$, was measured to be around 0.75). This is not possible in the Standard Model where electrons and muons are identical up to their different masses (which play no role in this decay). Interestingly, the two anomalies — dubbed $B\to K^*\mu^+\mu^-$ and $R_K$ anomalies — fit very nicely together if interpreted in terms of physics beyond the Standard Model. However, it was too early to draw a firm conclusion, with the $B\to K^*\mu^+\mu^-$ observables being potentially susceptible to poorly known QCD effects and the $R_K$ observation not being statistically very significant taken on its own.

Today at the conference Rencontres de Moriond in the Italian ski resort of La Thuile in the Aosta valley, one of the most important conferences in high energy physics, Christoph Langenbruch, on behalf of the LHCb collaboration, has presented their updated analysis of $B\to K^*\mu^+\mu^-$ angular observables and has shown that the tension with the Standard Model is still present (see the announcements here and here).

Thanks to the conference organizers as well as the LHCb collaboration, I was given the honour to be one of the theorists to give some initial interpretations of this measurement. Using the analysis developed with Wolfgang Altmannshofer for our recent paper and exploiting the $B\to K^*$ form factors obtained for a project with Roman Zwicky and Aoife Bharucha, in my talk I showed that in a global fit to all available experimental data, a new physics interpretation (in the so-called Wilson coefficient $C_9$, found already after the previous measurement) is preferred over the Standard Model by $3.7\sigma$ and even $4.3\sigma$ if $R_K$ is included.

While this is extremely interesting, unfortunately this is not yet evidence for the presence of “new” physics. It is still possible we are being fooled by an unexpected QCD effect. An interesting check of the QCD vs. new physics hypotheses is to consider the size of the deviation as a function of the invariant mass-squared of the muons, $q^2$. In the following plot, showing the values preferred by the data, a new physics effect would lead to boxes that align horizontally — i.e., no $q^2$ dependence — while a hadronic effect should have a different $q^2$ dependence. Indeed there seems to be an increasing trend when moving from the left towards the $J/\psi$ resonance (indicated by the first vertical gray line). However, at the moment it is fair to say that the situation is not yet conclusive and both hypotheses — new physics or an unexpectedly large hadronic effect — are still valid and both have interesting implications.

[Figure: WCq2-C9-6]

In the near future, it will be extremely interesting to see what LHCb has to say on the ratio of $B\to K^*\mu^+\mu^-$ vs. $B\to K^*e^+e^-$ observables. If muons and electrons indeed behave differently, this should have a visible impact there, even using data already taken in 2012 (but not yet analyzed). In the future, more precise measurements of processes like $B_s\to\mu^+\mu^-$ will certainly solve this puzzle.

[Technical comment: don’t be confused by the three different $3.7\sigma$ numbers here. The first is the significance of the 2013 LHCb anomaly, which used theory predictions that many (me included) considered not conservative enough. The second is the significance of the new tension as obtained by LHCb. This is more conservative because it includes an important source of theory uncertainty (charm loops) not considered in the old analysis. The third is the pull of the $C_9$ new physics solution compared to the SM in our global fit. This uses many more processes and observables, while it does not use one of the bins where the tension observed by LHCb is largest, because we consider this bin to be theoretically unreliable. Using it as well, Joaquim Matias presented a pull of more than $4\sigma$ at the conference. In general, the agreement between the analyses presented by him and by me was very good, which is an important consistency check.]

New paper on $b\to s\nu\bar\nu$!

Today my collaborators Andrzej Buras, Jennifer Girrbach-Noe, Christoph Niehoff and I released our new paper on the decays $B\to K\nu\bar\nu$ and $B\to K^*\nu\bar\nu$. These two closely related decays are sensitive to physics beyond the Standard Model and will very likely be observed for the first time at the upcoming Belle-II experiment. It was about time to improve on the analysis of five years ago; especially interesting for us was the impact of the recent computation of $B\to K$ and $B\to K^*$ form factors on the lattice as well as the host of new measurements of observables in $b\to s\mu^+\mu^-$ transitions at LHC (remember the $B\to K^*\mu^+\mu^-$ anomaly?).

Here is just a selection of some of the main points of the paper that you might be interested in.

If you’re an experimentalist:

  • The central value of the Standard Model prediction for the branching ratio of $B\to K^*\nu\bar\nu$ is now 40% higher – this is good news for Belle-II.
  • We show that, even if no new physics is discovered in $b\to s\mu^+\mu^-$ transitions at LHC, $b\to s\nu\bar\nu$ decays are still very interesting to probe new physics.
  • Even if there is no new physics, measuring these decays precisely is important to reduce theory uncertainties in other processes.

If you care about precise Standard Model predictions:

  • Using the information from the lattice and a recent full NLO calculation of electroweak corrections, we obtain relative uncertainties of about 10% on the branching ratios. (Details on the form factors will be discussed in an upcoming paper with Aoife Bharucha and Roman Zwicky.)

If you are interested in new physics:

  • We study the correlations between $b\to s\mu^+\mu^-$ and $b\to s\nu\bar\nu$ on a model-independent basis. Interestingly, if the current tensions in $b\to s\mu^+\mu^-$ are due to new physics, $b\to s\nu\bar\nu$ could help disentangle what kind of new physics is responsible for them.
  • We point out that if there is new physics (e.g. a $Z’$ boson) that only affects tau leptons (and tau neutrinos), $B\to K\nu\bar\nu$ and $B\to K^*\nu\bar\nu$ would be the first place to look for it.

Our new physics analysis is summarized in this plot, showing predictions of various scenarios in the plane of the $B\to K\nu\bar\nu$ and $B\to K^*\nu\bar\nu$ branching ratios, normalized to their SM values. If you are curious what it all means, have a look at the paper!

Lecture “Physics beyond the Standard Model” at TUM

In the winter semester 2014/15, I will offer a special lecture on “Physics beyond the Standard Model” at TUM. The lecture will take place in the “Handbibliothek” (room 3343 in the Physics Department) every Wednesday from 10:30-12:00 (note the change of time), starting on October 8.

The lecture is targeted at Master students. The aim is to enable the participants to start their own, independent research in particle physics phenomenology. To this end, we will look at motivations to extend the Standard Model of particle physics, at the most important classes of new physics models, and at the ways these models can be tested in the future. Since the starting point is the Standard Model, a basic knowledge of Quantum Field Theory is required. In terms of previous lectures at TUM,

  • “Introduction to Quantum Field Theory” as well as “Quantum Field Theory I” are necessary prerequisites. If you have missed these lectures, you might want to go through the basics of QFT using textbooks.
  • “Theoretical Particle Physics” is not strictly necessary, but if you have not attended it, I suggest you read up on the Standard Model, e.g. chapter 20 of Peskin/Schroeder or chapters 87-89 of Srednicki (you can also find some excellent PDF lecture notes online).
  • “Quantum Field Theory II” is not a necessary prerequisite (but is of course beneficial).

The topics covered in the lecture include

  • A brief review of the Standard Model
  • Shortcomings of the Standard Model and the need for new physics
  • Effective field theories and the hierarchy problem
  • Indirect tests of new physics (electroweak precision tests, flavour physics, Higgs physics)
  • Supersymmetry and the MSSM
  • Composite Higgs models

The focus will always be on the phenomenology, i.e. on the experimentally testable theoretical aspects.

The lecture will be held in English, unless all the participants are fluent in (Swabian) German. At the end of the course, there will be an optional 25-minute oral exam (the exam is required to get the 5 ECTS points).

If you plan to attend this lecture, please sign up for it on TUMonline. This will allow me to more efficiently prepare the lecture and to offer supplementary material on Moodle (but you are welcome to attend even if you didn’t sign up).