- Find the texkey of a paper on Inspire and `\cite` it in the manuscript
- Copy & paste the BibTeX entry into the `.bib` file
- Correct the LaTeX code in the title (often missing the dollar signs or containing characters like “->”)
- After having completed the paper, check whether any of the preprints have been published in the meantime and add the journal reference.

In this list, step 1 is the only one requiring a brain, while steps 2-4 are increasingly annoying. This is why I have written a script that mostly automates these steps, and I want to explain it in this post.

Having completed step 1 above, you can compile your LaTeX document (let’s call it `paper.tex`) and a `paper.aux` file will be generated. This is the case even if you don’t have a bibliography file yet (and the compilation will thus fail). Installing my `inspiretools` script from GitHub, you can now execute the following command:

`auxtobib paper.aux > bibliography.bib`

This command will download all the BibTeX entries from Inspire and save them to the `.bib` file. Step 2 has been automated! When you add citations to the paper, just rerun the command. It will always fetch all the references anew, so if one of the references gets a journal reference added, your bibliography will be up to date. So step 4 is redundant as well!

What about step 3? Well, you could still do it manually, but all changes will be overwritten when you update the bibliography. The best way would be to change it on Inspire itself! And you can help with that. The code contains a second script that you can invoke as

`auxtoxml paper.aux > titles.xml`

This will generate an XML file containing all the titles of the references in your bibliography. Correct all the LaTeX errors there and then send the XML file to feedback@inspirehep.net. The file is in the right format for the Inspire staff to quickly update the information in their database. This way, the change will not only persist when you update your references, but you will also have saved your colleagues some time!

The code uses the `pyinspire` script by Ian Huston (with some modifications by myself), which uses the Inspire API to fetch entries. It is written in Python.

In case you are wondering why I am taking the detour via the `.aux` file rather than directly extracting the references from the `.tex` file: I have found this to be more robust, since it works with many different citation commands like `\cite`, `\nocite`, `\autocites`, and even with custom macros, without the need to use complicated regular expressions.

Note that the current implementation is quite slow, as it fetches each entry separately, which can take some time, especially for long papers. In principle this could be sped up by fetching several entries simultaneously, as sketched below. If you want to improve on this, you are welcome to contribute to the repository.
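For illustration, a parallel fetch could look roughly like this (not code from the repository; `fetch_bibtex` is the hypothetical single-entry helper from the sketch above):

```python
# Sketch only: fetch several BibTeX entries concurrently with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def fetch_all(keys, max_workers=8):
    """Download the BibTeX entries for all texkeys in parallel, preserving order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_bibtex, keys))
```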

The idea of the Higgs boson being a composite particle is a compelling and fascinating solution to the electroweak hierarchy problem (also called the Higgs naturalness problem) and, according to many, is one of the two most attractive solutions to this problem (the other one being supersymmetry). Many brilliant people have contributed to the construction of elaborate models that address a variety of challenges that arise when formulating a realistic theory implementing the composite Higgs idea. Given the plethora of experimental tests of the Standard Model, from precision electroweak measurements to flavour physics, direct searches for the production of heavy particles and precision measurements of the properties of the Higgs boson, it is not easy to determine whether a given model is viable, or what an experimental exclusion (or discovery!) in one observable implies for other observables.

In the past, many studies have either focused on a limited set of experimental tests – e.g. on Higgs physics, flavour physics or direct searches – or have studied the interplay between different tests in a qualitative way, while being as model-independent as possible. While this approach certainly has its advantages, to really study the correlations between different experimental tests of a new physics model (which is the overarching goal of our research group), one needs to select a specific model and perform a numerical analysis of all experimental constraints on its parameters. This is exactly what we have set out to do.

For several reasons (detailed in the paper), this turned out to be quite challenging on a technical level, and it was only thanks to a local computing cluster that we were able to obtain the results we were interested in. In the end, we think the results we got are interesting enough to justify the efforts. Just to mention two of the most exciting results of our analysis:

- Some hints of a resonance at 2 TeV seen by ATLAS and CMS in diboson final states can be perfectly accommodated, while being in agreement with all other experimental constraints.
- Deviations from Standard Model expectations in $B$ physics, in particular in angular observables in $B\to K^*\mu^+\mu^-$ and the branching ratio of $B_s\to\phi\mu^+\mu^-$, can be explained as well. To be honest, this came as a surprise to us! But the most exciting aspect is that it implies the existence of a neutral spin-1 resonance below 1 TeV, which should show up soon in the dijet or $t\bar{t}$ mass distribution at the LHC! And if it doesn’t show up, it’s clear that the models we studied cannot explain these anomalies.

Many more big and small results can be found in the 61 plots and the accompanying text. Of course, having set up the analysis for one particular model (with four different flavour structures), we are now eager to apply this strategy also to other models or scenarios, and we are looking forward to discussing this with the community.

$$\chi^2(\vec C^{\rm NP})=\left[\vec O_{\rm exp}-\vec O_{\rm th}(\vec C^{\rm NP})\right]^T\left[C_{\rm exp}+C_{\rm th}\right]^{-1}\left[\vec O_{\rm exp}-\vec O_{\rm th}(\vec C^{\rm NP})\right]$$

where $\vec C^{\rm NP}$ are the new physics contributions to the Wilson coefficients, $\vec O_{\rm th}$ are the observables, and $C_\text{exp,th}$ the experimental and theoretical covariance matrices, respectively. The latter encode the experimental and theoretical uncertainties as well as their correlations.
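In code, evaluating such a $\chi^2$ for given central values and covariances takes only a few lines. The following is a generic numpy sketch with hypothetical argument names, not our actual implementation:

```python
import numpy as np

def chi2(o_exp, o_th, cov_exp, cov_th):
    """Chi-squared of theory predictions o_th against measurements o_exp,
    with the experimental and theoretical covariance matrices summed."""
    diff = np.asarray(o_exp) - np.asarray(o_th)
    cov = np.asarray(cov_exp) + np.asarray(cov_th)
    return float(diff @ np.linalg.solve(cov, diff))
```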

Now, when we present plots with constraints on Wilson coefficients such as this one, we proceed as follows.

- Make a hypothesis about which two Wilson coefficients receive new physics contributions (assuming all others are SM-like),
- Determine the minimum value of $\chi^2$ under variation of these two coefficients,
- Plot contours of $\Delta\chi^2=2.3$ and $6$, where $\Delta\chi^2$ is the difference of the $\chi^2$ with respect to the minimum value in the previous item.

The numbers are chosen because $F_{\chi^2}(2.3, 2) \approx 0.68$, $F_{\chi^2}(6, 2) \approx 0.95$, where $F_{\chi^2}(x, 2)$ is the cumulative distribution function (CDF) of the $\chi^2$ distribution with 2 degrees of freedom. That is, the regions we plot are the 68% and 95% credibility regions for the Wilson coefficients under the hypothesis that new physics resides only in these two coefficients.
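As an illustration of this procedure, here is a schematic scan over two Wilson coefficients. This is a sketch only: `chi2_of` stands in for the $\chi^2$ of the global fit as a function of the two coefficients (here replaced by a made-up quadratic form), and the grid ranges are placeholders. It also verifies the two threshold values with scipy.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi2 as chi2_dist

# Sanity check of the contour levels for 2 degrees of freedom
print(chi2_dist.cdf(2.3, df=2))  # ~0.68
print(chi2_dist.cdf(6.0, df=2))  # ~0.95

def chi2_of(c9, c10):
    """Placeholder for the global-fit chi2 as a function of two Wilson
    coefficients; a made-up quadratic form for illustration only."""
    return 100.0 + 2.0 * (c9 + 1.1) ** 2 + 3.0 * (c10 - 0.3) ** 2

c9, c10 = np.meshgrid(np.linspace(-3, 1, 200), np.linspace(-2, 2, 200))
chi2_grid = chi2_of(c9, c10)
delta_chi2 = chi2_grid - chi2_grid.min()             # step 2: subtract the minimum

plt.contour(c9, c10, delta_chi2, levels=[2.3, 6.0])  # step 3: 68% and 95% contours
plt.xlabel(r'$C_9^{\rm NP}$')
plt.ylabel(r'$C_{10}^{\rm NP}$')
plt.show()
```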

Now, in the talk mentioned above, an alternative method is proposed, coined **“absolute $\chi^2$ method”**. In this method, one plots contours of the *absolute* $\chi^2$ value, rather than $\Delta\chi^2$, again assuming that new physics affects two Wilson coefficients. Conversely, the $\Delta\chi^2$ is claimed to be “NOT appropriate … to claim no physics”. Several plots are shown to demonstrate that the “absolute” method leads to looser constraints on Wilson coefficients $C_9$ and $C_{10}$.

For a fair comparison, I reproduced the plots of $C_9$ and $C_{10}$ using the numerics of our paper. First, we need to determine the $\chi^2$ values corresponding to the “1 and 2$\sigma$ regions” in the “absolute” method. This can be determined as $F_{\chi^2}^{-1}(\alpha, \nu)$, where $F_{\chi^2}^{-1}(x,\nu)$ is the inverse CDF of the $\chi^2$ distribution with $\nu$ degrees of freedom, $\nu=N-2$ for $N$ observables and 2 Wilson coefficients, and $\alpha=0.68$ or $0.95$.
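Just as a cross-check, these thresholds are easy to reproduce with scipy. Note that in this sketch I use the exact $1\sigma$ and $2\sigma$ Gaussian coverages (0.6827 and 0.9545) rather than rounded values, which for $\nu=86$ gives approximately the 91.7 quoted below:

```python
from scipy.stats import chi2, norm

N = 88       # number of observables in the nominal fit
nu = N - 2   # degrees of freedom after fitting two Wilson coefficients
for n_sigma in (1, 2):
    coverage = norm.cdf(n_sigma) - norm.cdf(-n_sigma)   # 0.6827 and 0.9545
    print(n_sigma, chi2.ppf(coverage, df=nu))           # "absolute" chi2 thresholds
```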

In our nominal fit described in the paper, we have $N=88$, thus for the “$1\sigma$” region, $F_{\chi^2}^{-1}(0.68, 86) = $ **91.7**. The minimum (i.e. best-fit) $\chi^2$ when varying $C_9$ and $C_{10}$ is **102.7**, while the $\chi^2$ value at the SM point is **116.9**. Comparing the two methods:

- Using $\Delta\chi^2$, the best-fit scenario improves over the SM by **14.2**. Assuming everything is normally distributed, this can be roughly translated into a “number of sigmas” by equating the CDF of the $\chi^2$ distribution for $\nu=2$ with that of a standard Gaussian, and the result is $3.6\sigma$.
- Using the “absolute $\chi^2$”, there is a paradox: in the entire plane, **the “$1\sigma$” value cannot be attained**, since $91.7 < 102.7$. The “$2\sigma$” region does contain the best-fit point, but not the SM point.

Graphically:

Left, the $\Delta\chi^2$ regions; right, the “absolute $\chi^2$” regions. Note that the similarity of the “$2\sigma$” regions is a numerical coincidence; the “$1\sigma$” region in the right-hand plot has disappeared completely.

How is this possible? Well, the reason is very simple. Even if the theoretical model describes *nature* perfectly, the *data* have statistical as well as systematic uncertainties, leading to *irreducible constant contributions* to $\chi^2$. For instance, the two measurements of $F_L$ in $B\to K^*\mu^+\mu^-$ by ATLAS and LHCb at low $q^2$ are not compatible with each other within 1 standard deviation, which adds a constant positive contribution to $\chi^2$ when they are treated as independent.

On the other hand, the same talk also mentions that one should remove an observable from the fit if it is “relatively insensitive to the variation of the Wilson coefficients”. Could this help to resolve the paradox? To determine which observables these are, I computed

$$\delta_i = \left[O_i \left( \vec C^{\rm NP} \right) - O_i \left( \vec 0 \right) \right] / \sqrt{\sigma_{{\rm th},i}^2+\sigma_{{\rm exp},i}^2}$$

i.e. the *relative variation of an observable under variation of the Wilson coefficients*, normalized to the combined experimental and theoretical uncertainty. Requiring that $|\delta|>1$ for the benchmark points $C_9^{\rm NP}=-1.5$ and $C_{10}^{\rm NP}=+1.5$, we are left with just $N=37$ observables.
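Schematically, this selection could be implemented as follows (a sketch with hypothetical array names, not the code actually used for the fit):

```python
import numpy as np

def sensitive_indices(obs_np, obs_sm, sigma_th, sigma_exp, threshold=1.0):
    """Indices of observables whose shift between the NP benchmark point and
    the SM prediction exceeds the combined theory + experiment uncertainty."""
    delta = (np.asarray(obs_np) - np.asarray(obs_sm)) / np.sqrt(
        np.asarray(sigma_th) ** 2 + np.asarray(sigma_exp) ** 2)
    return np.where(np.abs(delta) > threshold)[0]
```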

Let’s repeat the previous game: we get $F_{\chi^2}^{-1}(0.68, 35) = 38.4$, $F_{\chi^2}^{-1}(0.95, 35) = 50.3$, $\chi^2_{\rm min}=55.9$, $\chi^2_{\rm SM}=67.9$. Again, **the “$1\sigma$” value cannot be attained**, and this time not even the best-fit point is within the “$2\sigma$” region!

Graphically:

The left plot, using the “traditional” $\Delta\chi^2$ method, is now a bit looser since fewer observables are included. With the “absolute $\chi^2$”, the regions disappear completely.

The problem of the “absolute $\chi^2$” method can be understood with a simple *gedanken* experiment. Let’s assume experiment X measures one of the “sensitive” observables but, due to a measurement error, ends up very far from the “true” value, while the other experiments got it right. In the $\Delta\chi^2$ method, this will have virtually no impact, since it simply shifts the $\chi^2$ by a nearly constant amount. In the absolute $\chi^2$ method, instead, it can lead to a drastic shrinking of the “allowed regions”. And in fact this is what happens with the numerics of our paper, which include a large number of observables.

To summarize, I don’t think the “absolute $\chi^2$” plots can be used to judge the significance of a possible new physics contribution (or of an underestimation of SM uncertainties that mimics new physics).

I would be happy to hear your opinions.

Two years ago the LHCb experiment measured a significant deviation from the Standard Model predictions in one of the angular observables of the $B\to K^*\mu^+\mu^-$ decay (prosaically called $P_5’$). This deviation caused a lot of discussion because it could in principle be a sign of physics beyond the Standard Model, but it has also been speculated that some mundane QCD effect not accounted for in the theoretical predictions for this (and other) observables is responsible for it.

Last year, another tantalizing announcement was made by the same experiment. Apparently, the decay rates of the two modes $B\to K\mu^+\mu^-$ and $B\to Ke^+e^-$ differ by something like 25% (their ratio, called $R_K$, was measured to be around 0.75). This is not possible in the Standard Model where electrons and muons are identical up to their different masses (which play no role in this decay). Interestingly, the two anomalies — dubbed $B\to K^*\mu^+\mu^-$ and $R_K$ anomalies — fit very nicely together if interpreted in terms of physics beyond the Standard Model. However, it was too early to draw a firm conclusion, with the $B\to K^*\mu^+\mu^-$ observables being potentially susceptible to poorly known QCD effects and the $R_K$ observation not being statistically very significant taken on its own.

Today at the conference Rencontres de Moriond in the Italian ski resort of La Thuile in the Aosta valley, one of the most important conferences in high energy physics, Christoph Langenbruch, on behalf of the LHCb collaboration, has presented their updated analysis of $B\to K^*\mu^+\mu^-$ angular observables and has shown that the tension with the Standard Model is still present (see the announcements here and here).

Thanks to the conference organizers as well as the LHCb collaboration, I was given the honour to be one of the theorists to give some initial interpretations of this measurement. Using the analysis developed with Wolfgang Altmannshofer for our recent paper and exploiting the $B\to K^*$ form factors obtained for a project with Roman Zwicky and Aoife Bharucha, in my talk I showed that in a global fit to all available experimental data, a new physics interpretation (in the so-called Wilson coefficient $C_9$, found already after the previous measurement) is preferred over the Standard Model by $3.7\sigma$ and even $4.3\sigma$ if $R_K$ is included.

While this is extremely interesting, unfortunately this is not yet evidence for the presence of “new” physics. It is still possible we are being fooled by an unexpected QCD effect. An interesting check of the QCD vs. new physics hypotheses is to consider the size of the deviation as a function of the invariant mass-squared of the muons, $q^2$. In the following plot, showing the values preferred by the data, a new physics effect would lead to boxes that align horizontally — i.e., no $q^2$ dependence — while a hadronic effect should have a different $q^2$ dependence. Indeed there seems to be an increasing trend when moving from the left towards the $J/\psi$ resonance (indicated by the first vertical gray line). However, at the moment it is fair to say that the situation is not yet conclusive and both hypotheses — new physics or an unexpectedly large hadronic effect — are still valid and both have interesting implications.

In the near future, it will be extremely interesting to see what LHCb has to say about the ratio of $B\to K^*\mu^+\mu^-$ vs. $B\to K^*e^+e^-$ observables. If muons and electrons indeed behave differently, this should have a visible impact there, even using data already taken in 2012 (but not yet analyzed). In the future, more precise measurements of processes like $B_s\to\mu^+\mu^-$ will certainly solve this puzzle.

*[Technical comment: don’t be confused by the three different $3.7\sigma$ numbers here. The first is the significance of the 2013 LHCb anomaly, which used theory predictions that many (me included) considered not conservative enough. The second is the significance of the new tension as obtained by LHCb. This is more conservative because it includes an important source of theory uncertainty (charm loops) not considered in the old analysis. The third is the pull of the $C_9$ new physics solution compared to the SM in our global fit. This uses many more processes and observables, while it does not use one of the bins where the tension observed by LHCb is largest, because we consider this bin to be theoretically unreliable. Using it as well, Joaquim Matias presented a pull of more than $4\sigma$ at the conference. In general, the agreement between the analyses presented by him and by me was very good, which is an important consistency check.]*

Here is a selection of the main points of the paper that you might be interested in.

*If you’re an experimentalist:*

- The central value of the Standard Model prediction for the branching ratio of $B\to K^*\nu\bar\nu$ is now 40% higher – this is good news for Belle-II.
- We show that, even if no new physics is discovered in $b\to s\mu^+\mu^-$ transitions at LHC, $b\to s\nu\bar\nu$ decays are still very interesting to probe new physics.
- Even if there is no new physics, measuring these decays precisely is important to reduce theory uncertainties in other processes.

*If you care about precise Standard Model predictions:*

- Using the information from the lattice and a recent full NLO calculation of electroweak corrections, we obtain relative uncertainties of about 10% on the branching ratios. (Details on the form factors will be discussed in an upcoming paper with Aoife Bharucha and Roman Zwicky.)

*If you are interested in new physics:*

- We study the correlations between $b\to s\mu^+\mu^-$ and $b\to s\nu\bar\nu$ on a model-independent basis. Interestingly, if the current tensions in $b\to s\mu^+\mu^-$ are due to new physics, $b\to s\nu\bar\nu$ could help disentangle what kind of new physics is responsible for them.
- We point out that if there is new physics (e.g. a $Z’$ boson) that only affects tau leptons (and tau neutrinos), $B\to K\nu\bar\nu$ and $B\to K^*\nu\bar\nu$ would be the first place to look for it.

Our new physics analysis is summarized in this plot, showing predictions of various scenarios in the plane of the $B\to K\nu\bar\nu$ and $B\to K^*\nu\bar\nu$ branching ratios, normalized to their SM values. If you are curious what it all means, have a look at the paper!

The lecture is targeted at Master students. The aim is to enable the participants to start their own, independent research in particle physics phenomenology. To this end, we will look at motivations to extend the Standard Model of particle physics, at the most important classes of new physics models, and at the ways these models can be tested in the future. Since the starting point is the Standard Model, a basic knowledge of Quantum Field Theory is required. In terms of previous lectures at TUM,

- “Introduction to Quantum Field Theory” as well as “Quantum Field Theory I” are necessary prerequisites. If you have missed these lectures, you might want to go through the basics of QFT using textbooks.
- “Theoretical Particle Physics” is not strictly necessary, but if you have not attended it, I suggest you read up on the Standard Model, e.g. chapter 20 of Peskin/Schröder, chapters 87-89 of Srednicki (you can also find some excellent PDF lecture notes online).
- “Quantum Field Theory II” is not a necessary prerequisite (but is of course beneficial).

The topics covered in the lecture include

- A brief review of the Standard Model
- Shortcomings of the Standard Model and the need for new physics
- Effective field theories and the hierarchy problem
- Indirect tests of new physics (electroweak precision tests, flavour physics, Higgs physics)
- Supersymmetry and the MSSM
- Composite Higgs models

The focus will always be on the phenomenology, i.e. on the experimentally testable theoretical aspects.

The lecture will be held in English, unless all the participants are fluent in (Swabian) German. At the end of the course, there will be an optional 25-minute oral exam (the exam is required to get the 5 ECTS points).

If you plan to attend this lecture, please sign up for it on TUMonline. This will allow me to more efficiently prepare the lecture and to offer supplementary material on Moodle (but you are welcome to attend even if you didn’t sign up).
