Particle and nuclear physicists face a real dilemma. Our "Standard Model" explains most of what we observe at accelerator and non-accelerator experiments, IceCube included. The Standard Model has been around for about 40 years. Its three generations of quarks and leptons, three of the four fundamental forces (all but gravity), and the Higgs boson come together to provide a good description of the processes we observe at the Large Hadron Collider (LHC) and other accelerators, like Brookhaven's Relativistic Heavy Ion Collider, not to mention underground neutrino detectors. The only clear crack in the Standard Model is the fact that neutrinos oscillate between the different flavors, and therefore must have mass. But most of us don't feel that this is a huge crack.
So, we have been looking for holes in the Standard Model for the past 40 years. With the discovery of the Higgs boson in the bag, this search is now the main rationale for the LHC. Each year the four LHC experiments put out hundreds of new results; the search for "new" (beyond the Standard Model) physics is a major focus. Unfortunately, they have not found any clear evidence for new physics.
There are good reasons to believe that there must be physics beyond the Standard Model. The evidence for both dark matter and dark energy is clear and convincing. Many theories model dark matter as a new particle that could very well be discovered at the LHC. Dark energy is even more mysterious. It is beyond the reach of any as-yet proposed laboratory-scale experiment, but it is a clear reminder that the universe still has some deep secrets.
Although it is not our primary focus, IceCube is also searching for new physics, mostly physics involving neutrinos. As part of this search, we continue to study neutrino oscillations (see my previous post here) in more detail. One of the things we are looking for is a new type of neutrino that does not interact at all; these are called 'sterile neutrinos.' If regular neutrinos oscillated into sterile neutrinos, it would look as if those neutrinos simply disappeared. We search for sterile neutrinos by measuring how likely neutrinos produced in cosmic-ray air showers are to appear in IceCube; if sterile neutrinos exist, neutrinos traveling long distances through the Earth might disappear. In a recent study, published in Physical Review Letters (here, and also available here through the Cornell arXiv), we set strict limits on sterile neutrinos. We put limits on the possible existence of sterile neutrinos with certain characteristics; the main characteristics are the mass difference between sterile neutrinos and regular neutrinos, and the mixing angle (the strength of the coupling) between sterile and regular neutrinos.
The IceCube limits are of particular interest because they rule out a region of parameter space (mass difference and mixing angle) that had been suggested by a couple of earlier experiments. These earlier results had attracted great attention, but we now know that they are unlikely to be correct.
So, we need to keep looking to find a different crack in the Standard Model, possibly including sterile neutrinos with different masses and couplings.
Monday, September 26, 2016
Make IceCube Big Again
As a scientific experiment, IceCube is approaching maturity. We have collected 6 years of data with the complete detector. In one way, this is a lot. For most analyses, the statistical precision increases as the square root of the amount of data collected, so making a measurement with half the error requires four times as much data. It will take IceCube another 18 years to halve the statistical errors that are possible with the current data. 18 years is a long time, so for many of our studies, future increases in precision will be relatively slow.
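To make the square-root scaling concrete, here is a minimal Python sketch of the arithmetic; the 6-year livetime is the number quoted above, and everything else follows from the 1/sqrt(livetime) assumption.

    def extra_years_needed(current_years, error_reduction_factor):
        """Additional livetime needed to shrink statistical errors by the given
        factor, assuming the errors scale as 1 / sqrt(livetime)."""
        total_years = current_years * error_reduction_factor ** 2
        return total_years - current_years

    # Halving the errors (a factor of 2) with 6 years of data already in hand:
    print(extra_years_needed(6, 2))  # -> 18.0 additional years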
This is not true for everything. For many analyses, it is a matter of waiting for the right astrophysical event. Assuming that high-energy neutrinos are produced by episodic (non-constant) sources, one nearby gamma-ray burst, or supernova, or whatever is producing astrophysical neutrinos, would be a huge discovery. This is worth waiting for.
We continue to improve our analysis techniques, and we will be able to keep making progress here for some time. And, there are some analyses that are only now becoming possible, either because they require software that is only now becoming available, or because they require a lot of data. So, we are still productively busy.
But, we are also thinking more intensively about follow-on experiments. There are several possibilities on the table.
PINGU would be a dense infill array, with a threshold of a few GeV, able to determine the neutrino mass ordering (which neutrino mass state is the lightest).
Gen2 (above) is a comprehensive IceCube upgrade, probably including PINGU, but focused on an array 10 times larger than IceCube. It would have a similar number of strings to IceCube, but would be built with more sensitive optical sensors. Because the strings would be more widely spaced than in IceCube, it would have a higher energy threshold, well matched to studies of astrophysical neutrinos. We (both the collaboration and the broader neutrino astronomy community) think, but cannot completely demonstrate, that Gen2 will be able to find the sources of our cosmic neutrinos.
Gen2 will likely also include a large (10 square kilometer) surface air-shower array. One main purpose of the array will be to reject downward-going atmospheric neutrinos, improving our sensitivity to astrophysical sources which are above the horizon; the center of our galaxy is of prime interest.
There are several efforts to build a large radio-detection array, either as part of Gen2 or as a stand-alone project. Here, the main possibilities are ARIANNA, which I have discussed multiple times before, and ARA, a radio-detection project at the South Pole.
In Europe, there is also a large effort to build an optical detector array, KM3NeT, in the Mediterranean Sea. KM3NeT will eventually include a large (~5 cubic kilometer?) astrophysical array and a smaller array, ORCA, which will have physics goals similar to PINGU's. KM3NeT is starting to deploy test strings now, and ORCA might be complete in the next ~3 years. Construction of the astrophysical array is also starting, although the 5 km^3 array will not be complete until the mid-2020s.
On the U.S. side, these projects are perfectly aligned with National Science Foundation priorities. NSF director France Cordova recently unveiled a 9-point R & D agenda; one of the 6 science themes was "multimessenger astronomy." Unfortunately, despite this, these U.S. projects seem stalled, due to lack of funding now and in the near-term future. From my perspective, this is very unfortunate; if an excellent science case and a good match to the funding agency director's priorities aren't enough to move a project forward, then what is needed? Although the full Gen2 would not be inexpensive (comparable in cost to IceCube), one could build either of the radio-detection arrays or a moderate-sized surface array for under $50 million - not much by current particle/nuclear physics standards.
Some of the ideas presented here also appeared in a comment I wrote for Nature, "Invest in Neutrino Astronomy," in the May 25 issue.
Tuesday, August 9, 2016
Nailing down the astrophysical neutrinos - a new analysis
Previously, the statistically strongest IceCube astrophysical neutrino analysis used four years of contained-event data. The significance of the result (compared to the null hypothesis of no astrophysical neutrinos) was 6.5 sigma (standard deviations), which is very strong, and well above the 5-sigma threshold widely used in particle physics to claim a discovery. Based on raw numbers, the probability of getting a 5 sigma result by chance is tiny - about 1 in 3.5 million. But, there are two things to keep in mind. First, with modern computers, it is easy to do a lot of experiments by changing parameters. One 'good' example is the number of different places in the sky we could look for a signal; this large number of trials must be accounted for when we search for point sources. A more problematic example is changing analysis parameters to try to make the signal larger. We try very hard to avoid this - it is one reason that we use blind analyses where possible - but it can sometimes be hard to avoid unconscious bias. I had previously discussed an earlier version of this analysis - almost identical, but with only 3 years of data.
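For readers who like numbers, here is a short Python sketch (using scipy) of the two statements above: the one-sided p-value behind "5 sigma," and how a trials (look-elsewhere) correction inflates that probability. The number of sky positions is a purely hypothetical stand-in, not a value from any of our analyses.

    from scipy.stats import norm

    # One-sided p-value for a 5 sigma excess: about 1 in 3.5 million.
    p_5sigma = norm.sf(5.0)
    print(f"5 sigma p-value: {p_5sigma:.3g} (about 1 in {1 / p_5sigma:,.0f})")

    # Trials correction: if we look at N independent places in the sky, the
    # chance that at least one of them fluctuates this high is much larger.
    n_trials = 10_000  # hypothetical number of independent sky positions
    p_post_trials = 1.0 - (1.0 - p_5sigma) ** n_trials
    print(f"Post-trials probability for {n_trials} looks: {p_post_trials:.3g}")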
The second thing to remember is that this was a single result, based on a single analysis. We were very, very careful before making the contained-event analysis public, but still, discoveries need confirmation. Unfortunately, IceCube is the only experiment large enough to study these neutrinos. So, we developed a complementary analysis that studies through-going muons from neutrino interactions outside the detector. The original version (alternate link to freely available version) of this analysis used 2 years of data and found an excess, consistent with the contained-event analysis, with a statistical significance of 3.7 sigma.
Now, we have released a new analysis, using 6 years of data. It finds an astrophysical flux with a significance of 5.6 sigma - enough for a discovery on its own. The spectrum is shown at the top of this post. This data also allows us to say more about the characteristics of the astrophysical neutrinos. The measured spectral index (the gamma in dN/dE_nu = A * (E_nu/1 TeV)^-gamma) is 2.13 +/- 0.13. This is in some tension with the contained-event analysis, which finds gamma much closer to 2.5. The tension could be from statistical fluctuations (it is about a 2 sigma difference, so not too improbable), or it could have something to do with the different event samples. This plot shows the tension, with the enclosed regions showing the range of astrophysical neutrino spectral index (gamma, x axis) and the corresponding signal strength (flux, y axis). The solid red contour is from the current analysis, while the blue contour shows the combined result from previous studies. If the two measurements were in good agreement, the contours should overlap. But, they don't; besides statistics, there are several possible explanations.
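As a rough illustration of what "about a 2 sigma difference" means, here is a back-of-the-envelope Python comparison of the two spectral indices. The 2.13 +/- 0.13 comes from the through-going analysis described above; the uncertainty I attach to the contained-event value of ~2.5 is an assumption for illustration only, not the published error.

    import math

    def tension_sigma(g1, err1, g2, err2):
        """Naive significance (in standard deviations) of the difference
        between two independent measurements."""
        return abs(g1 - g2) / math.sqrt(err1 ** 2 + err2 ** 2)

    gamma_through, err_through = 2.13, 0.13    # through-going muon analysis
    gamma_contained, err_contained = 2.5, 0.1  # ~2.5; uncertainty assumed here
    print(f"{tension_sigma(gamma_through, err_through, gamma_contained, err_contained):.1f} sigma")

This ignores correlations and the non-Gaussian shape of the contours, but it reproduces the rough size of the tension.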
The through-going neutrino analysis samples, on average, more energetic neutrinos than the contained-event study, so one simple explanation might be that a single power-law energy spectrum, dN/dE_nu = A * (E_nu/1 TeV)^-gamma, is too simple a model; the spectral index gamma might change with energy, and there is no reason to expect a single power law. Alternately, there could be some difference between the muon-neutrino sample (through-going events) and the contained-event sample, which includes showers from a mixture of all three flavors. The latter is not expected; statistical fluctuations or a more complex energy spectrum seem like the most likely explanations.
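To see why the sampled energy range matters, here is a small sketch comparing the two single power laws; the normalization is arbitrary, chosen only so that the two spectra agree at 100 TeV.

    def power_law_flux(energy_tev, gamma, norm=1.0, pivot_tev=100.0):
        """dN/dE proportional to (E / pivot)**(-gamma), in arbitrary units."""
        return norm * (energy_tev / pivot_tev) ** (-gamma)

    # Force the two spectra to agree at the 100 TeV pivot, then compare elsewhere.
    for e_tev in (10, 100, 1000):
        soft = power_law_flux(e_tev, 2.5)    # closer to the contained-event fit
        hard = power_law_flux(e_tev, 2.13)   # the through-going fit
        print(f"{e_tev:>5} TeV: soft/hard flux ratio = {soft / hard:.2f}")

A softer spectrum predicts relatively more flux at low energies and less at high energies, which is qualitatively the direction of the difference between the two samples.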
The fact that two different analyses get the same answer is very encouraging. There is, by both design and result, zero overlap between the two event samples. Further, the systematic uncertainties for the two analyses are very different, so the analyses are almost completely independent. So, for anyone who was waiting for this signal to go away, that looks increasingly unlikely.
I should mention that this is the analysis that first found the 2.2 PeV neutrino that I have previously discussed here and here. So, if you were waiting for a more detailed publication, this new paper is it.
Monday, July 25, 2016
The highest energy neutrinos
It has been a while since I last discussed the so-called 'GZK' neutrinos, neutrinos which are produced when ultra-high energy cosmic rays interact with cosmic microwave background (CMB) photons. As you may recall, CMB photons were produced in the big bang; they have cooled off during the roughly 14 billion years since then. They are now mostly at microwave frequencies, with a spectrum that peaks near 160 GHz; the temperature is 2.725 kelvin (i.e. above absolute zero), corresponding to a typical photon energy of about 0.00024 electron volts. Although this is not much energy, it can be enough to excite an ultra-high energy proton into a state called the Delta-plus (basically an excited proton). When the Delta-plus decays, it produces a proton and a neutral pion, or a neutron and a positively charged pion. When the positively charged pion decays, it produces a neutrino and a muon; when the muon decays, it produces two more neutrinos and a positron.
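For readers who want to check the numbers, here is a rough Python sketch of the quantities in this paragraph: the typical CMB photon energy (kT) and the proton energy needed to excite the Delta-plus on such a photon in a head-on collision. This is textbook kinematics, not IceCube analysis code, and the exact cutoff depends on the photon spectrum and collision angles.

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV per kelvin
    M_PROTON_EV = 938.272e6     # proton mass, eV (c = 1)
    M_DELTA_EV = 1232.0e6       # Delta(1232) mass, eV (c = 1)

    t_cmb = 2.725                          # kelvin
    e_photon = K_BOLTZMANN_EV * t_cmb      # ~2.3e-4 eV, a typical CMB photon
    print(f"kT of the CMB: {e_photon:.2e} eV")

    # Threshold proton energy for p + gamma -> Delta+ in a head-on collision:
    #   E_p ~ (m_Delta^2 - m_p^2) / (4 * E_gamma)
    e_threshold = (M_DELTA_EV ** 2 - M_PROTON_EV ** 2) / (4.0 * e_photon)
    print(f"Threshold proton energy: ~{e_threshold:.1e} eV")
    # Photons in the high-energy tail of the blackbody spectrum lower the
    # effective cutoff to a few times 10^19 eV.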
We know that ultra-high energy cosmic rays exist, and we know that CMB photons exist, so these are often considered to be a 'guaranteed' source of neutrinos. These neutrinos are the main goal of radio-detection experiments like ARIANNA, ARA and, of course, ANITA. However, there are a few caveats. If the highest energy cosmic rays are mostly iron, rather than protons, then the flux of these GZK neutrinos will be drastically reduced, below the point where these experiments can see them. Also, GZK neutrinos have been produced continuously since the early universe (and are almost never absorbed), so the number of GZK neutrinos existing today depends on how many ultra-energetic cosmic rays there were in the early universe; this is a much smaller uncertainty than the one due to the proton vs. iron (or something in between) question.
Until recently, all searches have been negative. The one possible exception is an anomalous event observed by the ANITA experiment, the fourth ('anomalous') event in their recent paper. I have previously discussed ANITA; this event emerged from a reanalysis of data from their first flight. The ANITA Collaboration describes the event as consistent with a primary particle that emerged from the Earth; this might be a neutrino or a long-lived tau lepton. The tau lepton could have been produced in an air shower and traveled through the Earth before emerging to produce this shower. The event could also be a mis-reconstructed downward-going shower. Although the event is very interesting, it shows the difficulty of trying to draw conclusions based on one event. It is also clear that the ANITA collaboration feels this difficulty; the event is one of four presented in a paper on downward-going cosmic rays, rather than highlighted on its own.
Of course, IceCube is also looking for GZK neutrinos. Our latest search, based on 6 years of data, has recently appeared here. To cut to the chase, we didn't find any GZK neutrinos; the analysis did find two lower energy (by GZK standards) events, including the previously announced energy champ. From this non-detection, we set limits that are finally reaching the 'interesting' region. The plot below shows our upper limits as a function of energy, compared with several models.
One needs to be careful in interpreting the curves on the figure; understanding how they were made is key to understanding the implications. The limit curve is a 'quasi-differential' limit, in decades of energy. Basically, this means that, at each energy, the solid-line limit is produced by assuming a neutrino flux with an E^-1 energy spectrum within that decade of energy; the E^-1 is chosen to roughly approximate the GZK neutrino flux. More detailed analyses, also given in the paper, use the entire predicted spectrum to calculate 'Model Rejection Factors' that rule out (or not) the different calculations of GZK neutrinos. We are now starting to rule out some models.
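As a rough sketch of the Model Rejection Factor idea (this is not the actual analysis code, and all numbers are placeholders): the upper limit on the number of signal events is compared to the number of events a model predicts, and a ratio below one means the model is disfavored at that confidence level.

    def model_rejection_factor(mu_limit, n_predicted):
        """MRF = (upper limit on signal events) / (events predicted by the model).
        MRF < 1 means the model predicts more events than the limit allows."""
        return mu_limit / n_predicted

    # Placeholder numbers, for illustration only:
    mu_90 = 2.44   # e.g. the 90% C.L. upper limit for zero observed events
    for name, n_model in [("optimistic GZK model", 4.0), ("pessimistic GZK model", 0.5)]:
        mrf = model_rejection_factor(mu_90, n_model)
        verdict = "disfavored" if mrf < 1.0 else "not excluded"
        print(f"{name}: MRF = {mrf:.2f} -> {verdict} at 90% C.L.")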
Wednesday, July 20, 2016
Funding
One of the most painful parts of being a scientist is searching for money. Funding is a necessary evil, but finding it is getting harder and harder. More and more scientists are chasing a relatively constant pool of money, so the success rates for proposals are dropping.
This is probably most pronounced for the National Institutes of Health, which funds most U.S. health-care-related research. A blog post by Dr. Michael Lauer, NIH's Deputy Director for Extramural Research, gives some recent, and very sobering, numbers. For Fiscal Year 2015, the most recent available, the success rate for new proposals is down to 16.3%, or about one proposal in six. For renewals, the success rate is 37.3%, or a bit better than one in three. The new-proposal rate has declined precipitously over the past decade or so.
At the National Science Foundation, which funds IceCube and much other basic research in the U.S., the success rate per principal investigator (not per proposal; some PIs submit multiple proposals) is, per a blog post by Jeremy Fox, 35%, comparable to the NIH renewal rate. These rates are not healthy.
On average, scientists have to write three proposals for each one funded. That's a lot of writing, not to mention work for the reviewers and program managers. Furthermore, it can't be any fun being a program officer at a funding agency and having to tell so many people 'No.'
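The "three proposals per award" figure is just the reciprocal of the success rate; a trivial sketch, using the percentages quoted above:

    def proposals_per_award(success_rate):
        """Expected proposals written per funded award, assuming each
        submission succeeds independently with the given rate."""
        return 1.0 / success_rate

    for label, rate in [("NIH new proposals", 0.163),
                        ("NIH renewals", 0.373),
                        ("NSF, per PI", 0.35)]:
        print(f"{label}: ~{proposals_per_award(rate):.1f} proposals per award")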
The low renewal rate makes it very hard to do long-term planning; this may put an unwanted emphasis on short-term results. It also makes it much harder to hire people. Particularly for long-term experiments like IceCube, continuity is important, and it would make no sense to fund one group one year and a different group the next year. Fortunately, most funding agencies do recognize this, and renewals seem easier than new proposals; at the least, the success rates are higher. On the other hand, it is very difficult for young faculty trying to break into the system.
This discourages "the best and the brightest" (whoever they are) from going into academia. When I was in grad school, academia was the preferred career. We all knew it would be tough, but it seemed viable. Now, many of the best students prefer jobs in Silicon Valley, or the financial industry, or working with "big data." There are multiple reasons, but funding expectations are high on the list. Graduating students should certainly pursue their dreams, but, long term, this is not good for the health of U.S. (or international) science. Beyond this, discouragement trickles down, and the funding situation can discourage bright undergraduates from further science education, steering them toward something with better returns, like finance, law or engineering.
Normally, this would be the point where I would provide some snappy suggestions about how to solve this problem. I don't have any brilliant ideas, but I will share a few thoughts:
Contrary to what some science critics say (sometimes loudly), peer review for proposals is generally pretty successful, and I don't see a lot of wasted money in the system.
It is not easy to see how one could ask the scientists who have grants to get by with significantly less money. Most of the money goes toward graduate students and postdocs. Less money means less science, and, frequently, group sizes are already smaller than is optimal (by optimal, I mean most efficient). There may be some small gains in getting faculty to work together, using a single grant, but not enough to make major improvements, and this would also reduce the breadth of coverage at each university.
One could also try to shift some funding from large facilities (particle accelerators, neutron sources, etc.) toward smaller grants. This makes some sense, in that there is no point in building a large facility if there is no money to operate it, but the large facilities are there for very good reasons. To give one example, many areas of science rely on ultra-intense X-ray beams to image all sorts of materials at the atomic scale; producing sufficiently intense X-ray beams requires >$100M facilities. That said, a case could be made that some areas of science would benefit from a little shifting.
Of course, the best solution to the acceptance-rate problem is additional funding. Unfortunately, this solution can only come from Congress. Right now, given the current political deadlocks, significant additional funding seems unlikely. But, it can't hurt to contact your senators and representatives.
From the standpoint of individual scientists, the only even partially bright spot is that funding pressure may be reaching the point where it is self-limiting. Success rates are so low that universities are forced to acknowledge this when assessing faculty. With less money flowing in, they may be more reluctant to hire new science faculty, and will certainly be forced to limit the number of graduate students to match the available funding. Long-term, this does not seem healthy for the U.S. STEM (science, technology, engineering, math) enterprise, but it is a natural reaction.
I wish this were more upbeat, but it's not. Next time, I'll focus on something cheerful, like science.
Tuesday, April 26, 2016
On authorship...life in a mega-collaboration
Scientific collaborations have rapidly increased in size over the past 40 years, with particle and nuclear physics leading the way. Groups of half a dozen researchers have given way to mega-collaborations comprising thousands of people - 2,800 authors in the case of the ATLAS experiment at the LHC.
The enormous growth in size has led to inevitable changes. Large organizations require structure; in the case of scientific collaborations, this includes written rules (governance documents), elections for leaders, usually called 'spokespersons' (to emphasize that their job is to represent the collaboration), committees, and more committees.
These developments are driven, for the most part, by the demands of the science, which require large complex detectors, and ever more detailed analyses. In many areas, large groups are required to make progress. These developments raise a number of sociological questions. To me, one of the more interesting questions is what it means to be an author in a large collaboration.
Recently, I wrote a guest post for Retraction Watch, entitled "When it takes a village to write a paper, what does it mean to be an author?", that goes into this question in more detail. I may be biased, but I think it is worth reading.
If you haven't heard of it, Retraction Watch is a blog that covers retractions in the scientific literature, whether due to mistake or misconduct. They also have interesting articles (and links) to pieces that discuss authorship and other ethical issues, including on the 'business' of science.
Monday, February 1, 2016
ARIANNA: the 2015 field season
The 2015 ARIANNA season is over. Actually, it has been over for a while, but I have been remiss in posting.
Instead of me writing something based on secondhand discussion, I am just going to point you to the blog maintained by Anna Nelles, who was one of the three people who deployed this season:
http://arianna.ps.uci.edu/blog