IceCube has just announced the observation of our most energetic neutrino yet. The event was a through-going muon, which means that we saw a piece of the track in our detector, but the track both began and ended outside of the detector volume. So, we cannot measure the total energy of the neutrino. Instead, we measure the specific energy loss (energy loss per unit distance, or dE/dx). From that, we can estimate the muon energy in the detector, and from that, based on an assumption about the neutrino energy spectrum, we can estimate the probability that the neutrino had a given energy. We are still working on estimating the neutrino energy, but the total energy loss visible in the detector was 2.6 +/- 0.3 PeV. This is, of course, a lower limit on the neutrino energy, making this clearly the most energetic neutrino yet observed. Typically, one expects the neutrino energy to be a couple of times higher than the muon energy.
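To illustrate the first step of this chain, here is a minimal sketch of how one might estimate a muon's energy from its measured specific energy loss. At high energies, muon energy loss is commonly parameterized as dE/dx ≈ a + bE, which can be inverted for E. The constants below are illustrative textbook-scale values for ice, not IceCube's actual calibration, and the function names are my own.

```python
# Hedged sketch: invert the standard high-energy muon energy-loss
# parameterization dE/dx ~ a + b*E to estimate the muon energy from a
# measured specific loss. The constants a (ionization) and b (radiative)
# are illustrative, roughly textbook-scale values, NOT IceCube's.

A_IONIZATION = 0.26   # GeV per meter water equivalent (illustrative)
B_RADIATIVE = 3.6e-4  # per meter water equivalent (illustrative)

def muon_energy_from_dedx(dedx, a=A_IONIZATION, b=B_RADIATIVE):
    """Estimate muon energy (GeV) from dE/dx (GeV per m.w.e.)."""
    if dedx <= a:
        # Below the ionization plateau the inversion is not meaningful.
        return 0.0
    return (dedx - a) / b
```

Going from this muon-energy estimate to a neutrino energy then requires folding in an assumed neutrino spectrum, as described above; that step is not shown here.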
The event came from the Northern Sky (coordinates R.A.: 110.34 deg and Decl.: 11.48 deg), and we currently estimate the angular resolution to be 0.27 degrees. Because it was upward-going, we know that it must have been produced by a neutrino, and not by the cosmic-ray muon background.
The event was recorded on June 11, 2014 (see the link for details), just over a year ago. We are clearly getting better at processing and analyzing our data more quickly, but there is still room for improvement.
The event was announced as an "Astronomer's Telegram" (ATel), a brief announcement which can be issued quickly. The main purpose of ATels is to get word out quickly about new transient phenomena (gamma-ray bursts, nearby supernovae, etc.), so that other astronomers can point their telescopes in the right direction while the phenomenon is still going on. In our case, there is no particular reason to expect this to be (or not to be) from a transient phenomenon, and, if it was, it is probably over now. But we are releasing the coordinates now, so that other observatories can check whether they find anything unusual in that direction. The event is relatively near the celestial equator, so it should be visible from most large observatories.
More information later (probably after the Intl. Cosmic Ray Conference, July 30-Aug. 6th).
Note Added (July 30th). Unlike Bert, Ernie and Big Bird, we have not named this event after a Muppet.
Wednesday, July 29, 2015
Wednesday, July 15, 2015
Getting Science Right
Every once in a while, a scientific scandal makes big news. Someone faked an experiment, or grossly misinterpreted their results, or failed to reproduce someone else's important result. Particularly in medicine, this can have big consequences.
Unfortunately, these scandals are often blown out of proportion, with insinuations that many scientists are dishonest. At least partly because of this, scientists are paying increasing attention to irreproducible results. There is a blog, "Retraction Watch," which is devoted entirely to scientific papers that have been formally retracted. Some common problems are plagiarism (including self-plagiarism) and apparently faked results (particularly manipulated images). Honest scientific mistakes (e.g., missed minus signs) also make an appearance, as does occasional subversion of the peer review process. These problems are real, but it is important to keep them in perspective. Retraction Watch typically posts 1-3 retractions per day, out of hundreds of thousands of scientific papers published each year. This is a minuscule percentage. Although Retraction Watch probably doesn't catch every retraction, they do appear to be very efficient at finding them.
Many of these errors are caught rather quickly, by other scientists. Most important retractions occur within a year or two. PubMed, an online library of medical literature run by the National Institutes of Health, recently (2013) opened PubMed Commons, where readers can comment on the scientific literature; suspect images and other visible problems can be (and are) vigorously discussed.
A bigger problem may be papers that are just not reproducible, for reasons that are not clear. This is mostly an issue for biology and medicine, fields that deal with complex systems (large molecules, cells, humans), where it is difficult to control every variable. At least according to some reports, like this article in the New Scientist, this is an epidemic problem, affecting a large fraction of published papers. This track record is a good reason to take the latest medical advice with at least a small grain of salt. However, even here, the scientific record is generally self-correcting, albeit slowly. Science builds on previous results, and you can't build much on a cracked foundation. Darwinian evolution gradually weeds out bad conclusions.
As a more quantitative science, physics suffers less from irreproducibility than biology. It is far easier to quantify the uncertainties in a neutrino energy measurement than in, for example, unknown contaminants in a reagent used in a biology experiment. Over the past decades, physics has also made increasing efforts to eliminate sources of unconscious bias. In many experiments (IceCube included), most analyses are done in a 'blind' manner, whereby the analyst prepares the analysis using simulated data and a small fraction of the real data. Only after the analysis procedure is fixed, and reviewed by the collaboration, is the real data analyzed. This avoids any tendency to zero in on fluctuations in the data (the 'look here' phenomenon). So, when choosing a list of possible neutrino sources to analyze, we won't unconsciously pick ones that correspond to upward fluctuations in the data. As a result of these practices, particle and nuclear physics have a pretty good (but not perfect) record of being able to reproduce previous results.
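The blind-analysis workflow described above can be sketched in a few lines. This is a simplified illustration, not IceCube's actual procedure: the function names, the 10% burn-sample fraction, and the threshold cut are all my own invented placeholders.

```python
import random

# Hedged sketch of a blind analysis: selection cuts are tuned on simulation
# plus a small "burn sample" of real data, then frozen before the bulk of
# the data is examined. All names, fractions, and cuts are illustrative.

def split_burn_sample(events, burn_fraction=0.1, seed=1):
    """Set aside a small fraction of the real data for tuning cuts;
    keep the remainder blind until the analysis procedure is fixed."""
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = list(events)
    rng.shuffle(shuffled)
    n_burn = int(len(shuffled) * burn_fraction)
    return shuffled[:n_burn], shuffled[n_burn:]

def apply_frozen_cut(blind_events, energy_cut):
    """Apply the pre-agreed selection to the blind data -- done only after
    the cut has been fixed and reviewed by the collaboration."""
    return [e for e in blind_events if e >= energy_cut]
```

The key design point is the ordering: the cut passed to `apply_frozen_cut` must be decided while only the burn sample has been seen, so that the final result cannot be tuned, even unconsciously, toward fluctuations in the full dataset.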