Itty bitty engine puts a single atom to work

A team of scientists has built a heat engine out of a single atom.

Heat engines, like steam engines or internal combustion engines, convert heat into motion. To create the minuscule engine, physicist Johannes Roßnagel of the University of Mainz and colleagues heated and cooled a calcium ion with an electric field and a laser, causing it to move and do a tiny amount of work. They report their results in the April 15 Science.

Read more about this and other scaled-down engines in “Ultrasmall engines bend second law of thermodynamics.”

Hubble telescope snaps stunning pic for its 26th birthday

Time to add another gorgeous space photo to the Hubble Space Telescope’s list of greatest hits. For the orbiting observatory’s 26th anniversary in space, astronomers snapped a picture of the Bubble Nebula, a seven-light-year-wide pocket of gas being blown away by a blazing massive star about 7,100 light-years away in the constellation Cassiopeia.

The star responsible for the bubble is young, just 4 million years old, and about 45 times as massive as our sun. It is so hot and bright that it launches its own gas into space at more than 6 million kilometers per hour. The vibrant colors in the nebula represent the elements oxygen, hydrogen and nitrogen.

Hubble launched April 24, 1990, aboard the space shuttle Discovery. A series of visits by astronauts has kept the aging telescope’s suite of cameras, spectrometers and ancillary equipment up-to-date and operating well into its third decade.

Leptospirosis bacterium still haunts swimming holes

Danger in ‘swimming hole’  — As warm weather approaches, the old swimming hole will again beckon boys and girls in farm areas. But disease germs lurk in waters exposed to cattle and other animals…. One “swimming hole disease” called leptospirosis is caused by water-borne Leptospira pomona…. Warm summer temperatures are ideal for maintaining leptospiral organisms in water, and heavy rains may transport the organisms downstream.  — Science News, May 14, 1966

UPDATE
An estimated 100 to 200 people get leptospirosis annually in the United States. The disease, which can cause fever, headache and vomiting, is most common in tropical and rural regions worldwide. Summertime swimming is also haunted by another single-celled terror that thrives in warm freshwater: the so-called “brain-eating” amoeba, Naegleria fowleri. The amoeba caused 35 reported infections in the United States from 2005 to 2014. If N. fowleri enters a person’s nose, it can travel to the brain, where swelling triggered by the immune system kills most victims (SN: 8/22/15, p. 14).

Despite misuses, statistics still has solid foundation

In many realms of science today, “statistical wisdom” seems to be in short supply. Misuse of statistics in scientific research has contributed substantially to the widespread “reproducibility crisis” afflicting many fields (SN: 4/2/16, p. 8; SN: 1/24/15, p. 20). Recently the American Statistical Association produced a list of principles warning against multiple misbeliefs about drawing conclusions from statistical tests. Statistician Stephen Stigler has now issued a reminder that there is some wisdom in the science of statistics. He identifies seven “pillars” that collectively provide a foundation for understanding the scope and depth of statistical reasoning.

Stigler’s pillars include methods for measuring or representing aggregation (measures, such as averages, that represent a collection of data); information (quantifying it and assessing how it changes); likelihood (coping with probabilities); intercomparison (involving measures of variation within datasets); regression (analyzing data to draw inferences); design (of experiments, emphasizing randomization); and residual (identifying the unexplained “leftovers” and comparing scientific models).

His approach is to identify the historical origins of these seven key pillars, providing some idea of what they are and how they can assist in making sense of numerical data. His explanations are engaging but not thorough (it’s not a textbook), and while mostly accessible, his writing often assumes a nontrivial level of mathematical knowledge. You’ll have to cope with expressions such as L(θ) = L(θ|X) and Cov(L,W) = E{Cov(L,W|S)} + Cov(E{L|S}, E{W|S}) every now and then.

While Stigler defends statistics from some of the criticisms against it — noting, for instance, that specific misuses should not be grounds for condemning the generic enterprise — he acknowledges that some issues are still a source of concern, especially in the new era of “big data” (SN: 2/7/15, p. 22). Using common statistical tests when many comparisons are made at once, or applying tests at multiple stages of an experimental process, introduces problems that the seven pillars do not accommodate. Stigler notes that there is room, therefore, for an eighth pillar. “The pillar may well exist,” he writes, “but no overall structure has yet attracted the general assent needed for recognition.”

Antibiotics in cattle leave their mark in dung

Overuse of antibiotics in livestock can spread drug-resistant microbes — via farm workers or even breezy weather. But there’s more than one reason to stay upwind of drugged cattle.

Dung beetles (Aphodius fossor) make their living on cattle dung pats, which are rich in nutritious microbes. To investigate the effects of cattle antibiotics on this smaller scale, Tobin Hammer of the University of Colorado at Boulder and his colleagues studied the tiny communities around tetracycline-dosed and undosed cows. Microbes in dung from treated cows were less diverse than those in untreated cows’ dung and were dominated by a genus with documented resistance, the researchers report May 25 in the Proceedings of the Royal Society B.

Beetles typically reduce methane gas wafting off dung, but pats from treated cows showed a 1.8-fold increase in methane output. How this might figure into greater cattle methane production remains to be studied, but Hammer and company speculate that the antibiotics may wipe out bacteria that compete with the dung’s methane-producing microbes.

Tiny plastics cause big problems for perch, lab study finds

Editor’s note: On May 3, 2017, Science retracted the study described in this article. Based on findings from a review board at Uppsala University, Science cites three reasons for pulling the study: The experiments lacked ethical approval, the original data do not appear in the paper and questions emerged about experimental methods.

Microscopic pieces of plastic rule Earth’s oceans, with numbers in the billions — possibly trillions. These tiny plastic rafts provide homes to microbes (SN: 2/20/16, p. 20), but their ecological effects remain murky.

In a lab at Uppsala University in Sweden, researchers exposed European perch (Perca fluviatilis) larvae to microplastic particles of polystyrene to see how they might react. The exposure triggered a slew of potentially negative effects: Fewer eggs hatched, growth rates dropped and feeding habits changed, with some larvae preferring polystyrene to more nutritious food options. Exposed larvae were also sluggish in responding to scents that signal approaching predators in the wild, the team reports in the June 3 Science.

European perch, a keystone species in the Baltic Sea, have recently experienced a population dive. Because the drop has been linked to juvenile feeding issues, the researchers argue that microplastics could be to blame.

Limestone world gobbled by planet-eating white dwarf

SAN DIEGO — A remote planet — the first with hints of a limestone shell — has been shredded by its dead sun, a new study suggests.

A generous heaping of carbon is raining down on a white dwarf, the exposed core of a dead star, astrophysicist Carl Melis of the University of California, San Diego, said June 13 at a meeting of the American Astronomical Society. The carbon — along with a dash of other elements such as calcium, silicon and iron — is probably all that remains of a rocky planet, torn apart by its dying sun’s gravity. Many other white dwarfs show similar signs of planetary cannibalism (SN Online: 10/21/15), but none are as flooded with carbon atoms as this one.

A planet slathered in calcium carbonate, a mineral found in limestone, could explain the shower of carbon as well as the relative amounts of other elements, said Melis. He and Patrick Dufour, an astrophysicist at the University of Montreal, estimate that calcium carbonate could have made up as much as 9 percent of the doomed world’s mass.

While a limestone-encrusted world is a first, it’s not shocking, says Melis. The recipe for calcium carbonate is just carbon and calcium in the presence of water. “If you have those conditions, it’s going to form,” he says.

“The real interesting thing is the carbon,” Melis adds. Carbon needs to be frozen — most likely as carbon dioxide — to be incorporated into a forming planet. But CO2 freezes far from a star, beyond where researchers suspect rocky planets are assembled. A limestone planet could have formed in an unexpected place and later wandered in while somehow retaining its carbon stores in the warm environs closer to its sun. Or the carbon might have been delivered to the world after it formed. But, Melis says, it’s not clear how either would happen.

Courts’ use of statistics should be put on trial

The Rev. Thomas Bayes was, as the honorific the Rev. suggests, a clergyman. Too bad he wasn’t a lawyer. Maybe if he had been, lawyers today wouldn’t be so reluctant to enlist his mathematical insights in the pursuit of justice.

In many sorts of court cases, from whether talcum powder causes ovarian cancer to The People v. O.J. Simpson, statistics play (or ought to play) a vital role in evaluating the evidence. Sometimes the evidence itself is statistical, as with the odds of a DNA match or the strength of a scientific research finding. Even more often the key question is how evidence should be added up to assess the probability of guilt. In either circumstance, the statistical methods devised by Bayes are often the only reasonable way of drawing an intelligent conclusion.

Yet the courts today seem suspicious of statistics of any sort, and not without reason. In several famous cases, flawed statistical reasoning has sent innocent people to prison. But in most such instances the statistics applied in court have been primarily the standard type that scientists use to test hypotheses (producing numbers for gauging “statistical significance”). These are the same approaches that have been so widely criticized for rendering many scientific results irreproducible. Many experts believe Bayesian statistics, the legacy of a paper by Bayes published posthumously in 1763, offers a better option.

“The Bayesian approach is especially well suited for a broad range of legal reasoning,” write mathematician Norman Fenton and colleagues in a recent paper in the Annual Review of Statistics and Its Application.

But Bayes has for the most part been neglected by the legal system. “Outside of paternity cases its impact on legal practice has been minimal,” say Fenton, Martin Neil and Daniel Berger, all of the School of Electronic Engineering and Computer Science at Queen Mary University of London.

That’s unfortunate, they contend, because non-Bayesian statistical methods have severe shortcomings when applied in legal contexts. Most famously, the standard approach is typically misinterpreted in a way known as the “prosecutor’s fallacy.”

In formal logical terms, the prosecutor’s fallacy is known as “the error of the transposed conditional,” as British pharmacologist David Colquhoun explains in a recent blog post. Consider a murder on a hypothetical island, populated by 1,000 people. Police find a DNA fragment at the crime scene, a fragment that would be found in only 0.4 percent of the population. For no particular reason, the police arrest Jack and give him a DNA test. Jack’s DNA matches the crime scene fragment, so he is charged and sent to trial. The prosecutor proclaims that since only 0.4 percent of innocent people have this DNA fragment, it is 99.6 percent certain that Jack is the killer — evidence beyond reasonable doubt.

But that reasoning is fatally (for Jack) flawed. Unless there was some good reason to suspect Jack in the first place, he is just one of 1,000 possible suspects. Among those 1,000, four people (0.4 percent) should have the same DNA fragment found at the crime scene. Jack is therefore just one of four possibilities to be the murderer — so the probability that he’s the killer is merely 25 percent, not 99.6 percent.
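
That counting argument is easy to check with a few lines of arithmetic. Below is a minimal Python sketch using the hypothetical island figures from above; the variable names are ours, invented for illustration.

    # Prosecutor's fallacy on the hypothetical island.
    population = 1000    # possible suspects
    match_rate = 0.004   # fraction of people carrying the DNA fragment

    # Expected number of islanders whose DNA matches the crime scene fragment.
    expected_matches = population * match_rate   # 4.0

    # Absent any other evidence, Jack is just one of those matches, so the
    # probability he is the killer is 1 in 4, not 1 minus the match rate.
    p_guilty = 1 / expected_matches
    print(f"P(guilty | DNA match) = {p_guilty:.2f}")   # 0.25, not 0.996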

Bayesian reasoning averts this potential miscarriage of justice by including the “prior probability” of guilt when calculating the probability of guilt after the evidence is in.

Suppose, for instance, that the crime in question is not murder, but theft of cupcakes from a bakery employing 100 people. Security cameras reveal 10 employees sneaking off with the cupcakes but without a good view of their identities. So the prior probability of any given employee’s guilt is 10 percent. Police sent to investigate choose an employee at random and conduct a frosting residue test known to be accurate 90 percent of the time. If the employee tests positive, the police might conclude there is therefore a 90 percent probability of guilt. But that’s another example of the prosecutor’s fallacy — it neglects the prior probability. Well-trained Bayesian police would use the formula known as Bayes’ theorem to calculate that given a 10 percent prior probability, 90 percent reliable evidence yields an actual probability of guilt of only 50 percent.

You don’t even need to know Bayes’ formula to reason out that result. If the test is 90 percent accurate, it will erroneously identify nine of the 90 innocent employees as guilty and correctly identify only nine of the 10 truly guilty employees. If the police tested all 100 people, then, 18 would appear guilty, but nine of those 18 (half of them) would actually be innocent. So a positive frosting test means only a 50 percent chance of guilt. Bayesian math would in this case (and in many real-life cases) prevent a rush to injustice.
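
For readers who do want the formula, here is the same cupcake arithmetic written out as Bayes’ theorem in a short Python sketch; all the numbers come from the scenario above.

    # Bayes' theorem for the cupcake theft: P(guilty | positive frosting test).
    prior = 0.10           # 10 of the bakery's 100 employees were seen stealing
    sensitivity = 0.90     # P(positive | guilty), for a 90%-accurate test
    false_positive = 0.10  # P(positive | innocent)

    # Total probability of a positive test, over guilty and innocent employees.
    p_positive = sensitivity * prior + false_positive * (1 - prior)

    posterior = sensitivity * prior / p_positive
    print(f"P(guilty | positive test) = {posterior:.2f}")   # 0.50, not 0.90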

“Unfortunately, people without statistical training — and this includes most highly respected legal professionals — find Bayes’ theorem both difficult to understand and counterintuitive,” Fenton and colleagues lament.

One major problem is that real criminal cases are rarely as simple as the cupcake example. “Practical legal arguments normally involve multiple hypotheses and pieces of evidence with complex causal dependencies,” Fenton and colleagues note. Adapting Bayes’ formula to complex situations is not always straightforward. Combining testimony and various other sorts of evidence requires mapping out a network of interrelated probabilities; the math quickly can become much too complicated for pencil and paper — and, until relatively recently, even for computers.
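
To give a flavor of what such a network involves, here is a deliberately tiny Python sketch: a single guilt hypothesis with two conditionally independent pieces of evidence, combined by brute-force enumeration. All of the probabilities are invented for illustration; real forensic networks involve many more variables and dependencies, which is why efficient algorithms matter.

    # Toy Bayesian network: guilt -> (DNA match, witness identification).
    # Every number here is invented for illustration only.
    p_guilt = 0.10                          # prior probability of guilt
    p_dna = {True: 0.95, False: 0.004}      # P(DNA match | guilty?)
    p_witness = {True: 0.70, False: 0.20}   # P(witness ID | guilty?)

    def posterior(dna_match, witness_id):
        """P(guilty | evidence), by enumerating the two guilt states."""
        weight = {}
        for guilty in (True, False):
            w = p_guilt if guilty else 1 - p_guilt
            w *= p_dna[guilty] if dna_match else 1 - p_dna[guilty]
            w *= p_witness[guilty] if witness_id else 1 - p_witness[guilty]
            weight[guilty] = w
        return weight[True] / (weight[True] + weight[False])

    print(f"P(guilty | DNA match and witness ID) = {posterior(True, True):.3f}")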

“Until the late 1980s there were no known efficient computer algorithms for doing the calculations,” Fenton and colleagues point out.

But nowadays, better computers — and more crucially, better algorithms — are available to compute the probabilities in just the sorts of complicated Bayesian networks that legal cases present. So Bayesian math now provides the ideal method for weighing competing evidence in order to reach a sound legal judgment. Yet the legal system seems unimpressed.

“Although Bayes is the perfect formalism for this type of reasoning, it is difficult to find any well-reported examples of the successful use of Bayes in combining diverse evidence in a real case,” Fenton and coauthors note. “There is a persistent attitude among some members of the legal profession that probability theory has no role in the courtroom.”

In one case in England, in fact, an appeals court denounced the use of Bayesian calculations, asserting that members of the jury should apply “their individual common sense and knowledge of the world” to the evidence presented.

Apart from the obvious idiocy of using common sense to resolve complex issues, the court’s call to apply “knowledge of the world” to the evidence is exactly what Bayesian math does. Bayesian reasoning provides guidance for applying prior knowledge properly in assessing new knowledge (or evidence) to reach a sound conclusion. Which is what the judicial system is supposed to do.

Bayesian statistics offers a technical tool for avoiding fallacious reasoning. Lawyers should learn to use it. So should scientists. And then maybe someday justice will be done, and science and the law can work more seamlessly together. But as Fenton and colleagues point out, there remain “massive cultural barriers between the fields of science and law” that “will only be broken down by achieving a critical mass of relevant experts and stakeholders, united in their objectives.”

Sounds from gunshots may help solve crimes

The surveillance video shows a peaceful city streetscape: People walking, cars driving, birds chirping.

“Then, abruptly, there’s the sound of gunfire,” said electrical engineer Robert Maher. “A big bang followed by another bang.”

Witnesses saw two shooters facing off, a few meters apart — one aiming north, the other south. But no one knew who shot first. That’s where Maher comes in. His specialty is gunshot acoustics, and he’s helping shore up the science behind a relatively new forensics field.

In the case of the two shooters, surveillance cameras missed the action, but the sounds told a story that was loud and clear.

A distinctive echo followed the first gunshot but not the second. The first gunshot’s sound probably bounced off a big building to the north, causing the echo, Maher concluded. So the first person to shoot was the person facing north, he reported May 24 in Salt Lake City at a meeting of the Acoustical Society of America.
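
The geometry behind that inference is simple enough to sketch. If the shooter and the recorder are near each other, the echo’s extra path is roughly the round trip to the reflecting building, so the delay between the direct sound and the echo, multiplied by the speed of sound, gives twice the building’s distance. The numbers in this Python sketch are invented for illustration, not taken from Maher’s case.

    # Rough echo geometry: how far away was the reflecting building?
    speed_of_sound = 343.0   # m/s in air at about 20 degrees Celsius
    echo_delay = 0.5         # seconds between a gunshot and its echo (invented)

    # The echo's extra path is out to the building and back.
    building_distance = speed_of_sound * echo_delay / 2
    print(f"Reflecting surface roughly {building_distance:.0f} meters away")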

Maher has analyzed the booming echoes of gunshots in dozens of cases, but he’s also studying the millisecond-long sound of a bullet blasting out of the barrel — and finding differences from one type of gun to the next.

He and colleagues at Montana State University in Bozeman erected a semicircular aluminum frame studded with 12 microphones, evenly spaced and raised 3 meters off the ground. When someone standing on a raised platform in the center of the contraption shoots a gun — a 12-gauge shotgun, for example, or a .38 Special handgun — the microphones pick up the sound.

“Each of the different firearms has a distinctive signal,” he says. His team is building a database of sounds made by 20 different guns. To the ear, the gunshots seem alike, but Maher can chart out differences in the sound waves.

One day, investigators might be able to use the information to figure out what kind of guns were fired at a crime scene. Of course, Maher says, most crime scene recordings aren’t high quality — they often come from cellphones or surveillance systems. But his team will compare those recordings with ones made in his outdoor “lab” and try to figure out which aspects of crime scene audio they can analyze.
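
One standard way to score such comparisons is normalized cross-correlation between waveforms. The Python sketch below illustrates the idea; it is not Maher’s actual method, and the random arrays are placeholders for real recordings, which would come from controlled firings of known guns.

    import numpy as np

    def match_score(recording, reference):
        """Peak normalized cross-correlation between two waveforms (0 to 1)."""
        r = recording - recording.mean()
        s = reference - reference.mean()
        r = r / (np.linalg.norm(r) + 1e-12)
        s = s / (np.linalg.norm(s) + 1e-12)
        return float(np.abs(np.correlate(r, s, mode="full")).max())

    # Placeholder reference library; real entries would be lab recordings
    # of known firearms, like Maher's 12-gauge shotgun and .38 Special.
    library = {"12-gauge shotgun": np.random.randn(4410),
               ".38 Special": np.random.randn(4410)}
    crime_scene_clip = np.random.randn(4410)   # stand-in for surveillance audio

    best = max(library, key=lambda gun: match_score(crime_scene_clip, library[gun]))
    print("Closest match in library:", best)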

Maher, a music lover who plays the cello and sings in a choir, didn’t intend this career. “If I were really talented at music, that’s what I’d be doing full time,” he says. Instead, he has applied his skills in math and science to problems involving sound: studying humans’ contribution to noise in national parks, for example, and now, gunshot acoustics.

For him, it’s “a nice way to bridge the gap between the science and the sound.”

Post-stroke shifts in gut bacteria could cause additional brain injury

When mice have a stroke, their gut reaction can amp up brain damage.

A series of new experiments reveals a surprising back-and-forth between the brain and the gut in the aftermath of a stroke. In mice, this dickering includes changes to the gut microbial population that ultimately lead to even more inflammation in the brain.

There is much work to be done to determine whether the results apply to humans. But the research, published in the July 13 Journal of Neuroscience, hints that poop pills laden with healthy microbes could one day be part of post-stroke therapy.

The work also highlights a connection between gut microbes and brain function that scientists are only just beginning to understand, says Ted Dinan of the Microbiome Institute at University College Cork in Ireland. There’s growing evidence that gut microbes can influence how people experience stress or depression, for example (SN: 4/2/16, p. 23).

“It’s a fascinating study,” says Dinan, who was not involved with the work. “It raises almost as many questions as it answers, which is what good studies do.”

Following a stroke, the mouse gut becomes temporarily paralyzed, leading to a shift in the microbial community, neurologist Arthur Liesz of the Institute for Stroke and Dementia Research in Munich and colleagues found. This altered, less diverse microbial ecosystem appears to interact with immune system cells called T cells that reside in the gut. These T cells can either dampen inflammation or dial it up, leading to more damage, says Liesz. Whether the T cells further damage the brain after a stroke rather than soothe it seems to be determined by the immune system cells’ interaction with the gut microbes.

Transplanting microbe-laden fecal matter from healthy mice into mice that had had strokes curbed brain damage, the researchers found. But transplanting fecal matter from stroke-affected mice into stroke-free mice spurred a fourfold increase in immune cells that exacerbate inflammation in the brain.

Learning more about this interaction between the gut’s immune-cell and microbial populations will be key to developing therapies, says Liesz. “We basically have no clue what’s going on there.”