Internal clock helps young sunflowers follow the sun

Young sunflowers grow better when they track the sun’s daily motion from east to west across the sky. An internal clock helps control the behavior, biologist Stacey Harmer and colleagues report in the Aug. 5 Science.

Depending on the time of day, certain growth genes appear to be activated to different degrees on opposing sides of young sunflowers’ stems. The east side of their stems grows faster during the day, causing the stems to gradually bend from east to west. The west side grows faster at night, reorienting the plants to prepare them for the next morning. “At dawn, they’re already facing east again,” says Harmer, of the University of California, Davis. The behavior helped sunflowers grow bigger, her team found.
Young plants continued to bend from east to west each day even when their light source didn’t move. So Harmer and her colleagues concluded that the behavior was influenced by an internal clock like the one that controls human sleep/wake cycles, rather than being solely a response to available light.

That’s probably advantageous, Harmer says, “because you have a system that’s set up to run even if the environment changes transiently.” A cloudy morning doesn’t stop the plants from tracking, for instance.

Contrary to popular belief, mature sunflowers don’t track the sun — they perpetually face east. That’s probably because their stems have stopped growing. But Harmer and her colleagues found an advantage for the fixed orientation, too: Eastern-facing heads get warmer in the sun than westward-facing ones and attract more pollinators.

Sleep deprivation hits some brain areas hard

Pulling consecutive all-nighters makes some brain areas groggier than others. Regions involved with problem solving and concentration become especially sluggish when sleep-deprived, a new study using brain scans reveals. Other areas keep ticking along, appearing to be less affected by a mounting sleep debt.

The results might lead to a better understanding of the rhythmic nature of symptoms in certain psychiatric or neurodegenerative disorders, says study coauthor Derk-Jan Dijk. People with dementia, for instance, can be afflicted with “sundowning,” which worsens their symptoms at the end of the day. More broadly, the findings, published August 12 in Science, document the brain’s response to too little shut-eye.
“We’ve shown what shift workers already know,” says Dijk, of the University of Surrey in England. “Being awake at 6 a.m. after a night of no sleep, it isn’t easy. But what wasn’t known was the remarkably different response of these brain areas.”

The research reveals the differing effects of the two major factors that influence when you conk out: the body’s roughly 24-hour circadian clock, which helps keep you awake in the daytime and put you to sleep when it’s dark, and the body’s drive to sleep, which steadily increases the longer you’re awake.
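For readers who want to see how those two forces trade off, here is a minimal sketch in Python of Borbély’s classic two-process model, which formalizes exactly this pair of influences. The functional forms and parameters below are illustrative textbook choices, not values from the new study.

```python
import math

def alertness(hours_awake, clock_time):
    """Toy two-process model of alertness (illustrative parameters only).

    Process S: homeostatic sleep pressure, rising toward saturation
    the longer you stay awake.
    Process C: circadian drive, a roughly 24-hour sinusoid that peaks
    in the early evening and bottoms out in the pre-dawn hours.
    """
    S = 1.0 - math.exp(-hours_awake / 18.0)                   # sleep pressure in [0, 1)
    C = 0.5 * math.cos(2 * math.pi * (clock_time - 18) / 24)  # peaks near 6 p.m.
    return C - S  # higher = more alert

# 42 sleepless hours starting at 8 a.m., sampled every 6 hours
for h in range(0, 43, 6):
    t = (8 + h) % 24
    print(f"awake {h:2d} h (clock {t:02d}:00): alertness {alertness(h, t):+.2f}")
```

Run it and the toy model reproduces the study’s qualitative pattern: the deepest trough comes in the pre-dawn hours of the first sleepless night, and alertness recovers somewhat the next afternoon as the circadian upswing partly offsets the still-growing sleep debt.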

Dijk and collaborators at the University of Liege in Belgium assessed the cognitive function of 33 young adults who went without sleep for 42 hours. Over the course of this sleepless period, the participants performed some simple tasks testing reaction time and memory. The sleepy subjects also underwent 12 brain scans during their ordeal and another scan after 12 hours of recovery sleep. Throughout the study, the researchers also measured participants’ levels of the sleep hormone melatonin, which served as a way to track the hands on their master circadian clocks.

Activity in some brain areas, such as the thalamus, a central hub that connects many other structures, waxed and waned in sync with the circadian clock. But in other areas, especially those in the brain’s outer layer, the effects of this master clock were overridden by the body’s drive to sleep. Brain activity diminished in these regions as sleep debt mounted, the scans showed.

Sleep deprivation also meddled with the participants’ performance on simple tasks, effects influenced both by the mounting sleep debt and the cycles of the master clock. Performance suffered in the night, but improved somewhat during the second day, even after no sleep.
While the brain’s circadian clock signal is known to originate in a cluster of nerve cells known as the suprachiasmatic nucleus, it isn’t clear where the drive to sleep comes from, says Charles Czeisler, a sleep expert at Harvard Medical School. The need to sleep might grow as toxic metabolites build up after a day’s worth of brain activity, or be triggered when certain regions run out of fuel.

Sleep drive’s origin is just one of many questions raised by the research, says Czeisler, who says the study “opens up a new era in our understanding of sleep-wake neurobiology.” The approach of tracking activity with brain scans and melatonin measurements might reveal, for example, how a lack of sleep during the teenage years influences brain development.

Such an approach also might lead to the development of a test that reflects the strength of the body’s sleep drive, Czeisler says. That measurement might help clinicians spot chronic sleep deprivation, a health threat that can masquerade as attention-deficit/hyperactivity disorder in children.

Flaming fuel on water creates ‘blue whirl’ that burns clean

Blue whirl \BLOO werl\ n.
A swirling flame that appears in fuel floating on the surface of water and glows blue.

An unfortunate mix of electricity and bourbon has led to a new discovery. After lightning hit a Jim Beam warehouse in 2003, a nearby lake was set ablaze when the distilled spirit spilled into the water and ignited. Spiraling tornadoes of fire leapt from the surface. In a laboratory experiment inspired by the conflagration, a team of researchers produced a new, efficiently burning fire tornado, which they named a blue whirl.
To re-create the bourbon-fire conditions, the researchers, led by Elaine Oran of the University of Maryland in College Park, ignited liquid fuel floating on a bath of water. They surrounded the blaze with a cylindrical structure that funneled air into the flame to create a vortex with a height of about 60 centimeters. Eventually, the chaotic fire whirl calmed into a blue, cone-shaped flame just a few centimeters tall, the scientists report online August 4 in Proceedings of the National Academy of Sciences.

“Firenadoes” are known to appear in wildfires, when swirling winds and flames combine to form a hellacious, rotating inferno. They burn more efficiently than typical fires, as the whipping winds mix in extra oxygen, which feeds the fire. But the blue whirl is even more efficient; its azure glow indicates complete combustion, which releases little soot, or uncombusted carbon, into the air.

The soot-free blue whirls could be a way of burning off oil spills on water without adding much pollution to the air, the researchers say, if they can find a way to control them in the wild.

What Donkey Kong can tell us about how to study the brain

Brain scientists Eric Jonas and Konrad Kording had grown skeptical. They weren’t convinced that the sophisticated, big data experiments of neuroscience were actually accomplishing anything. So they devised a devilish experiment.

Instead of studying the brain of a person, or a mouse, or even a lowly worm, the two used advanced neuroscience methods to scrutinize the inner workings of another information processor — a computer chip. The unorthodox experimental subject, the MOS 6502, is the same chip that dazzled early tech junkies and kids alike in the 1980s by powering Donkey Kong, Space Invaders and Pitfall, as well as the Apple I and II computers.
Of course, these experiments were rigged. The scientists already knew everything about how the 6502 works.

“The beauty of the microprocessor is that unlike anything in biology, we understand it on every level,” says Jonas, of the University of California, Berkeley.
Using a simulation of the MOS 6502, Jonas and Kording, of Northwestern University in Chicago, studied the behavior of electricity-moving transistors, along with aspects of the chip’s connections and its output, to reveal how it handles information. Since they already knew what the outcomes should be, they were actually testing the methods.

By the end of their experiments, Jonas and Kording had discovered almost nothing.

Their results — or lack thereof — hit a nerve among neuroscientists. When Jonas presented the work last year at a Kavli Foundation workshop held at MIT, the response from the crowd was split. “A bunch of people said, ‘That’s awesome. I had that idea 10 years ago and never got around to doing it,’ ” Jonas says. “And a bunch of people were like, ‘That’s bullshit. You’re taking the analogy way too far. You’re attacking a straw man.’ ”
On May 26, Jonas and Kording shared their results with a wider audience by posting a manuscript on the website bioRxiv.org. Bottom line of their report: Some of the best tools used by neuroscientists turned up plenty of data but failed to reveal anything meaningful about a relatively simple machine. The implications are profound — and discouraging. Current neuroscience methods might not be up for the job when it comes to truly understanding the brain.

The paper “does a great job of articulating something that most thoughtful people believe but haven’t said out loud,” says neuroscientist Anthony Zador of Cold Spring Harbor Laboratory in New York. “Their point is that it’s not clear that the current methods would ever allow us to understand how the brain computes in [a] fundamental way,” he says. “And I don’t necessarily disagree.”

Differences and similarities
Critics, however, contend that the analogy of the brain as a computer is flawed. Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, Calif., for instance, calls the comparison “provocative, but misleading.” The brain and the microprocessor are distinct in a huge number of ways. The brain can behave differently in different situations, a variability that adds an element of randomness to its machinations; computers aim to serve up the same response to the same situation every time. And compared with a microprocessor, the brain has an incredible amount of redundancy, with multiple circuits able to step in and compensate when others malfunction.

In microprocessors, the software is distinct from the hardware — any number of programs can run on the same machine. “This is not the case in the brain, where the software is the hardware,” Sejnowski says. And this hardware changes from minute to minute. Unlike the microprocessor’s connections, brain circuits morph every time you learn something new. Synapses grow and connect nerve cells, storing new knowledge.

Brains and microprocessors have very different origins, Sejnowski points out. The human brain has been sculpted over millions of years of evolution to be incredibly specialized, able to spot an angry face at a glance, for instance, or remember a childhood song for years. The 6502, which debuted in 1975, was designed by a small team of humans, who engineered the chip to their exact specifications. The methods for understanding one shouldn’t be expected to work for the other, Sejnowski says.

Yet there are some undeniable similarities. Brains and microprocessors are both built from many small units: 86 billion neurons and 3,510 transistors, respectively. These units can be organized into specialized modules that allow both “organs” to flexibly move information around and hold memories. Those shared traits make the 6502 a legitimate and informative model organism, Jonas and Kording argue.
In one experiment, they tested what would happen if they tried to break the 6502 bit by bit. Using a simulation to run their experiments, the researchers systematically knocked out every single transistor one at a time. They wanted to know which transistors were mission-critical to three important “behaviors”: Donkey Kong, Space Invaders and Pitfall. The effort was akin to what neuroscientists call “lesion studies,” which probe how the brain behaves when a certain area is damaged.
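In outline, the knockout procedure is a simple exhaustive loop. The sketch below is a runnable toy: the `chip_boots` function fakes the simulator by giving each game a fixed random set of critical transistors, whereas the real experiment stepped a full transistor-level simulation of the 6502.

```python
import random

BEHAVIORS = ["donkey_kong", "space_invaders", "pitfall"]
N_TRANSISTORS = 3510  # transistor count of the MOS 6502

# Toy stand-in for the real chip simulator: pretend each game depends on
# a fixed random subset of transistors.
random.seed(6502)
_CRITICAL = {g: set(random.sample(range(N_TRANSISTORS), 1200))
             for g in BEHAVIORS}

def chip_boots(game, disabled):
    """True if the simulated chip still runs `game` with one transistor off."""
    return disabled not in _CRITICAL[game]

def lesion_screen():
    """Knock out every transistor, one at a time, and record the casualties."""
    essential = {g: set() for g in BEHAVIORS}
    for t in range(N_TRANSISTORS):
        for g in BEHAVIORS:
            if not chip_boots(g, disabled=t):
                essential[g].add(t)
    return essential

results = lesion_screen()
harmless = N_TRANSISTORS - len(set().union(*results.values()))
print(f"{harmless} transistors can be lost without breaking any game")
```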

The experiment netted 1,565 transistors that could be eliminated without any consequences to the games. But other transistors proved essential. Losing any one of 1,560 transistors made it impossible for the microprocessor to load any of the games.

Big gap
Those results are hard to parse into something meaningful. This type of experiment, like analogous ones in human and animal brains, is informative in some ways. But it doesn’t constitute understanding, Jonas argues. The gulf between knowing that a particular broken transistor can stymie a game and actually understanding how that transistor helps compute is “incredibly vast,” he says.

The transistor “lesion” experiment “gets at the core problem that we are struggling with in neuroscience,” Zador says. “Although we can attribute different brain functions to different brain areas, we don’t actually understand how the brain computes.”
Other experiments reported in the study turned up red herrings — results that looked similar to potentially useful brain data, but were ultimately meaningless. Jonas and Kording looked at the average activity of groups of nearby transistors to assess patterns about how the microprocessor works. Neuroscientists do something similar when they analyze electrical patterns of groups of neurons. In this task, the microprocessor delivered some good-looking data. Oscillations of activity rippled over the microprocessor in patterns that seemed similar to those of the brain. Unfortunately, those signals are irrelevant to how the computer chip actually operates.
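The flavor of that analysis is easy to reproduce. The toy below averages synthetic “activity” from a patch of neighboring units and pulls a clean oscillation out of the noise, a tidy-looking rhythm that, as in the microprocessor, says nothing about how the underlying computation works. All signals here are made up; the study analyzed simulated transistor switching.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_samples, fs = 100, 2048, 1000.0  # units, time points, sample rate (Hz)
t = np.arange(n_samples) / fs

# Each unit: private noise plus a weak shared 8 Hz component (the "oscillation")
shared = np.sin(2 * np.pi * 8 * t)
activity = rng.normal(size=(n_units, n_samples)) + 0.2 * shared

regional_mean = activity.mean(axis=0)            # average over neighboring units
power = np.abs(np.fft.rfft(regional_mean)) ** 2  # power spectrum of the average
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)

print(f"dominant rhythm: {freqs[power[1:].argmax() + 1]:.1f} Hz")
```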

Data from other experiments revealed a few finds, including that the microprocessor contains a clock signal and that it switches between reading and writing memory. Yet these are not key insights into how the chip actually handles information, Jonas and Kording write in their paper.

It’s not that analogous experiments on the brain are useless, Jonas says. But he hopes that these examples reveal how big of a challenge it will be to move from experimental results to a true understanding. “We really need to be honest about what we’re going to pull out here.”
Jonas says the results should caution against collecting big datasets in the absence of theories that can help guide experiments and that can be verified or refuted. For the microprocessor, the researchers had a lot of data, yet still couldn’t separate the informative wheat from the distracting chaff. The results “suggest that we need to try and push a little bit more toward testable theories,” he says.

That’s not to say that big datasets are useless, he is quick to point out. Zador agrees. Some giant collections of neural information will probably turn out to be wastes of time. But “the right dataset will be useful,” he says. And the right bit of data might hold the key that propels neuroscientists forward.

Despite the pessimistic overtones in the paper, Christof Koch of the Allen Institute for Brain Science in Seattle is a fan. “You got to love it,” Koch says. At its heart, the experiment on the 6502 “sends a good message of humility,” he adds. “It will take a lot of hard work by a lot of very clever people for many years to understand the brain.” But he says that tenacity, especially in the face of such a formidable challenge, will eventually lead to clarity.

Zador recently opened a fortune cookie that read, “If the brain were so simple that we could understand it, we would be so simple that we couldn’t.” That quote, from IBM researcher Emerson Pugh, throws down the challenge, Zador says. “The alternative is that we will never understand it,” he says. “I just can’t believe that.”

Dog brains divide language tasks much like humans do

Editor’s note: When reporting results from the functional MRI scans of dogs’ brains, left and right were accidentally reversed in all images, the researchers report in a correction posted April 7 in Science. While dogs and most humans use different hemispheres of the brain to process meaning and intonation — instead of the same hemispheres, as was suggested — lead author Attila Andics says the more important finding still stands: Dogs’ brains process different aspects of human speech in different hemispheres.
Dogs process speech much like people do, a new study finds. Meaningful words like “good boy” activate the left side of a dog’s brain regardless of tone of voice, while a region on the right side of the brain responds to intonation, scientists report in the Sept. 2 Science.

Similarly, humans process the meanings of words in the left hemisphere of the brain, and interpret intonation in the right hemisphere. That lets people sort out words that convey meaning from random sounds that don’t. But it has been unclear whether language abilities were a prerequisite for that division of brain labor, says neuroscientist Attila Andics of Eötvös Loránd University in Budapest.

Dogs make ideal test subjects for understanding speech processing because of their close connection to humans. “Humans use words towards dogs in their everyday, normal communication, and dogs pay attention to this speech in a way that cats and hamsters don’t,” says Andics. “When we want to understand how an animal processes speech, it’s important that speech be relevant.”
Andics and his colleagues trained dogs to lie still for functional MRI scans, which reveal when and where the brain is responding to certain cues. Then the scientists played the dogs recordings of a trainer saying either meaningful praise words like “good boy,” or neutral words like “however,” either in an enthusiastic tone of voice or a neutral one.
The dogs showed increased activity in the left sides of their brains in response to the meaningful words, but not the neutral ones. An area on the right side of the brain reacted to the intonation of those words, separating out enthusiasm from indifference.

When the dogs heard praising words in an enthusiastic tone of voice, neural circuits associated with reward became more active. The dogs had the same neurological response to an excited “Good dog!” as they might to being petted or receiving a tasty treat. Praise words or enthusiastic intonation alone didn’t have the same effect.

Humans stand out from other animals in their ability to use language — that is, to manipulate sequences of sounds to convey different meanings. But the new findings suggest that the ability to hear these arbitrary sequences of sound and link them to meaning isn’t a uniquely human ability.

“I love these results, as they point to how well domestication has shaped dogs to use and track the very same cues that we use to make sense of what other people are saying,” says Laurie Santos, a cognitive psychologist at Yale University.

While domestication made dogs more attentive to human speech, humans have been close companions with dogs for only 30,000 years. That’s too short a time for a trait like lateralized speech processing to evolve, Andics thinks. He suspects that some older underlying neural mechanism for processing meaningful sounds is present in other animals, too.

It’s just hard to test in other species, he says — in part because cats don’t take as kindly to being put inside MRI scanners and asked to hold still.

Supersymmetry’s absence at LHC puzzles physicists

A beautiful but unproved theory of particle physics is withering in the harsh light of data.

For decades, many particle physicists have devoted themselves to the beloved theory, known as supersymmetry. But it’s beginning to seem that the zoo of new particles that the theory predicts — the heavier cousins of known particles — may live only in physicists’ imaginations. Or if such particles, known as superpartners, do exist, they’re not what physicists expected.

New data from the world’s most powerful particle accelerator — the Large Hadron Collider, now operating at higher energies than ever before — show no traces of superpartners. And so the theory’s most fervent supporters have begun to pay for their overconfidence — in the form of expensive bottles of brandy. On August 22, a group of physicists who wagered that the LHC would quickly confirm the theory settled a 16-year-old bet. In a session at a physics meeting in Copenhagen, theoretical physicist Nima Arkani-Hamed ponied up, presenting a bottle of cognac to physicists who bet that the new particles would be slow to materialize, or might not exist at all.
Whether their pet theories are right or wrong, many theoretical physicists are simply excited that the new LHC data can finally anchor their ideas to reality. “Of course, in the end, nature is going to tell us what’s true,” says theoretical physicist Yonit Hochberg of Cornell University, who spoke on a panel at the meeting.

Supersymmetry is not ruled out by the new data, but if the new particles exist, they must be heavier than scientists expected. “Right now, nature is telling us that if supersymmetry is the right theory, then it doesn’t look exactly like we thought it would,” Hochberg says.
Since June 2015, the LHC, at the European particle physics lab CERN near Geneva, has been smashing protons together at higher energies than ever before: 13 trillion electron volts. Physicists had been eager to see if new particles would pop out at these energies. But the results have agreed overwhelmingly with the standard model, the established theory that describes the known particles and their interactions.

It’s a triumph for the standard model, but a letdown for physicists who hope to expose cracks in that theory. “There is a low-level panic,” says theoretical physicist Matthew Buckley of Rutgers University in Piscataway, N.J. “We had a long time without data, and during that time many theorists thought up very compelling ideas. And those ideas have turned out to be wrong.”

Physicists know that the standard model must break down somewhere. It doesn’t explain why the universe contains more matter than antimatter, and it fails to pinpoint the origins of dark matter and dark energy, which make up 95 percent of the matter and energy in the cosmos.

Even the crowning achievement of the LHC, the discovery of the Higgs boson in 2012 (SN: 7/28/2012, p. 5), hints at the sickness within the standard model. The mass of the Higgs boson, at 125 billion electron volts, is vastly smaller than theory naïvely predicts. That mass, physicists worry, is not “natural” — the factors that contribute to the Higgs mass must be finely tuned to cancel each other out and keep the mass small (SN Online: 10/22/13).
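In rough numbers, the worry is easy to state: Quantum corrections drag the squared Higgs mass toward the square of the highest energy scale in the theory, so the observed value emerges as a difference, m_H^2 = m_bare^2 + c·Λ^2. If Λ sits near the Planck scale of about 10^19 billion electron volts, the two enormous terms on the right must cancel to roughly one part in 10^34 to leave behind (125 billion electron volts)^2. This back-of-the-envelope version is schematic; published naturalness estimates are more careful.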

Among the many theories that attempt to fix the standard model’s woes, supersymmetry is the most celebrated. “Supersymmetry was this dominant paradigm for 30 years because it was so beautiful, and it was so perfect,” says theoretical physicist Nathaniel Craig of the University of California, Santa Barbara. But supersymmetry is becoming less appealing as the LHC collects more collisions with no signs of superpartners.

Supersymmetry solves three major problems in physics: It explains why the Higgs is so light; it provides a particle that serves as dark matter; and it implies that the three forces of the standard model (electromagnetism and the weak and strong nuclear forces) unite into one at high energies.

If a simple version of supersymmetry is correct, the LHC probably should have detected superpartners already. As the LHC rules out such particles at ever-higher masses, retaining the appealing properties of supersymmetry requires increasingly convoluted theoretical contortions, stripping the idea of some of the elegance that first persuaded scientists to embrace it.
“If supersymmetry exists, it is not my parents’ supersymmetry,” says Buckley. “That kind of means it can’t be the most compelling version.”

Still, many physicists are adopting an attitude of “keep calm and carry on.” They aren’t giving up hope that evidence for the theory — or other new particle physics phenomena — will show up soon. “I am not yet particularly worried,” says theoretical physicist Carlos Wagner of the University of Chicago. “I think it’s too early. We just started this process.” The LHC has delivered only 1 percent of the data it will collect over its lifetime. Hopes of quickly finding new phenomena were too optimistic, Wagner says.
Experimental physicists, too, maintain that there is plenty of room for new discoveries. But it could take years to uncover them. “I would be very, very happy if we were able to find some new phenomena, some new state of matter, within the first two or three years” of running the LHC at its boosted energy, Tiziano Camporesi of the LHC’s CMS experiment said during a news conference at the International Conference on High Energy Physics, held in Chicago in August. “That would mean that nature has been kind to us.”

But other LHC scientists admit they had expected new discoveries by now. “The fact that we haven’t seen something, I think, is in general quite surprising to the community,” said Guy Wilkinson, spokesperson for the LHCb experiment. “This isn’t a failure — this is perhaps telling us something.” The lack of new particles forces theoretical physicists to consider new explanations for the mass of the Higgs. To be consistent with data, those explanations can’t create new particles the LHC should already have seen.

Some physicists — particularly those of the younger generations — are ready to move on to new ideas. “I’m personally not attached to supersymmetry,” says David Kaplan of Johns Hopkins University. Kaplan and colleagues recently proposed the “relaxion” hypothesis, which allows the Higgs mass to change — or relax — as the universe evolves. Under this theory, the Higgs mass gets stuck at a small value, never reaching the high mass otherwise predicted.

Another idea, which Craig favors, is a family of theories by the name of “neutral naturalness.” Like supersymmetry, this idea proposes symmetries of nature that solve the problem of the Higgs mass, but it doesn’t predict new particles that should have been seen at the LHC. “The theories, they’re not as beautiful as just simple supersymmetry, but they’re motivated by data,” Craig says.

One particularly controversial idea is the multiverse hypothesis. There may be innumerable other universes, with different Higgs masses in each. Perhaps humans observe such a light Higgs because a small mass is necessary for heavy elements like carbon to be produced in stars. People might live in a universe with a small Higgs because it’s the only type of universe life can exist in.

It’s possible that physicists’ fears will be realized — the LHC could deliver the Higgs boson and nothing else. Such a result would leave theoretical physicists with few clues to work with. Still, says Hochberg, “if that’s the case, we’ll still be learning something very deep about nature.”

Tiny structures give a peacock spider its radiant rump

Male peacock spiders know how to work their angles and find their light.

The arachnids, native to Australia, raise their derriere — or, more accurately, a flap on their hind end — skyward and shake it to attract females. Hairlike scales cover their bodies and produce the vibrant colorations that make peacock spiders so striking.

Doekele Stavenga of the University of Groningen in the Netherlands and his colleagues collected Maratus splendens peacock spiders from a park outside Sydney and zoomed in on those scales.
Using microscopy, spectrometry and other techniques, the team found that the spiders’ red, yellow and cream scales rely on two pigments, 3-OH-kynurenine and xanthommatin, to reflect their colors. Even white scales contain low levels of pigment. Spines lining these scales scatter light randomly, giving them slightly different hues from different angles.
Blue scales are an entirely different story. They’re transparent and pigment-free. Instead, the scales’ architecture reflects iridescent blue and purple hues. Each peapodlike scale is lined with tiny ridges on the outside and a layer of threadlike fibers on the inside. Fiber spacing may determine whether scales appear more blue or more purple.
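The link between spacing and hue follows the generic condition for constructive interference in a periodic structure (a textbook relation, not a model fitted in the paper): reflected light peaks at wavelengths λ satisfying m·λ = 2·n·d·cos θ, where d is the fiber spacing, n the effective refractive index, θ the viewing angle and m a whole number. Shrink d and the reflected peak slides toward shorter, more violet wavelengths.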

Whether peacock spiders’ eyes can actually see these posterior patterns is an open question, Stavenga and his colleagues write in the August Journal of the Royal Society Interface. Given that other jumping spiders see at least three color ranges, it seems unlikely that such vivid come-hither choreography plays out in black and white.

Taming photons, electrons paves way for quantum internet

WASHINGTON — A quantum internet could one day allow ultrasecure communications worldwide — but first, scientists must learn to tame unruly quantum particles such as electrons and photons. Several new developments in quantum technology, discussed at a recent meeting, have brought scientists closer to such mastery. Scientists are now teleporting particles’ properties across cities, satellite experiments are gearing up for quantum communications in space, and other scientists are developing ways to hold quantum information in memory.

In one feat, scientists achieved quantum teleportation across long distances through metropolitan areas. Quantum teleportation transfers quantum properties of one particle to another instantaneously. (It doesn’t allow for faster-than-light communication, though, because additional information has to be sent through standard channels.)
Using a quantum network in Calgary, scientists teleported quantum states of photons over 6.2 kilometers. “It’s one step towards … achieving a global quantum network,” says Raju Valivarthi of the University of Calgary in Canada, who presented the result at the International Conference on Quantum Cryptography, QCrypt, on September 12.
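The protocol itself fits in a few dozen lines. Below is a self-contained numpy sketch of textbook teleportation on a simulated three-qubit register; it is a state-vector simulation for illustration, not the photonic hardware the Calgary team used.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-qubit gates
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply(gate, q, state, n=3):
    """Apply a one-qubit gate to qubit q of an n-qubit state vector."""
    full = np.array([[1]], dtype=complex)
    for k in range(n):
        full = np.kron(full, gate if k == q else I)
    return full @ state

def cnot(control, target, state, n=3):
    """Flip `target` wherever `control` is 1 (permutes basis amplitudes)."""
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            new[i] = state[i ^ (1 << (n - 1 - target))]
    return new

def measure(q, state, n=3):
    """Projectively measure qubit q; return the outcome and collapsed state."""
    bits = (np.arange(2 ** n) >> (n - 1 - q)) & 1
    p1 = np.sum(np.abs(state[bits == 1]) ** 2)
    outcome = int(rng.random() < p1)
    collapsed = np.where(bits == outcome, state, 0)
    return outcome, collapsed / np.linalg.norm(collapsed)

# Message: a random qubit state alpha|0> + beta|1> on qubit 0
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
alpha, beta = amps / np.linalg.norm(amps)
state = np.kron([alpha, beta], np.kron([1, 0], [1, 0])).astype(complex)

state = apply(H, 1, state)      # entangle qubits 1 and 2 into a Bell pair
state = cnot(1, 2, state)       # shared by sender and receiver

state = cnot(0, 1, state)       # Bell-basis measurement of qubits 0 and 1
state = apply(H, 0, state)
m0, state = measure(0, state)
m1, state = measure(1, state)

if m1:                          # the two classical bits (m0, m1) travel over an
    state = apply(X, 2, state)  # ordinary channel; this fix-up step is why
if m0:                          # teleportation can't signal faster than light
    state = apply(Z, 2, state)

offset = (m0 << 2) | (m1 << 1)  # read off qubit 2's amplitudes
teleported = state[[offset, offset + 1]]
print("fidelity:", abs(np.vdot([alpha, beta], teleported)))  # ~1.0
```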

A second group of scientists recently teleported photons using a quantum network spread through the city of Hefei, China. The two teams published their results online September 19 in Nature Photonics.

The weird properties of quantum particles make quantum communication possible: They can be in two places at once, or can have their properties linked through quantum entanglement. Tweak one particle in an entangled pair, and you can immediately seem to affect the other — what Albert Einstein called “spooky action at a distance.” Using quantum entanglement, people can securely exchange quantum keys — codes that can be used to encrypt top-secret messages (SN: 11/20/10, p. 22). Any eavesdropper spying on the quantum key exchange would be detected, and the keys could be thrown out.

In practice, quantum particles can travel only so far. As photons are sent back and forth through optical fibers, many are lost along the way. But certain techniques can be used to expand their range. Quantum teleportation systems could be used to create quantum repeaters, which could be chained together to extend networks farther. But in order to function, quantum repeaters would also require a quantum memory to store entanglement until all the links in the chain are ready, says Ronald Hanson of Delft University of Technology in the Netherlands. Using a system based on quantum entanglement of electrons in diamond chips, Hanson’s team has developed a quantum memory by transferring the entanglement of the electrons to atomic nuclei for safekeeping, he reported at QCrypt on September 15.
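The scale of the loss problem is easy to check. Telecom fiber absorbs roughly 0.2 decibels per kilometer at standard wavelengths (a textbook figure, not one quoted at the meeting), and exponential attenuation does the rest:

```python
# Fraction of photons surviving L kilometers of telecom fiber at ~0.2 dB/km
def transmission(km, db_per_km=0.2):
    return 10 ** (-db_per_km * km / 10)

for km in (50, 100, 500, 1000):
    print(f"{km:5d} km: {transmission(km):.1e} of photons survive")
```

At 1,000 kilometers, only about one photon in 10^20 arrives, which is why repeaters or satellites are unavoidable for truly long-distance links.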

Satellites could likewise allow quantum communication from afar. In August, China launched a satellite to test quantum communication from space; other groups are also studying techniques for sending delicate quantum information to space and back again (SN Online: 6/5/16), beaming up photons through free space instead of through optical fibers. “A free-space link is essential if you want to go to real long distance,” Giuseppe Vallone of the University of Padua in Italy said in a session at QCrypt on September 14. Particles can travel farther when sent via quantum satellite — due to the emptiness of space, fewer photons are absorbed or scattered away.
Quantum networks could also benefit from processes that allow the use of scaled-down “quantum fingerprints” of data, to compare files without sending excess data, Feihu Xu of MIT reported at QCrypt on September 12. To check if two files are identical — for example, in order to find illegally pirated movies — one might compare all the bits in each file. But in fact, a subset of the bits — or a fingerprint — can do the job well. By harnessing the power of quantum mechanics, Xu and colleagues were able to compare messages using less information than classical methods require.
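The classical version of the fingerprinting trick is familiar from everyday hashing: exchange short salted digests instead of the files themselves. The sketch below is that classical analogue only; the quantum protocol Xu described pushes the exchanged information exponentially lower still.

```python
import hashlib

def fingerprint(data: bytes, salt: bytes) -> bytes:
    """A short salted digest standing in for the whole file."""
    return hashlib.sha256(salt + data).digest()[:8]  # 8 bytes, not gigabytes

def probably_same(file_a: bytes, file_b: bytes, trials: int = 3) -> bool:
    """Compare files by exchanging only tiny fingerprints.

    Any mismatch proves the files differ; agreement on every salted
    trial makes equality overwhelmingly likely.
    """
    return all(
        fingerprint(file_a, i.to_bytes(4, "big"))
        == fingerprint(file_b, i.to_bytes(4, "big"))
        for i in range(trials)
    )

print(probably_same(b"movie bytes ...", b"movie bytes ..."))   # True
print(probably_same(b"movie bytes ...", b"pirated copy ..."))  # False
```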

The quantum internet relies on the principles of quantum mechanics, which modern-day physicists generally accept — spooky action and all. In 2015, scientists finally confirmed that a key example of quantum weirdness is real, with a souped-up version of a test known as a Bell test, which closed loopholes that had weakened earlier Bell tests (SN: 9/19/15, p. 12). Loophole-free Bell tests were necessary to squelch any lingering doubts, but no one expected any surprises, says Charles Bennett of the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. “In a certain sense it’s beating a dead horse.”

But Bell tests have applications for the quantum internet as well — they are a foundation of an even more secure type of quantum communication, called device-independent quantum key distribution. Typically, secure exchanges of quantum keys require that the devices used are trustworthy, but device-independent methods do away with this requirement. This is “the most safe way of quantum communication,” says Hanson. “It does not make any assumptions about the internal workings of the device.”

Science relies on work of young research standouts

This issue marks the second year that Science News has reached out to science notables and asked: Which up-and-coming scientist is making a splash? Whose work impresses you? Tell us about early- to mid-career scientists who have the potential to change their fields and the direction of science more generally.

This year, we expanded the pool of people we asked. We reached out to Nobel laureates again and added recently elected members of the National Academy of Sciences. That allowed us to consider shining lights from a much broader array of fields, from oceanography and astronomy to cognitive psychology. Another difference this year: We spent time face-to-face with many of those selected, to get a better sense of them both as scientists and as people.
The result is the SN 10, a collection of stories not only about science, but also about making a life in science. They are stories of people succeeding because they have found what they love, be it working in the lab on new ways to probe molecular structures or staring up to the stars in search of glimmers of the early universe. In my interviews with chemist Phil Baran, I was struck by his drive to do things in new ways, whether devising chemical reactions or developing ideas about how to fund research. (If you can, he says, go private.) Laura Sanders, who met with neuroscientist Jeremy Freeman, was intrigued by his way of seeing a problem (siloed data that can’t be easily shared or analyzed) and figuring out solutions, even if those solutions were outside his area of expertise.

Of course, there are many ways to identify noteworthy scientists — and there’s plenty more fodder out there for future years. Our approach was to seek standouts, asking who deserved recognition for the skill of their methods, the insights of their thinking, the impacts of their research. Not all of the SN 10’s work has made headlines, but they all share something more important: They are participants in building the science of the future.

Notably, many of them do basic research. I think that’s because it’s the type of work that other scientists notice, even if it’s not always on the radar of the general public. But that’s where fundamental advances are often made, as scientists explore the unknown.

That edge of what’s known is where Science News likes to explore, too. Such as the bet-ending, head-scratching results from the Large Hadron Collider, which have failed to reveal the particles that the equations of supersymmetry predict. As Emily Conover reports in “Supersymmetry’s absence at LHC puzzles physicists,” that means that either the theory must be more complicated than originally thought, or not true, letting down those who looked to supersymmetry to help explain a few enduring mysteries, from the nature of dark matter to the mass of the Higgs boson.

Other mysteries may be closer to a solution, as Sanders reports in “New Alzheimer’s drug shows promise in small trial.” A new potential treatment for Alzheimer’s disease reduced amyloid-beta plaques in patients. It also showed hints of improving cognition. That’s standout news, a result built on decades of basic research by many, many bright young scientists.

Wi-Fi can help a house distinguish between its members

In smart homes of the future, computers may identify inhabitants and cater to their needs using a tool already at hand: Wi-Fi. Human bodies partially block the radio waves that carry the wireless signal between router and computer. Differences in shape, size and even gait among household members yield different patterns in the received Wi-Fi signals. A computer can analyze the signals to distinguish dad from mom, according to a report posted online August 11 at arXiv.org.

Scientists built an algorithm that was nearly 95 percent accurate at telling apart two adults walking between a wireless router and a computer. For six people, accuracy fell to about 89 percent. Scientists tested the setup on men and women of various sizes, but it should work with children as well, says study coauthor Bin Guo of Northwestern Polytechnical University in Xi’an, China.
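The classification idea can be sketched in a few lines. In the toy below, each walk past the router yields a synthetic feature vector standing in for how a person’s body and gait perturb the signal (say, amplitudes across subcarriers), and a nearest-centroid rule picks the owner. The data and the classifier are stand-ins; the study extracted features from measured channel responses and used a more sophisticated pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)
n_features, n_train, n_test = 30, 50, 20

# Each household member gets a characteristic "signal signature"
profiles = {name: rng.normal(size=n_features) for name in ("mom", "dad")}

def walks(center, n):
    """Simulate n walks: the person's signature plus measurement noise."""
    return center + 0.5 * rng.normal(size=(n, n_features))

train = {name: walks(center, n_train) for name, center in profiles.items()}
centroids = {name: x.mean(axis=0) for name, x in train.items()}

def identify(sample):
    """Nearest-centroid classifier: whose typical pattern is closest?"""
    return min(centroids, key=lambda name: np.linalg.norm(sample - centroids[name]))

hits = sum(identify(s) == name
           for name, center in profiles.items()
           for s in walks(center, n_test))
print(f"accuracy: {hits / (2 * n_test):.0%}")
```

The toy scores near 100 percent because its synthetic signatures barely overlap; real Wi-Fi measurements are far noisier, which is why the reported accuracy drops as more people share a house.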

In a home rigged with Wi-Fi and a receiver, the system could eventually identify family members and tailor heating and lighting to their preferences — maybe even cue up a favorite playlist.