What Donkey Kong can tell us about how to study the brain

Brain scientists Eric Jonas and Konrad Kording had grown skeptical. They weren’t convinced that the sophisticated, big data experiments of neuroscience were actually accomplishing anything. So they devised a devilish experiment.

Instead of studying the brain of a person, or a mouse, or even a lowly worm, the two used advanced neuroscience methods to scrutinize the inner workings of another information processor — a computer chip. The unorthodox experimental subject, the MOS 6502, is the same chip that dazzled early tech junkies and kids alike in the 1980s by powering Donkey Kong, Space Invaders and Pitfall, as well as the Apple I and II computers.
Of course, these experiments were rigged. The scientists already knew everything about how the 6502 works.

“The beauty of the microprocessor is that unlike anything in biology, we understand it on every level,” says Jonas, of the University of California, Berkeley.
Using a simulation of the MOS 6502, Jonas and Kording, of Northwestern University in Chicago, studied the behavior of electricity-moving transistors, along with aspects of the chip’s connections and its output, to reveal how it handles information. Since they already knew what the outcomes should be, they were actually testing the methods.

By the end of their experiments, Jonas and Kording had discovered almost nothing.

Their results — or lack thereof — hit a nerve among neuroscientists. When Jonas presented the work last year at a Kavli Foundation workshop held at MIT, the response from the crowd was split. “A bunch of people said, ‘That’s awesome. I had that idea 10 years ago and never got around to doing it,’ ” Jonas says. “And a bunch of people were like, ‘That’s bullshit. You’re taking the analogy way too far. You’re attacking a straw man.’ ”
On May 26, Jonas and Kording shared their results with a wider audience by posting a manuscript on the website bioRxiv.org. Bottom line of their report: Some of the best tools used by neuroscientists turned up plenty of data but failed to reveal anything meaningful about a relatively simple machine. The implications are profound — and discouraging. Current neuroscience methods might not be up for the job when it comes to truly understanding the brain.

The paper “does a great job of articulating something that most thoughtful people believe but haven’t said out loud,” says neuroscientist Anthony Zador of Cold Spring Harbor Laboratory in New York. “Their point is that it’s not clear that the current methods would ever allow us to understand how the brain computes in [a] fundamental way,” he says. “And I don’t necessarily disagree.”

Differences and similarities
Critics, however, contend that the analogy of the brain as a computer is flawed. Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, Calif., for instance, calls the comparison “provocative, but misleading.” The brain and the microprocessor are distinct in a huge number of ways. The brain can behave differently in different situations, a variability that adds an element of randomness to its machinations; computers aim to serve up the same response to the same situation every time. And compared with a microprocessor, the brain has an incredible amount of redundancy, with multiple circuits able to step in and compensate when others malfunction.

In microprocessors, the software is distinct from the hardware — any number of programs can run on the same machine. “This is not the case in the brain, where the software is the hardware,” Sejnowski says. And this hardware changes from minute to minute. Unlike the microprocessor’s connections, brain circuits morph every time you learn something new. Synapses grow and connect nerve cells, storing new knowledge.

Brains and microprocessors have very different origins, Sejnowski points out. The human brain has been sculpted over millions of years of evolution to be incredibly specialized, able to spot an angry face at a glance, for instance, or remember a childhood song for years. The 6502, which debuted in 1975, was designed by a small team of humans, who engineered the chip to their exact specifications. The methods for understanding one shouldn’t be expected to work for the other, Sejnowski says.

Yet there are some undeniable similarities. Brains and microprocessors are both built from many small units: 86 billion neurons and 3,510 transistors, respectively. These units can be organized into specialized modules that allow both “organs” to flexibly move information around and hold memories. Those shared traits make the 6502 a legitimate and informative model organism, Jonas and Kording argue.
In one experiment, they tested what would happen if they tried to break the 6502 bit by bit. Using a simulation to run their experiments, the researchers systematically knocked out every single transistor one at a time. They wanted to know which transistors were mission-critical to three important “behaviors”: Donkey Kong, Space Invaders and Pitfall. The effort was akin to what neuroscientists call “lesion studies,” which probe how the brain behaves when a certain area is damaged.

The experiment netted 1,565 transistors that could be eliminated without any consequences to the games. But other transistors proved essential. Losing any one of 1,560 transistors made it impossible for the microprocessor to load any of the games.
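
The lesion sweep itself is conceptually simple. Below is a minimal Python sketch of the knock-out loop, assuming a hypothetical simulate_chip function standing in for the transistor-level simulator the researchers used; it illustrates the procedure, not their actual code.

```python
# Minimal sketch of a transistor "lesion" sweep in the spirit of Jonas and
# Kording's experiment. `simulate_chip` is a stand-in for a transistor-level
# 6502 simulator; the game names are just labels.

GAMES = ["donkey_kong", "space_invaders", "pitfall"]
NUM_TRANSISTORS = 3510  # transistor count of the MOS 6502

def simulate_chip(game, disabled_transistor=None):
    """Placeholder: boot `game` with one transistor forced off.

    Should return True if the game still loads and runs.
    """
    raise NotImplementedError("requires a transistor-level 6502 simulator")

def lesion_sweep():
    """Knock out each transistor in turn and record which games it breaks."""
    essential = {game: [] for game in GAMES}
    for t in range(NUM_TRANSISTORS):
        for game in GAMES:
            if not simulate_chip(game, disabled_transistor=t):
                essential[game].append(t)  # losing transistor t breaks this game
    return essential
```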

Big gap
Those results are hard to parse into something meaningful. Experiments of this type, just like those in human and animal brains, are informative in some ways. But they don’t constitute understanding, Jonas argues. The gulf between knowing that a particular broken transistor can stymie a game and actually understanding how that transistor helps compute is “incredibly vast,” he says.

The transistor “lesion” experiment “gets at the core problem that we are struggling with in neuroscience,” Zador says. “Although we can attribute different brain functions to different brain areas, we don’t actually understand how the brain computes.”
Other experiments reported in the study turned up red herrings — results that looked similar to potentially useful brain data, but were ultimately meaningless. Jonas and Kording looked at the average activity of groups of nearby transistors to assess patterns about how the microprocessor works. Neuroscientists do something similar when they analyze electrical patterns of groups of neurons. In this task, the microprocessor delivered some good-looking data. Oscillations of activity rippled over the microprocessor in patterns that seemed similar to those of the brain. Unfortunately, those signals are irrelevant to how the computer chip actually operates.
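
For a rough sense of what that analysis looks like, one can average the switching activity of transistors grouped by where they sit on the die, much as neuroscientists average the signals of nearby neurons. The sketch below is illustrative only; the activity and positions arrays are assumed outputs of a chip simulator.

```python
import numpy as np

# Sketch of a "local field potential"-style analysis on simulated transistor
# activity. `activity` (transistors x timesteps, 0/1 switching states) and
# `positions` (x, y coordinates on the die) are assumed simulator outputs.

def regional_averages(activity, positions, grid=5):
    """Average the activity of transistors falling in each cell of a grid laid
    over the chip, per timestep, a crude analogue of averaging the electrical
    signals of nearby neurons."""
    x, y = positions[:, 0], positions[:, 1]
    x_bins = np.digitize(x, np.linspace(x.min(), x.max(), grid + 1)[1:-1])
    y_bins = np.digitize(y, np.linspace(y.min(), y.max(), grid + 1)[1:-1])
    averages = {}
    for i in range(grid):
        for j in range(grid):
            mask = (x_bins == i) & (y_bins == j)
            if mask.any():
                averages[(i, j)] = activity[mask].mean(axis=0)
    return averages  # dict of per-region activity traces over time
```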

Data from other experiments revealed a few finds, including that the microprocessor contains a clock signal and that it switches between reading and writing memory. Yet these are not key insights into how the chip actually handles information, Jonas and Kording write in their paper.

It’s not that analogous experiments on the brain are useless, Jonas says. But he hopes that these examples reveal how big of a challenge it will be to move from experimental results to a true understanding. “We really need to be honest about what we’re going to pull out here.”
Jonas says the results should caution against collecting big datasets in the absence of theories that can help guide experiments and that can be verified or refuted. For the microprocessor, the researchers had a lot of data, yet still couldn’t separate the informative wheat from the distracting chaff. The results “suggest that we need to try and push a little bit more toward testable theories,” he says.

That’s not to say that big datasets are useless, he is quick to point out. Zador agrees. Some giant collections of neural information will probably turn out to be wastes of time. But “the right dataset will be useful,” he says. And the right bit of data might hold the key that propels neuroscientists forward.

Despite the pessimistic overtones in the paper, Christof Koch of the Allen Institute for Brain Science in Seattle is a fan. “You got to love it,” Koch says. At its heart, the experiment on the 6502 “sends a good message of humility,” he adds. “It will take a lot of hard work by a lot of very clever people for many years to understand the brain.” But he says that tenacity, especially in the face of such a formidable challenge, will eventually lead to clarity.

Zador recently opened a fortune cookie that read, “If the brain were so simple that we could understand it, we would be so simple that we couldn’t.” That quote, from IBM researcher Emerson Pugh, throws down the challenge, Zador says. “The alternative is that we will never understand it,” he says. “I just can’t believe that.”

Tiny structures give a peacock spider its radiant rump

Male peacock spiders know how to work their angles and find their light.

The arachnids, native to Australia, raise their derriere — or, more accurately, a flap on their hind end — skyward and shake it to attract females. Hairlike scales cover their bodies and produce the vibrant colorations that make peacock spiders so striking.

Doekele Stavenga of the University of Groningen in the Netherlands and his colleagues collected Maratus splendens peacock spiders from a park outside Sydney and zoomed in on those scales.
Using microscopy, spectrometry and other techniques, the team found that the spiders’ red, yellow and cream scales rely on two pigments, 3-OH-kynurenine and xanthommatin, to reflect their colors. Even white scales contain low levels of pigment. Spines lining these scales scatter light randomly, giving them slightly different hues from different angles.
Blue scales are an entirely different story. They’re transparent and pigment-free. Instead, the scales’ architecture reflects iridescent blue and purple hues. Each peapodlike scale is lined with tiny ridges on the outside and a layer of threadlike fibers on the inside. Fiber spacing may determine whether scales appear more blue or more purple.

Whether peacock spiders’ eyes can actually see these posterior patterns is an open question, Stavenga and his colleagues write in the August Journal of the Royal Society Interface. Given that other jumping spiders see at least three color ranges, it seems unlikely that such vivid come-hither choreography plays out in black and white.

Staph infections still a concern

New hope for control of staph infections

Staphylococcal infections — especially rampant in hospitals and responsible for … some fatal disorders — may be virtually stamped out. Researchers … have extracted teichoic acid from the bacteria’s cell wall and used it to protect groups of mice from subsequent massive doses of virulent staph organisms. — Science News, October 29, 1966

UPDATE
Staphylococcus aureus has not been conquered. As antibiotic resistance grows, the pressure is on to find ways to stop the deadly microbe. A vaccine that targets S. aureus’ various routes of infection is being tested in patients having back surgery. Ideally, doctors would use the vaccine to protect hospital patients and people with weakened immune systems. This vaccine is the furthest along of several in development. Meanwhile, a natural antibiotic recently found in human noses may lead to drugs that target antibiotic-resistant staph (SN: 8/20/16, p. 7).

For robots, artificial intelligence gets physical

In a high-ceilinged laboratory at Children’s National Health System in Washington, D.C., a gleaming white robot stitches up pig intestines.

The thin pink tissue dangles like a deflated balloon from a sturdy plastic loop. Two bulky cameras watch from above as the bot weaves green thread in and out, slowly sewing together two sections. Like an experienced human surgeon, the robot places each suture deftly, precisely — and with intelligence.

Or something close to it.
For robots, artificial intelligence means more than just “brains.” Sure, computers can learn how to recognize faces or beat humans in strategy games. But the body matters too. In humans, eyes and ears and skin pick up cues from the environment, like the glow of a campfire or the patter of falling raindrops. People use these cues to take action: to dodge a wayward spark or huddle close under an umbrella.

Part of intelligence is “walking around and picking things up and opening doors and stuff,” says Cornell computer scientist Bart Selman. It “has to do with our perception and our physical being.” For machines to function fully on their own, without humans calling the shots, getting physical is essential. Today’s robots aren’t there yet — not even close — but amping up the senses could change that.

“If we’re going to have robots in the world, in our home, interacting with us and exploring the environment, they absolutely have to have sensing,” says Stanford roboticist Mark Cutkosky. He and a group of like-minded scientists are making sensors for robotic feet and fingers and skin — and are even helping robots learn how to use their bodies, like babies first grasping how to squeeze a parent’s finger.

The goal is to build robots that can make decisions based on what they’re sensing around them — robots that can gauge the force needed to push open a door or figure out how to step carefully on a slick sidewalk. Eventually, such robots could work like humans, perhaps even caring for the elderly.
Such machines of the future are a far cry from that shiny white surgery robot in the D.C. lab, essentially an arm atop a cart. But today’s fledgling sensing robots mark the slow awakening of machines to the world around them, and themselves.

“By adding just a little bit of awareness to the machine,” says pediatric surgeon Peter Kim of the children’s hospital, “there’s a huge amount of benefit to gain.”

Born to run
The pint-size machine running around Stanford’s campus doesn’t look especially self-aware.
It’s a rugged sort of robot, with stacked circuit boards and bundles of colorful wires loaded on its back. It scampers over grass, gravel, asphalt — any surface roboticist Alice Wu can find.

For weeks this summer, Wu took the traveling bot outside, placed it on the ground, and then, “I let her run,” she says. The bot isn’t that fast (its top speed is about half a meter per second), and it doesn’t go far, but Wu is trying to give it something special: a sense of touch. Wu calls the bot SAIL-R, for Sensorized Adaptive Intelligence Legged Robot.

Fixed to each of its six C-shaped legs are tactile sensors that can tell how hard the robot hits the ground. Most robots don’t have tactile sensing on their feet, Wu says. “When I first got into this, I thought that was crazy. So much effort is focused on hands and arms.” But feet make contact with the world too.

Feeling the ground, in fact, is crucial for walking. Most people tailor their gait to different surfaces without even thinking, feet pounding the ground on a run over grass, or slowing down on a street glazed with ice. Wu wants to make robots that, like humans, sense the surface they’re on and adjust their walk accordingly.

Walking robots have already ventured out into the world: Last year, a competition sponsored by DARPA, the Department of Defense agency that funds advanced research, showcased a lineup of semiautonomous robots that walked over rubble and even climbed stairs (SN: 12/13/14, p. 16). But they didn’t do it on their own; hidden away in control rooms, human operators pulled the strings.

One day, Wu says, machines could feel the ground and learn for themselves the most efficient way to walk. But that’s a tall order. For one, researchers can’t simply glue the delicate sensors designed for a robot’s hands onto its feet. “The feet are literally whacking the sensor against the ground very, very hard,” Wu says. “It’s unforgiving contact.”

That’s the challenge with tactile sensing in general, says Cutkosky, Wu’s adviser at Stanford. Scientists have to build sensors that are tough, that can survive impact and abrasion and bending and water. It’s one reason physical intelligence has advanced so slowly, he says.

“You can’t just feed a supercomputer thousands of training examples,” Cutkosky says, the way AlphaGo learned how to play Go (SN Online: 3/15/16). “You actually have to build things that interact with the world.”
Cutkosky would know. His lab is famous for building such machines: tiny “microTugs” that can team up, antlike, to pull a car, and a gecko-inspired “Stickybot” that climbs walls. Tactile sensing could make these and other robots smarter.

Wu and colleagues presented a new sensor at IROS 2015, a meeting on intelligent robots and systems in Hamburg, Germany. The sensor, a sandwich of rubber and circuit boards, can measure adhesion forces — what a climbing robot uses to stick to walls. Theoretically, such a device could tell a bot if its feet were slipping so it could adjust its grip to hang on. And because the postage stamp–sized sensor is tough, it might actually survive life on little robot feet.

Wu has used a similar sort of sensor on an indoor, two-legged bot, the predecessor to the six-legged SAIL-R. The indoor bot can successfully distinguish between hard, slippery, grassy and sandy surfaces more than 90 percent of the time, Wu reported in IEEE Robotics and Automation Letters in July.

That could be enough to keep a bot from falling. On a patch of ice, for example, “it would say, ‘Uh-oh, this feels kind of slippery. I need to slow down to a walk,’ ” Wu says.

Ideally, Cutkosky says, robots should be covered with tactile sensors — just like human skin. But scientists are still figuring out how a machine would deal with the resulting deluge of information.

Smart skin
Even someone sitting (nearly) motionless at a desk in a quiet, temperature-controlled office is bombarded with information from the senses.

Fluorescent lights flutter, air conditioning units hum and the tactile signals are too numerous to count. Fingertips touch computer keys, feet press the floor, forearms rest on the desk. If people couldn’t tune out some of the “noise” picked up by their skin, it would be total sensory overload.

“You have millions of tactile sensors, but you don’t sit there and say, ‘OK, what’s going on with my millions of tactile sensors,’ ” says Nikolaus Correll, a roboticist at the University of Colorado Boulder. Rather, the brain gets a filtered message, more of a big-picture view.

That simplified strategy may be a winner for robotic skin, too. Instead of sending every last bit of sensing data to a centralized robotic brain, the skin should do some of the computing itself, says Correll, who made the case for such “smart” materials in Science in 2015.

“When something interesting happens, [the skin] could report to the brain,” Correll says. Like human skin, artificial skin could take all the vibration info received from a nudge, or a tap to the shoulder, and translate it into a simpler message for the brain: “The skin could say, ‘I was tapped or rubbed or patted at this position,’ ” he says. That way, the robot’s brain doesn’t have to constantly process a flood of vibration data from the skin’s sensors.
It’s called distributed information processing. Correll and Colorado colleague Dana Hughes tested the idea with a stretchy square of rubbery skin mounted on the back of an industrial robot named Baxter. Throughout the skin, they placed 10 vibration sensors paired with 10 tiny computers. Then the team trained the computers to recognize different textures by rubbing patches of cotton, cardboard, sandpaper and other materials on the skin.

Their sensor/computer duo was able to distinguish between 15 textures about 70 percent of the time, Hughes and Correll reported in Bioinspiration & Biomimetics in 2015. And that’s with no centralized “brain” at all. That kind of touch discrimination brings the robotic skin a step closer to human skin. Making robotic parts with such sensing abilities “will make it much easier to build a dexterous, capable robot,” Correll says.
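
To make the idea concrete, here is a toy Python sketch of a skin patch that digests its own vibration data and passes along only a short message, so no central processor ever sees the raw stream. The threshold, feature and labels are invented for illustration and are not from the Colorado experiments.

```python
import numpy as np

# Toy sketch of distributed processing in a "smart skin": each patch node
# summarizes its own vibration signal and reports only high-level events.
# Thresholds and class labels are illustrative, not from the Colorado study.

class SkinPatchNode:
    def __init__(self, patch_id, event_threshold=0.2):
        self.patch_id = patch_id
        self.event_threshold = event_threshold

    def process(self, vibration_window):
        """Reduce a raw vibration window to a short event message (or None)."""
        amplitude = np.abs(vibration_window).mean()
        if amplitude < self.event_threshold:
            return None  # nothing interesting; stay silent
        spectrum = np.abs(np.fft.rfft(vibration_window))
        dominant_bin = int(np.argmax(spectrum[1:])) + 1  # skip the DC term
        label = "tap" if dominant_bin < 5 else "rub"
        return {"patch": self.patch_id, "event": label, "amplitude": amplitude}

# The central "brain" only ever receives the occasional event dictionary,
# never the raw samples streaming from every sensor.
```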

And with smart skin, robots could invest more brainpower in the big stuff, what humans begin learning at birth — how to use their own bodies.

Zip it
In UCLA’s Biomechatronics Lab, a green-fingered robot just figured out how to use its body for one seemingly simple task: closing a plastic bag.

Two deformable finger pads pinch the blue seal with steady pressure (the enclosed Cheerios barely tremble) as the robot slides its hand slowly along the plastic zipper. After about two minutes, the fingers reach the end, closing the bag. It’s deceptively difficult. The bag’s shape changes as it’s manipulated — tough for robotic fingers to grasp. It’s also transparent — not easily detectable by computer vision.
You can’t just tell the robot to move its fingertips horizontally along the zipper, says Veronica Santos, a roboticist at UCLA. She and colleague Randall Hellman, a mechanical engineer, tried that. It’s too hard to predict how the bag will bend and flex. “It’s a constant moving target,” Santos says.

So the researchers let the robot learn how to close the bag itself.

First they had the bot randomly move its fingers along the zipper, while collecting data from sensors in the fingertips — how the skin deforms, what vibrations it picks up, how fluid pressure in the fingertips changes. Santos and Hellman also taught the robot where the zipper was in relation to the finger pads. The sweet spot is smack dab in the middle, Santos says.

Then the team used a type of algorithm called reinforcement learning to teach the robot how to close the bag. “This is the exciting part,” Santos says. The program gives the robot “points” for keeping the zipper in the fingers’ sweet spot while moving along the bag.

“If good stuff happens, it gets rewarded,” Santos says. When the bot holds the zipper near the center of the finger pads, she explains, “it says, ‘Hey, I get points for that, so those are good things to do.’ ”
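
The reward idea translates into a few lines of code. Below is a generic reinforcement-learning sketch, awarding points for forward progress while the zipper stays near the center of the finger pads, paired with a standard Q-learning update; it illustrates the kind of approach Santos describes, not the UCLA team's actual algorithm.

```python
import numpy as np

# Generic reinforcement-learning sketch: reward keeps the zipper near the
# fingertips' "sweet spot" while the hand advances along the bag. States,
# actions and parameters here are illustrative placeholders.

def reward(zipper_offset_mm, forward_progress, center_tolerance=2.0):
    """Positive reward for progress made with the zipper near the sweet spot."""
    in_sweet_spot = abs(zipper_offset_mm) < center_tolerance
    return forward_progress + (1.0 if in_sweet_spot else -1.0)

def q_update(q_table, state, action, r, next_state, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update on a (state x action) table."""
    best_next = np.max(q_table[next_state])
    q_table[state, action] += alpha * (r + gamma * best_next - q_table[state, action])
```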

She and Hellman reported successful bag closing in April at the IEEE Haptics Symposium in Philadelphia. “The robot actually learned!” Santos says. And in a way that would have been hard to program.

It’s like teaching someone how to swing a tennis racket, she says. “I can tell you what you’re supposed to do, and I can tell you what it might feel like.” But to smash a ball across a net, “you’re going to have to do it and feel it yourself.”

Learning by doing may be the way to get robots to tackle all sorts of complicated tasks, or simple tasks in complicated situations. The crux is embodiment, Santos says, or the robot’s awareness that each of its actions brings an ever-shifting kaleidoscope of sensations.
Smooth operator
Awareness of the sights of surgery, and what to make of them, is instrumental for a human or machine trying to stitch up soft tissue.

Skin, muscle and organs are difficult to work with, says Kim, the surgeon at Children’s National Health System. “You’re trying to operate on shiny, glistening, blood-covered tissues,” he says. “They’re different shades of pink and they’re moving around all the time.”

Surgeons adjust their actions in response to what they see: a twisting bit of tissue, for example, or a spurt of fluid. Machines typically can’t gauge their location amid slippery organs or act fast when soft tissues tear. Robots needed an easier place to start. So, in 1992, surgery bots began working on bones: rigid material that tends to stay in one place.

In 2000, the U.S. Food and Drug Administration approved the first surgery robot for soft tissue: the da Vinci Surgical System, which looks like a prehistoric version of Kim’s surgery machine. Da Vinci is about as wide as a king-sized mattress and reaches 6 feet tall in places, with three mechanical arms tipped with disposable tools. Nearby, a bulky gray cart holds two silver hand controls for human surgeons.

In the cart’s backless seat, a surgeon would lean forward into a partially enclosed pod, hands gripping controls, feet working pipe organ–like pedals. To move da Vinci’s surgical tools, the surgeon would manipulate the controls, like those claw cranes kids use to pick up stuffed animals at arcades. “It’s what we call master/slave,” Kim says. “Essentially, the robot does exactly what the surgeon does.”

Da Vinci can manipulate tiny tools and keep incisions small, but it’s basically a power tool. “It has no awareness,” Kim says, “no intelligence.” The visual inputs of surgery are processed by human brains, not a computer.
Kim’s robot is a more enlightened beast. Named STAR, for Smart Tissue Autonomous Robot, the bot has preprogrammed surgical knowledge and hefty cameras that let it see and react to the environment. Recently, STAR stitched up soft tissue in a living animal — a first for a machine. The bot even outperformed human surgeons on some measures, Kim and colleagues reported in May in Science Translational Medicine.

Severed pig intestines sewed up in the lab by STAR tended to leak less than did intestines fixed by humans using da Vinci, laparoscopic tools or sewing by hand. When researchers held the intestines under water and inflated them with air, it took nearly double the pressure for the STAR-repaired tissue to spring a leak compared with intestines patched up by humans.

Kim credits STAR’s even stitches for the win. “It’s more consistent,” he says. “That’s the secret sauce.”

To keep track of its position on tissue, STAR uses near-infrared fluorescent imaging (like night vision goggles) to follow glowing dots marked by a person. To orient itself in space, STAR uses a 3-D camera with multiple lenses.

Then the robot taps into its surgical knowledge to figure out where to place a stitch. In the experiment reported in May, humans were still in the loop: STAR would await an OK before firing a stitch in a tricky spot, and an assistant helped keep the thread from tangling (a task commonly required in human-led surgeries too). Soon, STAR may be more self-sufficient. In late November, Kim plans to test a version of his machine with two robotic arms to replace the human assistant; he would also like to give STAR a few more superhuman senses, like gauging blood flow and detecting subsurface structures, much as a submarine pings an underwater shipwreck.

One day, Kim says, such technology could essentially put a world-class surgeon in every hospital, “available anyplace, anytime.”

Santos sees a future, 10 to 20 years from now perhaps, where humans and robots collaborate seamlessly — more like coworkers than master and slave. Robots will need all of their senses to take part, she says. They might not be the artificially intelligent androids of the movies, like Ex Machina’s cunning humanoid Ava. But like humans, intelligent, autonomous machines will have to learn the limits and capabilities of their bodies. They’ll have to learn how to move through the world on their own.

Questions remain about the benefits of taking testosterone

As a treatment for the ailments of aging, testosterone’s benefits are hit or miss.

For men with low testosterone, the hormone therapy is helpful for some health problems, but not so much for others, researchers report in five papers published February 21 in JAMA and JAMA Internal Medicine. Testosterone therapy was good for the bones, but didn’t help memory. It remedied anemia and was linked to a lower risk of heart attack and stroke. But treatment also upped the amount of plaque in the arteries, an early indicator of heart attack risk, researchers report.
“It’s a very confusing area,” says Caleb Alexander, a prescription drug researcher at Johns Hopkins Bloomberg School of Public Health, who was not involved with the work. “Testosterone very well may help men feel more energized,” he says. “But the real question is: At what cost?”

As men age, their testosterone levels tend to drop. Researchers have suggested that boosting the levels back up to normal might counter some signs of aging, including memory loss and weakened bones. But the risks of such treatment — especially the cardiovascular risks — remain unclear, Alexander says. Dozens of studies have tackled the question, but the results “point in lots of different directions,” he says.

Despite lack of clarity on testosterone therapy’s safety and benefits, the number of men taking the hormone has soared in recent years. One 2014 analysis estimated that 2.2 million men filled testosterone prescriptions in 2013 compared with 1.2 million men in 2010. That includes many men with testosterone levels on the borderline between low and normal, men who don’t actually meet clinical guidelines for treatment, Alexander says.
The new studies attempted to answer some of the long-standing questions about the pros and cons of treatment. Four present findings from a set of clinical trials known as the Testosterone Trials, designed to evaluate the effects of testosterone therapy in men age 65 or older.
One study found that the density and strength of hip and especially spine bones improved after a year of using a daily dose of testosterone gel. Researchers don’t yet know whether these gains will translate to fewer fractures. Daily testosterone gel treatment over a year also helped men recover from anemia, raising levels of hemoglobin, an oxygen-carrying molecule in the blood, a second study showed. But testosterone gel didn’t seem to have an effect on men’s memory and cognition. In a study of 788 men, those who took the hormone performed about as well on memory and other tests as those who got a placebo.

Two studies attempted to untangle how exactly testosterone treatment affects the heart and blood vessels. One study, part of the Testosterone Trials, linked testosterone treatment with more plaque buildup in the vessels that carry oxygen-rich blood to the heart. That sounds ominous. Too much plaque can block blood flow and cripple the heart. But the second study didn’t find more heart attacks, strokes or other cardiovascular problems in men taking the hormone. In that study, researchers examined medical records of more than 44,000 men, around 8,800 of whom had been given a prescription for testosterone treatment. Over a roughly three-year follow-up period, these men actually had a lower risk of cardiovascular issues than men who hadn’t been given a testosterone prescription, researchers report.

The new work does “little overall to clarify the role of testosterone replacement” for cardiovascular risk and cognitive function, says Dimitri Cassimatis, a cardiologist at Emory University School of Medicine in Atlanta. But taken together, he says, the studies strengthen the evidence for testosterone’s benefits on bone density and anemia.

Lungs enlist immune cells to fight infections in capillaries

Immune cells in the lungs provide a rapid counterattack to bloodstream infections, a new study in mice finds. This surprising discovery pegs the lungs as a major pillar in the body’s defense during these dangerous infections, the researchers say.

“No one would have guessed the lung would provide such an immediate and strong host defense system,” says Bryan Yipp, an immunologist at the University of Calgary in Canada. Yipp and his colleagues report their findings online April 28 in Science Immunology.
The work may offer ways to target and adjust our own immune defense system for infections, says Yipp. “Currently, we only try to kill the bacteria, but we are running out of antibiotics because of resistance.”

The research uncovers some of the mechanisms that drive the rapid activation of neutrophils, says immunologist Andrew Gelman of Washington University School of Medicine in St. Louis. “This is critical in removing bacteria from sequestered spaces in the lung,” he says.

Generally, clearing bacteria out of the bloodstream falls to macrophages that reside in the liver and the spleen. But macrophages aren’t found in vessels of the lungs. So the lungs’ blood vessel network gives pathogens a place to hide and escape the body’s usual removal efforts.

In mammals, neutrophils hang out in the lungs’ bloodstream more than in blood vessels that wind through other tissue. Past research has indicated that a dearth of neutrophils puts an individual at risk for a bloodstream infection. It wasn’t clear, though, what defensive role they were playing in the blood, says Yipp.
Yipp and his team used confocal pulmonary intravital microscopy to view how cells and pathogens behave in living mouse lungs. The researchers found that about a third of the neutrophils seen in the mouse lung sample were crawling about three to four cell lengths along the walls of the capillaries, the smallest blood vessels in the lungs. When the researchers added LPS, a molecule found in the cell wall of gram-negative bacteria like Escherichia coli, almost all of the neutrophils began crawling, and they traveled more than twice as far as before.
One of the proteins needed for neutrophil crawling, CD11b, allows the immune cells to move along as though on a tank tread, says Yipp. After LPS was added, the neutrophils in a mouse lung deficient in CD11b crawled about a third of the distance as the neutrophils in a normal mouse lung.

In another experiment, the researchers injected fluorescent E. coli into a mouse. Within 10 minutes of the bacteria reaching the lungs, the neutrophils started to crawl toward the bacteria and gobble them up, a process called phagocytosis. The neutrophils captured the majority of the bacteria within an hour of injection, says Yipp.

Gelman says it’s now clear that neutrophils are good at removing bacteria from the lungs’ capillaries. The next step is to “show the actual biological impact” — how this system controls a bacterial infection and improves survival, he says.

For patients with depressed immune systems, the finding may eventually provide a way to battle bloodstream infections, Yipp says. But, he notes, “with all infections, part of the concern is that the inflammatory response can become too exuberant, which leads to tissue damage, as in septic shock,” and lung failure. In those cases, learning how to dampen the lungs’ immune response could help, Yipp says.

Blennies have a lot of fang for such little fishes

After a recent flurry of news that fang blennies mix an opioid in their venom, a question lingers: What do they need with fangs anyway? Most eat wimpy stuff that hardly justifies whopper canines.

Not that fang blennies are meek fishes.

“When they bite, they bite savagely,” says Bryan Fry of the University of Queensland in Brisbane, Australia. “If these little jobbies were 3 meters long, we’d be having to cage dive with them.” Real-world blennies, however, grow to only about the size of a cocktail sausage.
These little beasts probably got their big teeth before evolving venom, says Nicholas Casewell of the Liverpool School of Tropical Medicine in England. That’s jusssst backward, snakes might say, as they evolved their venom first. Yet when Casewell, Fry and colleagues put together an evolutionary family tree for the blennies, the one genus with both fangs and venom branched off amid four genera that are all fang and no toxins, the team reports in the April 24 Current Biology.

Those with venom aren’t that scary to humans. Fish venoms tend to cause excruciating pain, says Fry, who adds from personal experience that “sting” sounds deceptively benign for what a stingray delivers (SN: 4/29/17, p. 28). A venomous fang blenny has yet to nail him, but he hears that others have felt little more than a toothy nip.
“There’s no real reason for most of these fish to have fangs to help them feed,” Casewell says. Many prey on small invertebrates or even floating plankton, which is about as hard to subdue as chicken soup.
The fangs, however, are useful for fending off predators, Casewell suggests. Blennies have no spiky fins or spines, the more usual defensive weapons in fish. Male-versus-male competition may have been another force for fang evolution; males stab each other during breeding season.

When fangs evolved, whatever the reason, they became a useful conduit for venom, Casewell and Fry propose. Once some blennies evolved venom, “all these crazy selection pressures started coming in,” Fry says. Forces of natural selection nudged nonvenomous fang blennies toward colors and stripes similar enough to those of their venomous cousins to discourage attacks from an educated predator.

The mimics take advantage, often brazenly swimming up to bigger fish to bite off some scales and mucus for a snack. “These fish are little jerks,” Fry says. “They should be called jerk blennies.”

North America’s largest recorded earthquake helped confirm plate tectonics

In the early evening of March 27, 1964, a magnitude 9.2 earthquake roiled Alaska. For nearly five minutes, the ground shuddered violently in what was, and still is, the second biggest temblor in recorded history.

Across the southern part of the state, land cracked and split, lifting some areas nearly 12 meters — about as high as a telephone pole — in an instant. Deep, house-swallowing maws opened up. Near the coast, ground turned jellylike and slid into bays, dooming almost everyone standing on it. Local tsunamis swamped towns and villages.
Not many people lived in the newly formed state at the time. If the quake had struck in a more developed place, the damage and death toll would have been far greater. As it was, more than 130 people were killed.

In The Great Quake, Henry Fountain, a science journalist at the New York Times, tells a vivid tale of this natural drama through the eyes of the people who experienced the earthquake and the scientist who unearthed its secrets. The result is an engrossing story of ruin and revelation — one that ultimately shows how the 1964 quake provided some of the earliest supporting evidence for the theory of plate tectonics, then a disputed idea.

Using details from his own interviews with survivors — along with newspaper articles, diaries and other published accounts — Fountain focuses his story on two places near Prince William Sound. More people died in the port of Valdez (a familiar name because of the 1989 Exxon Valdez oil spill) than in any other Alaskan community, while the small village of Chenega suffered the highest proportional loss of life. Fountain’s tracking of the myriad small decisions people made that fateful day — that either put them in harm’s way or kept them safe — is meticulous. The experiences of the survivors and the lost are haunting.

Interwoven with stories of the human tragedy is Fountain’s account of the painstaking scientific gumshoe work necessary to piece together how such a monster earthquake had occurred. That’s where George Plafker, a geologist with the U.S. Geological Survey, comes in. In surveying the quake’s aftermath, Plafker, along with others, noticed something strange: There was no surface evidence of a fault large enough to explain the colossal shaking or the widespread uplift and sinking of land over hundreds of thousands of square kilometers.

Today, scientists know that Earth’s outer layer is divided into giant pieces and that the motion of tectonic plates — as they bump together or slide past each other — helps explain how some earthquakes occur. But in the mid-1960s, plate tectonics was just a hypothesis in need of real-world validation.
Plafker’s crucial contribution was to realize that the powerful Alaskan quake had no surface fault because it took place at what is now known as a subduction zone, where dense oceanic crust sinks under lighter continental crust. The insight into the quake’s origin provided some of the first real proof of tectonic plate movements.

Throughout the book, Fountain weaves in brief histories of key people and ideas in the development of the theory of plate tectonics. For those familiar with the history, Fountain doesn’t offer much new. People less familiar may find it a little difficult to keep one geologist straight from another geophysicist.

But The Great Quake is an elegant showcase of how the progressive work of numerous scientists over time — all the while questioning, debating, changing their minds — can be pieced together into an idea that reshapes how we see and understand the planet.

Help for postpartum mood disorders can be hard to come by

Words can’t describe the pandemonium that follows a child’s birth, but I’ll try anyway. After my first daughter was born, I felt like a giant had picked up my life, shaken it hard, martini-style, and returned it to the ground. The familiar objects in my life were all still there, but nothing seemed to be the same.

The day we came home from the hospital as a family of three, my husband and I plunged headfirst into profound elation and profound exhaustion, often changing by the minute. We worried. We snipped at each other. We marveled at this new, beautiful person. The experience, as new parents the world over know, was intense.

The first week home, my body took a bruising. I was recovering from the wildness that is childbirth. I was insanely thirsty and hungry. I was struggling to both breastfeed and pump every two hours, in an effort to boost my milk supply. And against this backdrop, my levels of estrogen and progesterone, after climbing to great heights during pregnancy, had fallen off a cliff.

Massive reconfigurations were taking place, both in life and in my body. And at times, I felt like the whole thing could go south at any point. After talking to other new mothers, I now realize that almost everyone has a version of this same story. Childbirth and caring for a newborn is really, really hard, in many different ways.

That fraught time is prime territory for postpartum mood disorders such as depression and anxiety. Unsurprisingly, most women experience mood disturbances in the aftermath of having a baby. For the majority, symptoms are mild and ease up with time. But for an estimated 10 to 15 percent of women in industrialized countries and 20 to 40 percent of women in developing countries, symptoms of depression will be troublesome and persistent. And these estimates account for only depression — not anxiety, OCD or other disorders postpartum women sometimes experience.

As a clinical psychologist, Betty-Shannon Prevatt of North Carolina State University in Raleigh saw firsthand how hard the transition to motherhood was for many women. She set out to study why women with postpartum mood disorders often don’t get the help they need.

Along with her colleague Sarah Desmarais, Prevatt surveyed 211 women who had given birth in the previous three years. The researchers asked the women about potential symptoms of mood disorders, whether they had received treatment and, if not, factors that may have kept them from doing so.

I found the results, published August 1 in Maternal and Child Health Journal, shocking. At the time of the survey, 51 percent of the women felt they currently met the criteria for a postpartum mood disorder. That self-report isn’t the same as a diagnosis from a doctor, nor is it indicative of the rate among women overall. But still, the number is high. “I was absolutely surprised,” Prevatt says. The number was especially notable because these are women who would presumably have a good shot at getting help — they are primarily white, married, well-educated and middle class.

The follow-up number is even more worrisome: Twenty percent of the women who self-reported that they were struggling didn’t report their struggles to their health care provider. The two biggest roadblocks to getting help were time constraints (no shock there) and stigma.
A new mother can have trouble finding time to take a shower, let alone to make a doctor’s appointment, call insurance companies, find someone to watch the baby and all the other tasks that go into seeking help. Paid maternity leave policies might help alleviate some of this pressure for women who need to go back to work quickly, the authors write. Strong social support can help, too.

Overcoming stigma is another huge challenge. “Women fear judgment that they are not a good mother … and often feel embarrassed,” Prevatt says.

That has to change. Women ought to be able to seek the help they need without fear or shame. There’s a push among some providers to use a universal screening tool, to ask every postpartum woman about her mental health. But these new results hint that even a universal screen wouldn’t catch women who are ashamed of their illness. For providers to better catch that population, women need to know that they’re experiencing something that’s quite common, and often treatable. “The more we can normalize the wide range of emotions that follow childbirth, the easier it will be for women to disclose how they are truly feeling,” Prevatt says.

The postpartum time can be grueling, even for people lucky enough to not have to deal with a mood disorder. The best we can do is to try and take care of new moms who are giving their all to take care of their baby.

Mystery void is discovered in the Great Pyramid of Giza

High-energy particles from outer space have helped uncover an enigmatic void deep inside the Great Pyramid of Giza.

Using high-tech devices typically reserved for particle physics experiments, researchers peered through the thick stone of the largest pyramid in Egypt for traces of cosmic rays and spotted a previously unknown empty space. The mysterious cavity is the first major structure discovered inside the roughly 4,500-year-old Great Pyramid since the 19th century, researchers report online November 2 in Nature.
“It’s a significant discovery,” says Peter Der Manuelian, an Egyptologist at Harvard University not involved in the work, “although precisely what it means is unclear.”

The open space may comprise one or more rooms or corridors, but the particle-detector images reveal only the rough size of the void, not the details of its design. Eventually, though, this detail of the Great Pyramid’s architecture could offer new insights into one of the world’s largest, oldest and most famous monuments. The only one of the ancient Seven Wonders of the World that’s still standing, the Great Pyramid was built as a burial tomb for Pharaoh Khufu.
“Imagine you’re an archaeologist and you walk into this room no one has walked in for [over] 4,000 years,” says Nural Akchurin, a physicist at Texas Tech University in Lubbock who wasn’t involved in the study. “That’s huge. It’s incredible.”
Researchers probed the Great Pyramid’s interior with devices that sense muons — by-products of spacefaring subatomic particles called cosmic rays striking atoms in the atmosphere. Muons continuously rain down on Earth at nearly the speed of light. But while the subatomic particles easily streak through open air, rock can absorb or deflect them. By placing detectors near the base and in areas deep inside the Great Pyramid and measuring the number of muons that reach the detectors from different directions, scientists could spot empty spaces inside the ancient edifice.

For instance, if a detector inside the pyramid picked up slightly more muons from the north than the south, that would indicate there was slightly less rock on the north side to intercept incoming muons. That relative abundance of muons could indicate the presence of a chamber in that direction.
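
That comparison of directional counts can be captured in a few lines. The sketch below uses a crude exponential absorption model with made-up numbers to show how an excess of observed muons along one line of sight flags a possible void; it is not the analysis pipeline the researchers used.

```python
import numpy as np

# Sketch of the directional-counting logic behind muon imaging: compare the
# muons detected from each direction with the count expected from the rock
# thickness along that line of sight. Excess muons hint at missing rock (a
# possible void). The attenuation model and numbers are illustrative only.

def expected_counts(rock_thickness_m, incident_flux, attenuation_per_m=0.05):
    """Crude exponential absorption model for muons passing through stone."""
    return incident_flux * np.exp(-attenuation_per_m * np.asarray(rock_thickness_m))

def flag_voids(observed_counts, rock_thickness_m, incident_flux, excess=1.2):
    """Mark directions where observed counts exceed expectation by `excess`."""
    expected = expected_counts(rock_thickness_m, incident_flux)
    return np.asarray(observed_counts) > excess * expected

# Example with made-up numbers:
# flag_voids(observed_counts=[180, 95], rock_thickness_m=[40, 45],
#            incident_flux=1000)  ->  array([ True, False ])
```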

Muon imaging an enormous, dense construction like the Great Pyramid “is not an easy game,” Akchurin says. The monument obstructs 99 percent of incoming muons before the particles can reach detectors, so collecting enough data to spot its hollow spaces takes several months.
The newly identified void was first seen with a type of muon detector called nuclear emulsion film, which the researchers laid out in a space called the Queen’s chamber and the adjacent corridor inside the pyramid. When muons zip through these films, the particles’ chemical interactions with the material leave silver trails that reveal which direction the particles came from, explains Elena Guardincerri, a physicist at Los Alamos National Laboratory in New Mexico not involved in the work.

Upon developing these films, the researchers saw a surprising excess of muons coming through a region above the Grand Gallery, a sloping corridor that runs north-south through the center of the pyramid. The cavity appears to be at least 30 meters across — about the size of the Grand Gallery itself. “Our first reaction was a lot of excitement,” says study coauthor Mehdi Tayoubi, cofounder of the Heritage Innovation Preservation Institute in Paris. “We said, ‘Wow, we got something big!’”

Tayoubi and colleagues confirmed their discovery with observations from two other types of muon detectors, which generate electrical signals when muons pass through them, placed inside the Queen’s chamber and outside at the base of the pyramid.

Akchurin hopes this finding will pave the way for muon imaging of other ancient monuments around the world — particularly at archaeological sites where traditional excavation may be difficult, like deep in the jungle or on mountainsides.