Scientists Put 3D Glasses on Cuttlefish and Find Out They Use Human-Like Depth Perception to Hunt Prey

Trevor Wardill

Researchers at the University of Minnesota recently constructed a miniature underwater movie theater, outfitted a group of cuttlefish with 3D glasses, and proceeded to show them short movies of shrimp—all to see if humans and cuttlefish have more in common than we previously thought.

Cuttlefish, squid-like cephalopods with an internal shell, ensnare prey with one swift snatch of their tentacles. If they under- or over-estimate their distance from whatever unsuspecting marine animal they’re eyeing, however, they’ll fail to grasp their prey and give away their position, too.

To find out how cuttlefish estimate distance so accurately, Trevor Wardill, assistant professor in the University of Minnesota’s Department of Ecology, Evolution, and Behavior, and his team devised an innovative study, published in the journal Science Advances. After placing 3D glasses over a cuttlefish’s eyes, they set it in front of a screen that showed offset images of two different-colored shrimp on a leisurely walk.

A cuttlefish wearing 3D glasses.
Trevor Wardill

If you’ve ever briefly taken off your 3D glasses during a movie, you’ve seen the offset—or partially overlapped—images that filmmakers use to create the illusion of depth. The process by which we perceive depth is called stereopsis: our brain receives slightly different images from our left and right eyes and combines that information to judge which objects are closer to us than others. A 3D movie exploits this by feeding each eye one of the offset images, tricking the brain into seeing depth in a flat picture.
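The geometry behind stereopsis can be sketched with the standard pinhole-stereo relation, in which depth is inversely proportional to disparity (the horizontal offset between the two eyes' images). This is a minimal illustration, not anything measured in the study; the numbers are made up.

```python
# A minimal sketch of depth-from-disparity, the geometric idea behind
# stereopsis. Assumes an idealized pinhole model: depth is inversely
# proportional to the offset (disparity) between the left- and
# right-eye images of the same object.

def depth_from_disparity(focal_length_mm, eye_separation_mm, disparity_mm):
    """Distance to an object, given the offset between its positions
    in the left-eye and right-eye images."""
    if disparity_mm <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_mm * eye_separation_mm / disparity_mm

# Illustrative numbers only: a larger offset means a closer object.
near = depth_from_disparity(17.0, 60.0, 4.0)   # big offset -> close
far = depth_from_disparity(17.0, 60.0, 1.0)    # small offset -> far
assert near < far
```

Shifting the offset images on the screen changes the disparity the brain (or the cuttlefish) computes, which is exactly how the researchers made the shrimp appear in front of or behind the screen.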

And, as demonstrated in the experiment, the same thing happens with cuttlefish. The researchers varied the positioning of the offset images so the cuttlefish would either perceive the shrimp to be in front of or behind the screen. When the cuttlefish then struck out at their would-be prey, their tentacles ended up grasping at empty water (if they thought the shrimp was in front of the screen) or colliding with the screen (if they thought the shrimp was behind it). In other words, stereopsis allowed them to interpret how far away the shrimp was, just like humans would have done.

"How the cuttlefish reacted to the disparities clearly establishes that cuttlefish use stereopsis when hunting," Wardill said in a statement. "When only one eye could see the shrimp, meaning stereopsis was not possible, the animals took longer to position themselves correctly. When both eyes could see the shrimp, meaning they utilized stereopsis, it allowed cuttlefish to make faster decisions when attacking. This can make all the difference in catching a meal."

But cuttlefish brains aren’t as similar to ours as their depth perception skills might imply.

“We know that cuttlefish brains aren’t segmented like humans. They do not seem to have a single part of the brain—like our occipital lobe—dedicated to processing vision,” Wardill’s colleague Paloma Gonzalez-Bellido said in the press release. “Our research shows there must be an area in their brain that compares the images from a cuttlefish’s left and right eye and computes their differences.”

Unlike squids, octopuses, and other cephalopods, cuttlefish can rotate their eyes to look directly forward, so the experiment isn’t suggesting that all cephalopods can use stereopsis. It is, however, suggesting that we may have underestimated invertebrates’ capacity for what we consider complex brain computations—and overestimated how unique humans actually are.

Arrokoth, the Farthest, Oldest Solar System Object Ever Studied, Could Reveal the Origins of Planets

NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute/Roman Tkachenko

A trip to the most remote part of our solar system has revealed some surprising insights into the formation of our own planet. Three new studies based on data gathered on NASA's flyby of Arrokoth—the farthest object in the solar system from Earth and the oldest body ever studied—are giving researchers a better idea of how the building blocks of planets were formed, what Arrokoth's surface is made of, and why it looks like a giant circus peanut.

Arrokoth is a 21-mile-wide space object that formed roughly 4 billion years ago. Located past Pluto in the Kuiper Belt, it's received much less abuse than other primordial bodies that sit in asteroid belts or closer to the sun. "[The objects] that form there have basically been unperturbed since the beginning of the solar system," William McKinnon, lead author of one of the studies, said at a news briefing.

That means, despite its age, Arrokoth doesn't look much different today than when it first came into being billions of years ago, making it the perfect tool for studying the origins of planets.

In 2019, the NASA spacecraft New Horizons performed a flyby of Arrokoth on the edge of the solar system, 4 billion miles from Earth. The probe captured images of a binary object consisting of two connected lobes that were once separate fragments. In their paper, McKinnon and colleagues explain that Arrokoth "is the product of a gentle, low-speed merger in the early solar system."

Prior to these new findings, there were two competing theories about how the solid building blocks of planets, or planetesimals, form. The first theory is called hierarchical accretion, and it states that planetesimals are created when two separate parts of a nebula—the cloud of gas and space dust born from a dying star—crash into one another.

The latest observations of Arrokoth support the second theory: Instead of a sudden, violent collision, planetesimals form when gases and particles in a nebula gradually amass until they are dense enough to collapse under their own gravity. Nearby components meld together gradually, and a planetesimal is born. "All these particles are falling toward the center, then whoosh, they make a big planetesimal. Maybe 10, 20, 30, 100 kilometers across," said McKinnon, a professor of Earth and planetary sciences at Washington University. This type of cloud collapse typically results in binary shapes rather than smooth spheroids, hence Arrokoth's peanut-like silhouette.

If this is the origin of Arrokoth, it was likely the origin of other planetesimals, including those that assembled Earth. "This is how planetesimal formation took place across the Kuiper Belt, and quite possibly across the solar system," New Horizons principal investigator Alan Stern said at the briefing.

The package of studies, published in the journal Science, also includes findings on the look and substance of Arrokoth. In their paper, Northern Arizona University planetary scientist Will Grundy and colleagues reveal that the surface of the body is covered in "ultrared" matter so thermodynamically unstable that it can't exist at higher temperatures closer to the sun.

The ultrared color is a sign of the presence of organic substances, namely methanol ice. Grundy and colleagues speculate that the frozen alcohol may be the product of water and methane ice reacting with cosmic rays. New Horizons didn't detect any water on the body, but the researchers say it's possible that H2O was present but hidden from view. Other unidentified organic compounds were also found on Arrokoth.

New Horizons' flyby of Pluto and Arrokoth took place over the course of a few days. To gain a further understanding of how the object formed and what it's made of, researchers need to find a way to keep a probe in the Kuiper Belt for longer, perhaps by locking it into the orbit of a larger body. Such a mission could tell us even more about the infancy of the solar system and the composition of our planetary neighborhood's outer limits.

The Moon Will Make Mars Disappear Next Week

Take a break from stargazing to watch the moon swallow Mars on February 18.
Pitris/iStock via Getty Images

On Tuesday, February 18, the moon will float right in front of Mars, completely obscuring it from view.

The moon covers Mars relatively often—according to Sky & Telescope, it will happen five times this year alone—but we don’t always get to see it from Earth. Next week, however, residents of North America can look up to see what’s called a lunar occultation in action. The moon's orbit will bring it between Earth and Mars, allowing the moon to "swallow" the Red Planet over the course of 14 seconds. Mars will stay hidden for just under 90 minutes, and then reemerge from behind the moon.
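The 14-second figure roughly checks out on the back of an envelope: the moon drifts eastward against the stars by about its own half-degree diameter every hour, so in 14 seconds its limb sweeps across a sliver of sky comparable to Mars's few-arcsecond disk. A sketch of that arithmetic, ignoring Mars's own motion and the observer's parallax:

```python
# Back-of-envelope check on the ~14-second immersion time.
# Assumption: the moon completes one orbit against the stars
# in about 27.3 days.

MOON_ORBIT_DAYS = 27.3
DEG_PER_DAY = 360.0 / MOON_ORBIT_DAYS              # ~13.2 degrees/day
ARCSEC_PER_SEC = DEG_PER_DAY * 3600.0 / 86400.0    # ~0.55 arcsec/second

# In ~14 seconds, the moon's limb sweeps across:
swept_arcsec = 14 * ARCSEC_PER_SEC                 # ~7.7 arcseconds

# That is on the order of Mars's apparent disk (a few arcseconds
# across), which is why the planet takes about 14 seconds to wink out.
print(round(swept_arcsec, 1))
```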

Depending on where you live, you might have to set your alarm quite a bit earlier than you usually do in order to catch the show. In general, people in eastern parts of the country will see Mars disappear a little later; in Phoenix, for example, it’ll happen at 4:37:27 a.m., Chicagoans can watch it at 6:07:10 a.m., and New Yorkers might even already be awake when the moon swallows Mars at 7:36:37 a.m.

If you can’t help but hit the snooze button, you can skip the disappearing act (also called immersion) and wait for Mars to reappear on the other side of the moon (called emersion). Emersion times vary based on location, too, but they’re around an hour and a half later than immersion times on average. You can check the specific times for hundreds of cities across the country here [PDF].

Since it takes only 14 seconds for Mars to fully vanish (or reemerge), punctuality is a necessity—and so is optical aid. Mars won’t be bright enough for you to see it with your naked eye, so Sky & Telescope recommends looking skyward through binoculars or a telescope.

Thinking of holding an early-morning viewing party on Tuesday? Here are 10 riveting facts about Mars that you can use to impress your guests.

[h/t Sky & Telescope]
