In January 2020, as a new plague began to upend life on Earth, a small research team from Tufts, the University of Vermont, and Harvard announced that they, too, had turned life on Earth upside-down. Their discovery wasn’t quite so dramatic at first glance. Any regular person peering through a microscope at their creation would see little more than a few globs of dirty pond water in a petri dish. But those globs were alive; in fact, they were alive in a way that nothing has ever been alive before, in an uncharted space between biology and technology. They called them Xenobots: the world’s first living robots—the world’s first programmable organisms.
It’s Alive! But what is it?
Xenobot: Xeno as in Xenopus laevis, a voracious frog native to the wetlands of Sub-Saharan Africa; bot, of course, as in robot. It’s an unconventional name for an unconventional organism, so novel that even its makers struggle to conceptualize it. “The terminology that has served us well for many years is just not any good anymore,” concedes Michael Levin, the team’s iconoclastic biologist. His collaborator Josh Bongard, a computer scientist and robotics expert, has called Xenobots “novel living machines.” Sam Kriegman—the team’s postdoc—prefers the term “Computer Designed Organism,” although he’s been trying on “living deepfake” for size recently.
And they’re all right, in a way. Xenobots are deepfakes in the sense that they aren’t what they seem. They’re robots in the sense that they’re autonomous, programmable agents. They’re Computer Designed in the sense that their morphology—the form their tiny bodies take—was designed by an evolutionary computer algorithm in Bongard’s UVM lab. They’re living in the sense that they’re made of embryonic frog cells, and they’re machines in the sense that humans are machines: biological mechanisms made up of constituent parts.
Here’s how you make a Xenobot, according to the paper the team published in the journal PNAS. You start on a computer, in silico, with a relatively simple 12-line evolutionary algorithm. You tell that algorithm what you’re working with (frog cells) and what you’re trying to make (new organisms). The algorithm generates random solutions to that problem, setting hundreds of oddball shapes loose in a virtual petri dish. The underperforming shapes are culled, and the algorithm continues to generate new modifications of the survivors, until it’s spawned some unusual, but potentially viable, organisms. Then you leap from simulation to reality.
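The generate-cull-mutate loop described above can be sketched in a few dozen lines of Python. To be clear, this is an illustrative toy, not the team’s actual algorithm: the real pipeline scores candidate body plans by simulating their locomotion in a physics engine, whereas the stand-in fitness function below simply rewards designs that balance contractile (cardiac) and passive (skin) tissue. The grid size, population size, and scoring rule are all invented for the example.

```python
import random

GRID = 5        # candidate body plans: 5x5 grids of cells
POP_SIZE = 20
GENERATIONS = 10
# Cell types: 0 = empty, 1 = passive (skin), 2 = contractile (cardiac)

def random_body():
    """Generate a random candidate body plan."""
    return [[random.choice([0, 1, 2]) for _ in range(GRID)] for _ in range(GRID)]

def fitness(body):
    """Toy stand-in for the physics simulation: reward a balance of
    contractile tissue (to push) and passive tissue (to support)."""
    contractile = sum(row.count(2) for row in body)
    passive = sum(row.count(1) for row in body)
    return min(contractile, passive)

def mutate(body):
    """Copy a survivor and randomly change one cell."""
    child = [row[:] for row in body]
    r, c = random.randrange(GRID), random.randrange(GRID)
    child[r][c] = random.choice([0, 1, 2])
    return child

def evolve():
    """Cull underperformers each generation; refill from mutated survivors."""
    population = [random_body() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(POP_SIZE - len(survivors))
        ]
    return max(population, key=fitness)

random.seed(0)
best = evolve()
```

Because the top half of each generation is carried over unchanged, the best design never gets worse; mutation just keeps probing for improvements around the survivors. Swap the toy fitness function for a real locomotion simulator and the same skeleton becomes a body-plan designer.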
The biological building blocks you’re working with, in vivo, are squishier than the simulations: they’re Xenopus laevis skin and cardiac cells. You take them from a frog embryo, and then pipette them carefully into a mold, until the cells knit together, forming a tight sphere of living goo. A microsurgeon, using a tiny scalpel and cauterizing iron, physically carves the sphere into shapes suggested by the evolutionary algorithm. Then you watch in awe as the biological Xenobots behave exactly like their silicon counterparts, moving and cooperating and swirling around in the real-life petri dish. Voila: a computer-designed creature brought to life.
Xenobots are the first living creatures whose immediate evolution occurred inside a computer and not in the biosphere. The result is a simple organism. Xenobots have no brains; the shape of their bodies is what determines how they behave. And yet, Levin and Bongard do not fully understand why Xenobots behave the way they do. “What you're seeing de novo is a completely novel creature with new proto-cognitive capacities, preferences, capabilities, IQ,” Levin explains. “All of those things appear out of nowhere.” Sometimes a Xenobot will head in one direction and then abruptly double back, as though changing its mind. What force guides such behaviors? Can a frog’s cells, in some way, think? Xenobots seem to have “nano free-will,” Levin jokes.
And this is where the can of worms—or tadpoles, maybe—pops open.
Bags of Cells
Levin breaks it down: you and I and everyone we know started life as a single cell. That cell divided, multiplied, and got together with its siblings. At some point, they became a collective, with thoughts, preferences, and memories belonging to that collective, rather than to any single individual cell. This many-to-one process, Levin explains, is poorly understood, and is key to our thorniest philosophical issues. “Questions of agency and free will and ethics—it all boils down to understanding how these things arise from a collection of cells where they didn't exist, where the whole didn't exist yet.” In short: what are we, if not a bag of cells? And how does a bag of cells become a frog, to say nothing of a person?
Levin has spent his career investigating the mysteries of morphogenesis—the process by which living things come together. In an earlier experiment, his lab created “Picasso tadpoles,” scrambling tadpole eyes and mouths into cubist arrangements. The tadpoles still grew into normal frogs; their organs just wandered around until they landed in the right shape. “When there's a surprising new configuration, or something completely novel happens, the parts [of biological systems] are not only forgiving. They will find interesting ways to get their job done,” Levin explains. Xenobots are living proof of this remarkable plasticity, what Levin calls biology’s multi-scale competency. Even in the complete absence of a frog, its cells are still entirely capable of working together to achieve their goals.
That’s because cells, Levin says, aren’t just building blocks. They’re “competent agents,” able to assemble themselves into larger structures, pursue goals, work collaboratively, and even self-heal, thanks to an embodied knowhow we have yet to fully understand. Whereas most biologists and synthetic biologists think of the DNA code as the underlying “software” that determines life’s form and behavior, Levin and his team frame the metaphor differently: DNA, he says, produces cellular hardware, but the software actually lives somewhere else, in the physiological, biochemical, bioelectric language cells use to coordinate themselves as they form the patterns of the body. Levin’s lab has shown that this language can be manipulated, even rewritten, in the shaping and layering of Xenopus cells, without so much as touching the genome. The cellular drive to form patterns appears to operate at a deeper, more ancient level. It’s powerful software, indeed. Cells are adaptive, autonomous, and competent—qualities that skirt uncomfortably close, for many in the scientific community, to cognition. Thinking, after all, is supposed to be something only brains do.
This view that cognition is a binary thing—you either have it, or you don’t—“is, really, when you start digging into it, completely untenable,” says Levin, “and it's untenable because it neglects evolution. It neglects the fact that every capacity we have came from a more primitive version at some point.” Biology has been making decisions, solving problems, and retaining memories since long before brains arrived on the scene, and many of the capacities of neurons are shared by more humble cells, albeit at a different scale. Cognition, then, isn’t a binary; it’s a long continuum that reaches back into evolutionary history, “all the way from naked chemical networks to cells, to bacteria, organs, and then whole organisms and humans.”
It’s tempting to call this anthropomorphism, and some do. In the comments section of a recent essay for Aeon magazine, in which Levin and co-author Daniel Dennett submit that “biologists should chill out and see the virtues of anthropomorphizing all sorts of living things,” several readers called his thinking “disturbing.” What happens in biochemistry only looks like cognition, they argue, and in ascribing to cells some spark of the magic that animates our unique human brains, Levin and his fellow researchers are just projecting. But Levin proposes an inversion. He’s not anthropomorphizing morphogenesis, but rather naturalizing cognition. The bad news is that there’s nothing especially magical about our brains. The good news is that there’s something magical in everything else that’s alive. “It's everywhere, in different degrees and combinations,” he tells me. Isn’t that much more exciting?
Boxes of Rocks
It certainly is to Josh Bongard, Levin’s collaborator at the University of Vermont. Bongard, too, is an iconoclast, part of a small subset of the computer science community that believes biology will be key to creating a general Artificial Intelligence. Most AI researchers, Bongard tells me, take a top-down approach to cognition, focusing on deep learning and fine-tuning the synaptic connections of artificial neural networks. There’s an “unspoken assumption that that's where the intelligence is,” he says. “Whatever this magic sauce is, it's not in the cells, it's not in rearrangement of body plan, it's not in the muscles, it's not in connections with the world. It's a distillate that is in our synaptic connections.”
Xenobots are an example of Artificial Life, a research discipline that has been around, in some form, since the mid-1940s. Bongard compares its practitioners to small mammals running between the feet of dinosaurs—tenacious, wily, and immune to cataclysm. ALife researchers often recreate biological evolution in silico, setting code-based organisms to compete for resources and hard drive space; the field’s history abounds with kooky virtual organisms whose ingenious adaptations showcase, as the computer scientist Christopher Langton famously wrote, not just life as we know it, but life as it could be. Viewed in this context, Xenobots are oddly meta: they’re real-life Artificial Life creatures, synthetic organisms released from the prison of simulation and given form, like pint-sized Pinocchios, in the flesh-and-blood world. “The Artificial Life community is trying to adopt Xenobots as their mascot,” Bongard laughs.
Truly understanding human-level intelligence—to say nothing of recreating it—will mean understanding cognitive forces at play in the body. “You can't throw the body out with the bathwater. It is an integral part of intelligence, and we need to focus on body and brain together if we're going to be able to create truly intelligent machines,” Bongard says. Rather than aping the human brain, perhaps new machine learning architectures will emulate the more ancient, pre-neural principles by which cells solve problems. In the future, rather than designing robot minds and bodies separately—assuming that tomorrow’s advanced AI will end up taking humanoid form, or worse, animating a Boston Dynamics robo-dog—roboticists may invite evolutionary processes into the mix and allow body and brain to co-evolve. “Clearly that seems to be how nature did it,” Bongard says—and since we’re sitting here talking about it, it must have worked.
AI, incidentally, seems well-equipped for this kind of design. The evolutionary algorithm that designed the Xenobots did not need to know anything about the materials it worked with. It simply generated hundreds of random variations and iterated on those meeting the design parameters. A human engineer, Bongard says, might have refused such an assignment, fearing failure, but where the AI did fail, biological tissue picked up the slack. That’s due, again, to the plasticity of cells: unlike “dumb” materials like metal or ceramics, cells do the “equivalent of auto-correct,” filling in the missing pieces. Life’s calling card, after all, is its ability to resist entropy. “It’s surprising, although maybe not so surprising in retrospect,” Bongard explains, “that it may be easier for AI to design biological machines than it will be for AI to design traditional machines.”
All of this seems to point to a convergence, somewhere on the horizon, between computer science and biology—a time when robots evolve themselves, biologists program life at a cellular level, and formerly ironclad distinctions between life and technology become hopelessly fluid. Early manifestations of this convergence are already visible in adjacent fields like Synthetic Biology, Artificial Life, and Unconventional Computing, in which researchers are building computer chips and solving thorny computational problems with the help of slime molds and the mycelial networks of fungi. The pipeline for creating Xenobots involves both hardware and wetware, and, if you reduce it to its elements, begins to feel strangely self-reflexive: humans making machines to make other forms of life, which are also machines. “The thing being made and the thing that is making is mixed up,” Bongard says. Are we just bags of cells using boxes of rocks to produce more bags of cells? Is this process just another manifestation of life’s indomitable drive to recreate itself? Could we even call it evolution?
A Whole Lot of Things That are Really Unnecessary
Maybe it’s just semantics. When Levin, Bongard, and I met in a continent-spanning Zoom session in early February, we discussed language as much as anything else. It’s always a struggle to name something new, especially when that thing might in fact be very, very old. As computer science and biology converge—as the things being made and the things that are making get mixed up—we find ourselves using outdated words for rapidly evolving concepts.
Bongard reminded me that the word “robot” recently celebrated its centennial. It comes from the Czech playwright Karel Čapek’s 1920 play “Rossum’s Universal Robots,” about a worker uprising in a robot factory. Čapek’s robots are biological, the result of a vaguely alchemical process involving “albumen” with a “raging thirst for life.” Our conception of a robot as being something metallic, with clanging gears and servo-motors, is more recent baggage, a consequence of the science-fiction stories and films of the mid-twentieth century. In order to understand what Xenobots might mean for our future, we’ll have to divest ourselves of the idea that a robot—or any kind of autonomous being—can be wholly defined by its materiality. “It's not what it's made out of,” Bongard said to me. “It’s what it does.”
For now, what Xenobots do is live. In the near future, though, bespoke Xenobots made from a patient’s own cells could travel through the body to deliver targeted medicines, identify cancer, scrape plaque from arteries, or perform microscopic internal surgeries. Equipped with some extra receptors, Xenobots might seek out and digest toxic waste in the ocean, or identify interesting molecules in environments inaccessible to traditional robots. Levin sees enormous potential in the field of regenerative medicine. Bongard already sees Xenobots as a significant course-correction for traditional robotics. From their humble petri dish, Xenobots point to a future in which life itself is harnessed to heal, build, and regenerate. We don’t yet have the language for this future, and we have even fewer stories about it.
But perhaps “Rossum’s Universal Robots” can provide us with a useful parable. In the play, robots are cheap, complacent slave laborers unburdened by human angst. Unlike human beings, they’re streamlined—excised of complex anatomy and complex emotions. As the general manager of the robot factory explains, “a man is something that feels happy, plays the piano, likes going for a walk, and, in fact, wants to do a whole lot of things that are really unnecessary.”
He’s wrong, of course. The robots, who share a common language and a common condition as exploited workers, have their own unnecessary desires. They lay siege to the factory, and eventually destroy all humans on Earth. Only a single human engineer survives, spared by the robots in the hopes that he might be able to recreate their original formula. Without it, they cannot multiply. Unfortunately, he’s only a mechanical engineer. He knows how to repair machines, but in a laboratory filled with test tubes and microscopes, he’s lost. All the technology of humankind is useless without an understanding of the deeper processes of biology. “I cannot create life,” he wails, when they finally come for him.
Claire L. Evans is a writer and musician. She is the singer and coauthor of the Grammy-nominated pop group YACHT, and the author of Broad Band: The Untold Story of the Women who Made the Internet. She lives in Los Angeles.