DNA scientists pressed pause 50 years ago, will AI researchers do the same?

In the summer of 1974, a group of international researchers published an urgent open letter asking their colleagues to suspend work on a potentially dangerous new technology. It was the first such appeal in the history of science – and now, half a century later, it has happened again.

The first letter, “Potential Risks of Recombinant DNA Molecules,” called for a moratorium on certain experiments that transfer genes between different species, a key technology in genetic engineering.

The March letter, “Pause Giant AI Experiments,” came from leading artificial intelligence researchers and notables such as Elon Musk and Steve Wozniak. As with the recombinant DNA letter, the researchers called for a moratorium on certain AI projects, warning of a possible “AI extinction event.”

Some AI scientists had already called for cautious AI research in 2017, but their concern garnered little public attention until the arrival of generative AI, first publicly released as ChatGPT. Suddenly, an AI tool could write stories, paint pictures, hold conversations and even compose music – all previously uniquely human skills. The March letter hinted that AI could one day turn hostile and even possibly become our evolutionary replacement.

Though separated by 50 years, the debates that followed the DNA and AI letters have a fundamental similarity: in both, a relatively specific concern raised by researchers quickly became a public stand-in for a whole range of political, social and even spiritual anxieties.

The recombinant DNA letter focused on the risk of accidentally creating new fatal diseases. Opponents of genetic engineering extended this concern to various disaster scenarios: a genocidal virus programmed to kill only one racial group, genetically engineered salmon so vigorous it could escape fish farms and destroy coastal ecosystems, prenatal intelligence enhancement accessible only to the wealthy. There were even street protests against recombinant DNA experimentation in key research cities, including San Francisco and Cambridge, Massachusetts. The mayor of Cambridge warned of bioengineered “monsters” and asked: “Is this the answer to Dr. Frankenstein?”

In the months since the “Pause Giant AI Experiments” letter, disaster scenarios have likewise proliferated: AI enables the ultimate totalitarian surveillance state, a deranged AI military application launches a nuclear war, superintelligent AIs collaborate to undermine the planet’s infrastructure. And there are less apocalyptic forebodings, too: unstoppable AI-powered hackers, massive global AI disinformation campaigns, rampant unemployment as artificial intelligence takes our jobs.

The recombinant DNA letter led to a four-day meeting at the Asilomar Conference Grounds on the Monterey Peninsula, where 140 researchers gathered to come up with safety guidelines for the new work. I covered that conference as a journalist, and the proceedings radiated history in the making: a who’s who of leading molecular geneticists, including Nobel Prize winners, as well as younger researchers who added 1960s idealism to the mix. The discussion in session after session was contentious; careers, work in progress and the freedom of scientific inquiry were all potentially at stake. But there was also the underlying fear that if researchers didn’t write their own regulations, Congress would do it for them, far more heavy-handedly.

With just a few hours to spare on the final day, the conference voted to approve guidelines that would then be codified and enforced by the National Institutes of Health; versions of those rules still exist today and must be followed by any research organization receiving federal funding. The guidelines also indirectly influence the commercial biotech industry, which relies largely on federally funded science for new ideas. The rules aren’t perfect, but they have worked well. In the 50 years since, we haven’t had a single genetic engineering disaster. (Even if the COVID-19 virus did escape from a lab, its genome showed no evidence of genetic engineering.)

The artificial intelligence challenge is a more complicated problem. Much of the new AI research is done in the private sector, by hundreds of companies ranging from small startups to multinational tech giants – none as easily regulated as academic institutions. And there are already laws covering cybercrime, privacy, racial bias and more that address many of the fears surrounding advanced AI; how many new laws are really needed? Finally, unlike the genetic engineering guidelines, AI rules are likely to be written by politicians. In June, the European Parliament passed its AI Act, a far-reaching proposal to regulate AI that could be ratified by the end of the year but that researchers have already criticized as prohibitively rigid.

No proposed legislation so far addresses the most dramatic concern in the AI moratorium letter: human extinction. But the history of genetic engineering since the Asilomar Conference suggests we may have some time to consider our options before any potential AI apocalypse.

Genetic engineering has proven to be much more complicated than anyone expected 50 years ago. After the initial fears and optimism of the 1970s, each decade has confronted researchers with new puzzles. A genome can have large sets of identical and repetitive genes, for reasons that are not fully understood. Human diseases often involve hundreds of individual genes. Epigenetic research has revealed that external circumstances – diet, exercise, emotional stress – can significantly influence the functioning of genes. And RNA, previously thought of only as a chemical messenger, turns out to have a much more powerful role in the genome.

This increasing complexity may prove true for AI as well. Even the most human-seeming poems, paintings or conversations produced by AI are generated by a purely statistical analysis of the vast database that is the internet. Bringing about human extinction would require much more from AI: specifically, a self-awareness capable of ignoring the wishes of its creators and acting instead in the AI’s own interests. In short, consciousness. And, like the genome, consciousness is sure to grow far more complicated the more we study it.

Both the genome and consciousness evolved over millions of years, and to assume we can reverse engineer them in a few decades is a bit presumptuous. If that presumption leads to over-caution, though, so much the better. Before we truly have full control of evolution or consciousness, we’ll have plenty of time to figure out how to proceed as responsible adults.

Michael Rogers is an author and futurist whose most recent book is “Email from the Future: Notes from 2084.” His coverage of the Asilomar recombinant DNA conference, “The Pandora’s Box Congress,” was published in Rolling Stone in 1975.
