Hiltzik: AI hype is as old as AI itself

Those of us who have spent the last few decades reporting on technology have seen fad after fad rise and fall in investment bubbles.

In the late 1990s it was dot-coms; more recently, crypto, blockchain, NFTs, driverless cars and the “metaverse.” Each had its day in the sun amid promises that it would change the world, or at least banking and finance, the arts, transportation, society in general. So far, those promises remain spectacularly unfulfilled.

This brings us to artificial intelligence chatbots.

In three to eight years we will have a machine with the general intelligence of an average human being…. In a few months it will be at the level of genius and a few months later its powers will be incalculable.

— AI pioneer Marvin Minsky, in 1970

Since last October, when I raised a red flag about the hype in the field of artificial intelligence, investor enthusiasm has only grown exponentially (as have public fears).

Wall Street and venture capitalists are investing billions of dollars in AI startups – Microsoft alone made a $10 billion investment in OpenAI, the company that produced the ChatGPT bot.

Companies looking for capital have learned that they need only claim an AI connection to bring investors to their doors, just as startups sprinkled “dot-com” through their business plans a few decades ago. Nvidia Corp. has acquired a trillion-dollar market value on the strength of a chip it manufactures that is considered crucial for the data processing AI chatbots require.

AI promoters are excited about their products’ capabilities (and potential profits).

Here is venture capitalist Marc Andreessen: Among the perks of “our new age of AI,” he recently wrote, “every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful”; every person “an AI assistant/coach/mentor/trainer/advisor/therapist”; every scientist “an AI assistant/collaborator/partner”; and every political leader the same superintelligent adviser.

There’s not much to be said about this prediction, except that it’s endearing in its childlike naiveté, given that in today’s world we still haven’t delivered to millions of Americans the broadband internet connections first promised in the 1990s. Would it surprise you to learn that Andreessen’s venture firm has invested in more than 40 AI-related companies? Me neither.

Andreessen also wrote, “Anything people do with their natural intelligence today can be done much better with AI.”

This is probably not true. Examples of AI that confound its users’ expectations have been piling up on an almost weekly basis.

Among the most famous cases is that of a New York attorney who filed a brief in federal court citing dozens of fictitious cases generated by ChatGPT. When the judge ordered the lawyer to verify the citations, he asked ChatGPT whether they were real, which is a bit like asking a young mother whether her baby is the most adorable baby ever. The bot assured him they were: another “hallucination.”

In the end, the attorney and his associates were fined $5,000 and required to write abject letters of apology to the opposing parties and to all the judges the bot had falsely identified as authors of the fake cases. He also lost the lawsuit.

Reports of similar fiascoes abound. An eating-disorders association replaced the humans staffing its helpline with a chatbot, possibly as a union-busting move, but later had to take the bot offline because, according to Vice, it was encouraging callers to adopt “unhealthy eating habits.”

A Texas professor failed an entire class because ChatGPT claimed to have authored the students’ work. The university administration exonerated nearly all of them after they proved the bot wrong; someone even submitted the professor’s own doctoral dissertation to the bot, which claimed to have written that too.

Claims made about the abilities or dangers of AI chatbots have often been shown to be misleading or chimerical.

A team of MIT researchers claimed to have found that ChatGPT could ace the school’s math and computer science curricula, with “a perfect solve rate” on subject exams; the finding was debunked by a group of MIT students, and the original paper has been withdrawn. A widely repeated report that an AI program in an Air Force simulation “killed” its human operator in pursuit of its programmed objectives (HAL-style) turned out to be fictional.

So it is useful to take a closer look at what AI chatbots can and cannot do. We can start with the terminology. For these programs, “artificial intelligence” is a misnomer. They are not intelligent in the sense that humans and animals are intelligent; they are merely designed to look smart to an outside observer unaware of the electronic processes going on inside. Indeed, the term itself distorts our perception of what they are doing.

This problem was noticed by Joseph Weizenbaum, designer of the pioneering chatbot ELIZA, which mimicked a psychotherapist’s responses so convincingly that even test subjects who knew they were conversing with a machine believed it displayed emotion and empathy.

“What I had not realized,” Weizenbaum wrote in 1976, “is that extremely short exposures to a relatively simple computer program can induce powerful delusional thoughts in quite normal people.” Weizenbaum warned that the “reckless anthropomorphization of the computer” – that is, treating it as a kind of thinking companion – produced a “simplistic view… of intelligence”.

Even the most advanced computers lacked the capacity to acquire information beyond being “spoon-fed,” Weizenbaum wrote. This is true for today’s chatbots, which acquire their data by “scraping” texts found on the web; it’s just that their ability to devour data is much greater now, thanks to exponential improvements in computing power, than it was in 1976.

The chatbots attracting so much interest today are what AI researchers call “generative” AI: programs that use statistical rules to answer queries by extrapolating from previously ingested data.

At its core, says Australian technologist David Gerard, ChatGPT is “a stupendously extended autocomplete,” like a word processing program that predicts the end of a word or phrase you’ve started typing. The program “just spews word combinations based on massive amounts of training text.”
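The “stupendously extended autocomplete” idea can be made concrete with a toy sketch. To be clear, this is only an illustration of statistical next-word prediction from training text, not a depiction of ChatGPT’s actual internals (which use neural networks trained on vastly larger corpora); the tiny training sentence and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Hypothetical miniature "training corpus" for illustration only.
training_text = "the cat sat on the mat the cat chased the dog"

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def autocomplete(word):
    """Return the continuation seen most often after `word` in training."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))  # "cat" appears most often after "the"
print(autocomplete("sat"))  # "on"
```

The sketch captures the essential point Gerard is making: the program has no understanding of cats or mats, only frequency statistics about which tokens tend to follow which.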

Professional enthusiasm for these programs – predictions that they have brought us closer to true artificial intelligence, or that they are capable of learning, or that they harbor the potential to destroy the human race – may seem unprecedented.

But they are not. The same confident predictions, what Weizenbaum called “grandiose fantasies,” have been part of the AI world since its inception. In 1970, MIT’s Marvin Minsky, one of the godfathers of AI, told Life magazine that “In from three to eight years we will have a machine with the general intelligence of an average human being…a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. … In a few months it will be at genius level, and a few months after that its powers will be incalculable.”

Minsky and his contemporaries eventually had to acknowledge that programs that seemed to show unlimited potential were useful only within narrow limits. That is still true.

ChatGPT can produce doggerel poetry or freshman-grade essays, pass tests in some technical subjects, write press releases and draft legal filings with a veneer of professionalism.

But these are all generic categories of writing; samples produced by humans in these categories often have a robotic, vacuous quality of their own. Asked to produce something truly original or creative, chatbots fail or, as those hapless lawyers discovered, they fabricate. (When Charlie Brooker, creator of the TV series “Black Mirror,” asked ChatGPT to write an episode, the result was of a quality he described with an unprintable epithet.)

That might give pause to companies hoping to deploy chatbots to trim their human payrolls. When they discover that they may have to hire workers to check the bots’ output to avoid attracting customers’ ire or even lawsuits, they may be less eager to hand bots even routine responsibilities, let alone mission-critical assignments.

This, in turn, could indicate the fate of the current investment frenzy. “The optimistic AI spring of the 1960s and early 1970s,” writes Melanie Mitchell of the Santa Fe Institute, gave way to the “AI winter,” in which government funding and popular enthusiasm collapsed.

Another boom, this one centered on “expert systems,” materialized in the early 1980s but faded by the end of the decade. (“When I received my PhD in 1990, I was advised not to use the term ‘artificial intelligence’ on my job applications,” writes Mitchell.) Each decade seemed to bring its own boom and bust.

Is AI genuinely threatening? Even the chilling warnings issued about the technology’s dangers and the need for regulation read like promotional campaigns in disguise. Their ulterior motive, writes Ryan Calo of the University of Washington, is “focusing public attention on an unlikely scenario that doesn’t require much change in their business models” and convincing the public that AI is exceptionally powerful (if deployed responsibly).

Tellingly, Sam Altman, CEO of OpenAI, has lobbied the European Union not to “over-regulate” his industry. The EU regulations, incidentally, would focus on near-term issues: job losses, privacy violations and copyright infringement.

Some of these concerns motivate the writers’ and actors’ strikes now underway in Hollywood; union members worry about losing work to AI bots deployed by penny-pinching producers and studio heads who think the public is too dumb to know the difference between human and robotic creativity.

What is little recognized by today’s AI entrepreneurs, like their predecessors, is how difficult it will be to leap from the current class of chatbots to true machine intelligence.

The image of human intelligence as merely the product of the hundred trillion neural connections in the human brain (an unfathomably large number) leads some AI researchers to assume that once their programs reach that scale, their machines will become conscious. But it is the final hard problems in achieving intelligence, let alone consciousness, that may prove insurmountable: researchers have not even agreed on a definition of intelligence, nor identified the seat of consciousness.

Is there a reasonable role in our lives for chatbots? The answer is yes, if they are seen as tools to enhance learning, not to cheat. Teachers are concerned about identifying chatbot-generated assignments, but over time they will use the same techniques they use to spot plagiarism: comparing the work with what they know of their students’ abilities and flagging submissions that seem too polished (or that contain telltale errors).

ChatGPT can help students, writers, lawyers and doctors organize large amounts of information, clarify ideas or spark creative insights. It can be useful in the way smart teachers advise students to use other web resources with obvious limitations, such as Wikipedia: it may be the first place you turn when preparing an assignment, but it had better not be the last.
