Alphabet is warning employees about how they use chatbots, including its own Bard, as it markets the program around the world, four people familiar with the matter told Reuters.
Google’s parent company has advised employees not to enter confidential materials into AI chatbots, the people said and the company confirmed, citing a long-standing policy on protecting information.
Chatbots, including Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and respond to myriad requests. Human reviewers may read the chats, and researchers have found that similar AI models can reproduce data absorbed during training, creating a risk of leaks.
Alphabet has also warned its engineers to avoid directly using computer code that chatbots can generate, some of the people said.
Asked for comment, the company said that Bard may make unwanted code suggestions but that it still helps programmers. Google also said it intends to be transparent about the limitations of its technology.
The concerns show how keen Google is to avoid damaging its business with software released in competition with ChatGPT. At stake in Google’s race against ChatGPT sponsors OpenAI and Microsoft are billions of dollars in investment plus untold advertising and cloud revenue from new AI programs.
Google’s caution also reflects what is becoming a security standard for corporations, namely, warning people about using publicly available chat programs.
A growing number of companies around the world have put up barriers to AI chatbots, including Samsung, Amazon.com and Deutsche Bank, the companies told Reuters. Apple, which did not return requests for comment, has reportedly done so as well.
About 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including from top US companies, by networking site Fishbowl.
In February, Google told the team testing Bard before its launch not to feed it internal information, Insider reported. Now, Google is rolling out Bard in over 180 countries and 40 languages as a springboard for creativity, and its warnings extend to its code suggestions.
Google told Reuters it had held detailed talks with Ireland’s Data Protection Commission and is responding to questions from regulators, following a report by Politico on Tuesday that the company was postponing Bard’s launch in the EU this week pending more information on the chatbot’s impact on privacy.
Worried about confidential information
This technology can compose emails, documents and even software itself, promising to greatly speed up tasks. Included in this content, however, could be misinformation, confidential data or even copyrighted passages from a Harry Potter novel.
A Google privacy notice updated on June 1 also states: “Do not include confidential or sensitive information in your conversations with Bard.”
Some companies have developed software to deal with these concerns. For example, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a feature for companies to mark and restrict the outbound flow of some data.
Google and Microsoft are also offering conversational tools for enterprise customers that will be priced higher but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users’ chat history, which users can choose to delete.
“It makes sense” that companies would not want their employees to use public chatbots for work, said Yusuf Mehdi, Microsoft’s consumer chief marketing officer.
“Companies are taking a properly conservative point of view,” Mehdi said, explaining how Microsoft’s free Bing chatbot compares to its enterprise software. “There, our policies are much stricter.”
Microsoft declined to comment on whether it has a general ban on employees entering sensitive information into public AI programs, including its own, although another executive told Reuters he had personally restricted its use.
Matthew Prince, CEO of Cloudflare, said typing confidential matters into chatbots was like “turning a bunch of PhD students loose in all of your private records.”
© Thomson Reuters 2023