Google launched Bard despite major ethical concerns from its employees

Google pushed ahead with the Bard launch despite employees internally calling the chatbot “a pathological liar” and “cringeworthy.”

According to 18 former and current Google workers who spoke to Bloomberg, the drive to compete with OpenAI’s ChatGPT and the AI-powered Bing has steamrolled ethical considerations. As you might have noticed if you’ve tried Bard yourself, the chatbot is prone to inaccuracies. According to the report, Bard gave dangerous advice on how to land a plane, and another employee said its answers about scuba diving “would likely result in serious injury or death.”

Google, which has long dominated the search engine market, is in a precarious position: it needs to fend off AI challengers on one hand and hold its place as the top tech dog on the other. Despite concerns that launching an underbaked large language model (LLM) could produce harmful, offensive, or inaccurate responses, Google is scrambling to integrate Bard and generative AI into consumer-facing tools. Any pushback from safety and ethics workers is now treated as an obstacle to that new priority. “The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development,” employees told Bloomberg. The ethics team is now “disempowered and demoralized,” according to former and current staffers.

Before OpenAI launched ChatGPT in November 2022, Google’s approach to AI was more cautious and less consumer-facing, often working in the background of tools like Search and Maps. But since ChatGPT’s enormous popularity prompted a “code red” from executives, Google’s threshold for safe product releases has been lowered in an effort to keep up with its AI competitors.

Jen Gennai, who leads Google’s “responsible innovation” team, overruled a risk evaluation from her own team that concluded Bard wasn’t ready. The reasoning: by framing Bard as an “experiment,” Google could launch it anyway and improve it based on public feedback. Google similarly rushed out AI features for Docs and Slides. But this approach has backfired. Bard is widely considered an inferior product to ChatGPT, and a day after Google’s Docs and Slides announcement, Microsoft debuted full AI integration across its work tools: not just Word and PowerPoint but the full Office suite, including Teams and Outlook. Samsung is reportedly even considering replacing Google with Bing as the default search engine on its mobile devices.

Ever since Google removed the famous phrase “Don’t be evil” from its code of conduct, it has seemed more and more like its frenemy Meta, whose former internal motto, “Move fast and break things,” became the unofficial mantra of all of Silicon Valley. And when ethics and safety are compromised in the name of moving fast, public trust is what risks getting broken.