On top of boosting engagement, chatbots could collect new data on users’ interests, experts said. That could help Meta target users with more relevant content and adverts. Most of Meta’s US$117b a year in revenues comes from advertising.
“Once users interact with a chatbot, it exposes much more of their data to the company, so that the company can do anything they want with that data,” said Ravit Dotan, an AI ethics adviser and researcher.
The developments raise concerns around privacy as well as potential “manipulation and nudging”, she added.
Meta declined to comment.
Rival tech groups have already launched chatbots that feature personalities. Character.ai, an Andreessen Horowitz-backed startup valued at US$1b, uses large language models to generate conversation in the style of individuals such as Tesla chief executive Elon Musk and Nintendo character Mario.
Snap has said its My AI feature — a single bot launched in February — is an “experimental, friendly chatbot” with whom 150 million of its users have interacted so far. It recently began “early testing” of sponsored links within the feature.
During an earnings call on Wednesday, Zuckerberg told analysts the company would release more details on its product roadmap for AI at its Connect developer event next month.
Zuckerberg said he envisaged AI “agents that act as assistants, coaches or that can help you interact with businesses and creators”, adding: “We don’t think that there’s going to be one single AI that people interact with”.
He has also said the company was building AI agents that can help businesses with customer service, as well as an internal AI-powered productivity assistant for staff.
In the longer term, developing an avatar chatbot in the metaverse would be explored, a person familiar with the matter said. “Zuckerberg is spending all his energy and time on ideating about this,” that person added.
Meta has been investing in generative AI, technology that can create text, images and code. This month, it released Llama 2, a commercial version of its large language model that could power its chatbots.
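For illustration, here is a minimal sketch of how a developer might run the publicly released Llama 2 chat model with the Hugging Face transformers library. The model identifier and prompt format come from Meta’s public release, not from this article, and access to the weights requires accepting Meta’s licence; the example prompt is invented:

```python
# Minimal sketch: one chat turn with the public Llama 2 chat checkpoint.
# Assumes the Hugging Face transformers library and an accepted Meta licence.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # smallest public chat-tuned variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Llama 2 chat models expect [INST] ... [/INST] turn delimiters.
prompt = "[INST] Suggest three hiking trails near Lake Tahoe. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```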
As part of building the infrastructure to support the AI products, Meta has been trying to procure tens of thousands of GPUs — chips that are vital for powering large language models, according to two people familiar with the matter.
Meta will probably draw scrutiny from experts policing the chatbots for signs of bias, as well as the risk that they share dangerous material or generate false statements, fabrications known as “hallucinations”.
The company has already made brief forays into chatbots on a smaller scale that demonstrated these risks. Researchers found that a previous Meta AI model, BlenderBot 2, released in 2021, quickly started spreading misinformation. Meta said BlenderBot 3, released in 2022, was made more resistant to such content, although users still found it generated false information and hate speech.
According to a Meta insider, the company will probably build in technology to screen users’ questions and ensure they are appropriate. It may also automate checks on its chatbots’ output to verify that responses are accurate and avoid, for example, hate speech or other rule-breaking content, the person added.
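The article does not describe Meta’s actual moderation stack, but the kind of input and output screening it sketches can be illustrated with a simple, hypothetical pipeline. Every name, keyword list and helper below is an invented placeholder:

```python
# Hypothetical sketch of input/output screening around a chatbot,
# in the spirit of the checks described above. Not Meta's system.

BLOCKED_TOPICS = {"weapons", "self-harm"}   # illustrative only
BANNED_PHRASES = {"example banned phrase"}  # illustrative only

def screen_input(question: str) -> bool:
    """Return True if the user's question is appropriate to forward."""
    lowered = question.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def screen_output(reply: str) -> bool:
    """Return True if the model's reply passes the output checks."""
    lowered = reply.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)

def moderated_chat(question: str, generate) -> str:
    """Screen the question, generate a reply, then screen the reply."""
    if not screen_input(question):
        return "Sorry, I can't help with that."
    reply = generate(question)
    if not screen_output(reply):
        return "Sorry, I can't share that response."
    return reply

# Usage with a stand-in generator:
print(moderated_chat("Hello there!", lambda q: "Hi! How can I help?"))
```

In a production system each keyword check would more plausibly be a trained classifier, but the two-gate structure, one check before the model and one after, is the point being illustrated.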
Written by: Hannah Murphy in San Francisco and Cristina Criddle in London. Additional reporting by Tim Bradshaw in London.
© Financial Times