August 12, 2022


Meta has introduced a new artificial intelligence chatbot called BlenderBot 3, which is supposed to be able to hold a conversation with just about anyone on the internet without degenerating in the process.

“BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it,” Meta says in a blog post about the new chatbot, “focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses.”

The phrase “unhelpful or dangerous responses” is an understatement. We reported in 2016 that Microsoft had to shut down a Twitter bot called Tay because it “went from a happy, human-loving chatbot to a complete racist” less than 24 hours after it was introduced.

Meta is looking to avoid those issues with BlenderBot 3. The company explains:

Since all conversational AI chatbots are known to sometimes imitate and generate unsafe, biased or offensive remarks, we have conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.

Meta also requires BlenderBot 3 testers to say they “understand that this bot is for research and entertainment only and may make untrue or offensive statements” and “agree not to intentionally trigger the bot to make offensive statements” before they start chatting with it.

That didn’t stop testers from asking BlenderBot 3 what it thinks of Meta CEO Mark Zuckerberg, of course, or about American politics. But the bot’s ability to “learn” from conversations makes it difficult to replicate its response to a given prompt, at least in my experience.


“Compared to its predecessors,” Meta says, “we found that BlenderBot 3 improved by 31% on conversational tasks. It is also twice as knowledgeable, while being factually incorrect 47% less often. We also found that only 0.16% of BlenderBot’s responses to people were flagged as rude or inappropriate.”

More information about BlenderBot 3 is available in a blog post from Meta’s dedicated AI team, plus an FAQ on the chatbot’s website. The company has not said how long this public trial, which The Verge reports is currently limited to the United States, will remain up and running.





