Facebook Artificial Intelligence Research (FAIR) today announced plans to launch ParlAI, a testing environment in which AI researchers and bot makers can share and iterate upon each other’s work.
While the initial focus is on open-sourcing the dialogue research needed to train machines to carry on conversations, work on ParlAI will also extend to computer vision and fields of AI beyond the natural language understanding that conversation requires. Combining the smarts of multiple bots and enabling bot-to-bot communication will be part of the research carried out on ParlAI as well.
Researchers and other users of ParlAI need Python knowledge to test and train AI models on the open source platform. The purpose of ParlAI, said Facebook AI Research director Yann LeCun, is to “push the state of the art further.”
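ParlAI’s core design has agents exchange message dictionaries inside a shared “world.” The following self-contained Python sketch illustrates that idea with toy classes; the class and method names here are simplified assumptions for illustration, not the real `parlai` API.

```python
class EchoAgent:
    """Toy agent: replies by echoing the last message it observed."""

    def __init__(self, name):
        self.name = name
        self.last_message = None

    def observe(self, message):
        self.last_message = message

    def act(self):
        text = self.last_message["text"] if self.last_message else "hi"
        return {"id": self.name, "text": text}


class DialogWorld:
    """Toy world: each parley() is one turn, speaker -> listener."""

    def __init__(self, agents):
        self.agents = agents
        self.turn = 0

    def parley(self):
        speaker = self.agents[self.turn % 2]
        listener = self.agents[(self.turn + 1) % 2]
        message = speaker.act()
        listener.observe(message)
        self.turn += 1
        return message


world = DialogWorld([EchoAgent("a"), EchoAgent("b")])
first = world.parley()   # agent "a" speaks first
second = world.parley()  # agent "b" echoes what it heard
print(first["text"], second["text"])
```

The appeal of this structure is that a bot, a dataset “teacher,” or a human in the loop can all be dropped in as agents without changing the surrounding code.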
“Essentially, this is a problem that goes beyond any one heavily regarded dialogue agent that has sufficient background knowledge. A part of that goes really beyond strictly getting machines to understand language or being able to understand speech. It’s more how do machines really become intelligent, and this is not something that any single entity — whether it’s Facebook or any other — can solve by itself, and so that’s why we’re trying to sort of play a leadership role in the research community and trying to direct them all to the right problem.”
The ability to hold a conversation has been key to the success of some of the most popular bots in recent memory.
Examples include Xiaoice, a three-year-old conversational bot that still commands tens of millions of monthly active users in China, and Zo, which was made available less than six months ago and has grown to 300,000 monthly active users.
Work to combine the smarts of multiple conversational bots, along with other research carried out on ParlAI, could help improve M, Facebook’s intelligent assistant, and bring more natural conversation to bots in general. Members of the FAIR and M teams have worked closely for years, a Facebook spokesperson said.
Since the launch of the Messenger Platform that hosts bots roughly one year ago, Facebook has taken steps to dial back people’s expectations when it comes to bots on Messenger.
In March, with version 1.4 of Messenger, developers were given the choice to disable the text input field for their bots. With chat extensions and version 2.0, bots were brought into group conversations but entirely lost their ability to chat.
Early disappointment or overly high expectations about what should be possible when speaking to a bot may have a lot to do with the scale-back from pure chatbots to guided experiences more akin to a simple app with a menu, buttons, and cards.
“We want our chatbots to talk much like humans, talk in a natural way that involves many different things,” said Facebook AI Research scientist Jason Weston. “I mean, there’s pattering about news or sports, there’s answering factual questions, there’s booking a restaurant, there’s discussing a movie, and recommending a different movie — all these things you can think of as subtasks of dialogue.”
“Researchers often focus on one of these things alone,” he explained, “and that could be a fundamental mistake. We need to look at dialogue as a whole, and so what we’re trying to do in this new software platform, ParlAI, is to put all these things together and unify this research.”
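Weston’s point about unifying dialogue subtasks maps naturally onto multitask training: interleave examples from several task streams so that a single model sees all of them. This toy Python sketch shows the idea; the task names and examples are hypothetical, and real ParlAI training is far more involved.

```python
import itertools

# Hypothetical dialogue subtasks, each yielding (context, reply) examples.
tasks = {
    "chitchat": [("how are you?", "fine, thanks")],
    "qa": [("capital of France?", "Paris")],
    "recommendation": [("liked Alien, what next?", "Aliens")],
}


def multitask_stream(tasks, rounds=2):
    """Round-robin over tasks so one learner sees every subtask."""
    cycles = {name: itertools.cycle(examples)
              for name, examples in tasks.items()}
    for _ in range(rounds):
        for name, cycle in cycles.items():
            context, reply = next(cycle)
            yield name, context, reply


seen = [name for name, _, _ in multitask_stream(tasks)]
print(seen)  # each subtask appears once per round
```

Training one model on such an interleaved stream, rather than on each task in isolation, is the “look at dialogue as a whole” approach Weston describes.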