“LaMDA will be incorporated into Google Assistant, Google Workspace, and Google Search. The latter is the most important because LaMDA has the potential to make information and computing radically more accessible and easier to use,” Pichai explained.
For the time being, LaMDA can respond to a user’s query with only a few sentences, in a dialogue that gives the impression of a conversation between two people, and it could soon become part of Google’s products and services. There is also MUM, the Multitask Unified Model, which lets users search textually or visually. In other words, given a photo of a pair of shoes, MUM can comprehend the content of the image and the intent behind the query, letting the questioner know, for example, whether the shoes would be appropriate for hiking or dancing.
MUM currently works in 75 languages, and Google plans to expand it not only to new languages but also to other services.