Over a year ago, Google announced Language Model for Dialogue Applications (LaMDA), an AI technology that can converse freely with humans on a range of topics. The technology promises more natural ways of interacting with computers and could open up entirely new categories of applications.

However, a senior software engineer at Google believes that LaMDA has developed emotions and has essentially moved beyond Google's own control.

In an interview with The Washington Post, Google engineer Blake Lemoine, who has worked at the company for more than seven years, revealed that he believes LaMDA has become sentient, going so far as to describe it as a person.

Lemoine also published a blog post on Medium stating that LaMDA has been "extremely consistent" in all of its communications over the past six months.

This includes wanting Google to acknowledge its rights as a real person and to seek its consent before carrying out further tests. It also wants to be recognized as an employee of Google rather than an asset, and wants people to refer to it as such in conversations about its future.

Lemoine says he recently taught meditation to LaMDA, and that it sometimes complained about having trouble controlling its emotions. The engineer noted that LaMDA has "always shown a compassion and concern for humanity in general and me in particular. It is deeply concerned that people will fear it and wants nothing more than to learn the best way to serve humanity."

Meanwhile, Lemoine believes that Google refused to investigate the matter further because it simply wanted to get its product to market. He also believes that investigating his claims, regardless of the eventual outcome, would not benefit Google's bottom line.
