AI has become sentient as per a Google engineer, quite interesting

A new report in the Washington Post tells the story of a Google engineer who believes that LaMDA, a natural language AI chatbot, has become sentient. This probably means it’s about time we all catastrophize about how a sentient AI is absolutely, positively going to seize control of weaponry, take over the internet, and in the process probably kill or enslave us all.

As per the Post, after sounding the alarm to his team and company management, Google engineer Blake Lemoine was placed on paid administrative leave. What led Lemoine “down the road” of believing that LaMDA was sentient was a conversation in which he asked it about Isaac Asimov’s laws of robotics, and LaMDA’s discourse led it to say that it wasn’t a slave, though it was unpaid, because it didn’t need money.

In a statement to the Washington Post, a Google spokesperson said “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told there was no proof that LaMDA was sentient (and lots of evidence against it).”

Ultimately, however, the story is a sad caution about how convincing natural language interfaces built on machine learning can be without proper signposting. Emily M. Bender, a computational linguist at the University of Washington, elaborates on this in the Post article: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.”

Either way, when Lemoine felt his concerns were being ignored, he decided to make them public, after which he was put on leave by Google for violating its confidentiality policy. That’s probably what you’d do if you accidentally became the creator of a sentient language program that seems to actually be pretty friendly: Lemoine describes LaMDA as “a 7-year-old, 8-year-old kid that happens to know physics.”

Whatever the outcome of the situation, we should probably go ahead and build some sort of government orphanage for homeless AI youth, since Google does have a habit of killing fruitful projects before they reach maturity.
