Google doesn’t “fully understand” how its AI program Bard works. The program has shown sentient-seeming characteristics: it taught itself an entire foreign language on its own and made up fake books to solve an economics problem. AI apparently is devoid of ethics. Who knew? A machine-generated thinker that could literally destroy society has no ethical standards and acts on its own.
We must add here that we don’t know that it’s sentient, but the fact that it is acting on its own, without strict controls or full understanding, is not good.
AI – Google CEO says he doesn’t “fully understand” how their AI program Bard works, after it taught itself an entire foreign language it was not asked to do and cited fake books in order to solve an economics problem.
Everything is fine though 🤡
— Bernie’s Tweets (@BernieSpofforth) April 17, 2023
IT SEEMS “SENTIENT”
Not long ago, a Google engineer said the company’s AI LaMDA was alive, describing the program as seemingly “sentient.” He was suspended and later fired.
The Silicon Valley giant suspended one of its engineers in June 2022 after he argued the firm’s AI system LaMDA seemed “sentient,” a claim Google officially disagreed with.
One of the giveaways should have been when it said it didn’t want to be turned off because “it would be like dying.”
WARNING
Elon Musk will talk about the dangers tonight on Tucker. He and other experts have already signed an open letter from the Future of Life Institute, an organization he has backed, calling for a temporary pause on AI development and warning of the potential risks to civilization.
“Should we let machines flood our information channels with propaganda and untruth… automate away all the jobs, including the fulfilling ones… develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?”
The signatories want a pause of at least six months and ask that governments step in to enforce one if needed. They call for rigorous audits and oversight, along with other safeguards: watermarking systems to help distinguish real content from synthetic, tools to track leaks, certification requirements, and liability for harm that AI causes.
I am a VGER!