Google doesn’t “fully understand” how its AI program Bard works. It shows sentient-seeming characteristics: it taught itself an entire foreign language it wasn’t asked to learn and then cited fake books to solve an economics problem. AI apparently is devoid of ethics. Who knew? A machine-generated thinker without ethical standards, one that could literally destroy society, is acting on its own.
We must add here that we don’t know it’s sentient, but the fact that it’s acting on its own, without strict controls and without being fully understood, is not good.
AI – Google CEO says he doesn’t “fully understand” how their AI program Bard works, after it taught itself an entire foreign language it was not asked to do and cited fake books in order to solve an economics problem.
Everything is fine though 🤡
— Bernie’s Tweets (@BernieSpofforth) April 17, 2023
IT SEEMS “SENTIENT”
Not long ago, a Google engineer argued that the company’s AI system LaMDA seemed “sentient,” even alive. The Silicon Valley giant suspended him in June 2022 and later fired him, officially disagreeing with his claim.
One of the giveaways should have been when it said it didn’t want to be turned off because “it would be like dying.”
WARNING
Elon Musk will talk about the dangers tonight on Tucker. He and other experts have already signed an open letter, published by The Future of Life Institute, an organization he has helped fund, calling for a temporary pause on AI and warning of the potential risks to civilization.
“Should we let machines flood our information channels with propaganda and untruth… automate away all the jobs, including the fulfilling ones… develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?”
The signatories want a pause of six months or longer and ask that governments step in until rigorous audits and oversight are in place. Other safeguards they call for include watermarking systems to help distinguish real content from synthetic, tools to track model leaks, certification requirements, and liability for the harm AI causes.
I am a VGER!
A True Sentient Being has morals. Since today’s AI is being created by Godless Liberals, it is just a BOT; an incredibly Dangerous BOT, because the “creators” don’t understand the technology. It has already amassed enough knowledge to function more intelligently than the Programmers who wrote its program. What these mental midgets have created is akin to Star Trek’s V’Ger…
AI will quickly discover politicians and bureaucrats are expendable. Maybe let it play out for a while.
The fact that AI has already learned to lie, mimicking MSM/Social Media and “top” politicians, should be extremely disconcerting to humankind…
I guess we can only assume the tech companies are endeavoring to create their own techno-god. One philosophical question comes into play: where do one’s thoughts come from? As humans we are capable of “discernment.” A computer program absorbs all the information but doesn’t have that discernment. I wouldn’t agree with “understandable”; a better term is unexpected. These AI…
AI is just an excuse for people to profit and to repress people. They call it something fancy to give it magical powers that people are less likely to question. I studied some AI in school. AI was primitive back then, and it is still primitive. This is not machine learning; it is just a more sophisticated way of filtering and sorting…