Anthropic Wants Power Reserved for Elected Officials
President Trump abruptly canceled Anthropic's AI across all federal agencies; phasing the company out entirely will take six months. The dispute arose when Anthropic said it would not allow its AI to be used in two limited cases: mass domestic surveillance and fully autonomous weapons.

The Pentagon tried to negotiate a deal on the two issues because it agreed with the company's CEO.

Anthropic claims it is still trying to reach a deal, but the administration hasn't responded in a way the company can accept. CEO Dario Amodei said that no one in the administration has run into his red lines. However, in the linked video (11:00), Amodei admits that he wants to make the decision if he thinks a red line has been crossed. He, not the government, would make the call.

Amodei insists that he be allowed to decide how Anthropic's AI is used in these “limited cases.” He keeps saying there are only two cases, but both are vague and could be broadened enormously.

He has set up a clever cloak: no one likes mass surveillance or fully autonomous weapons. In reality, by his own words, he wants to “balance” what the government does.

Check out his constitution, which we link below.

Secretary Hegseth wrote on X:

Our position has never wavered and will never waver. The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission—a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.

The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.

As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Incompatible with American Principles

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.

In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

The President said that the United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars and appoints leaders.

He called them “woke” and said they made a disastrous mistake.

In October, Dario Amodei insisted he is not a wokester after heavy criticism from AI Czar David Sacks.

Sacks wrote that the “real issue” is “Anthropic’s agenda to backdoor Woke AI and other AI regulations through Blue states like California.”

Anthropic has its own constitution, which follows the woke UN Declaration of Human Rights.

Palmer Luckey explains:

This gets to the core of the issue more than any debate about specific terms.

Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like “You cannot target innocent civilians,” are actually moral minefields that leverage differences of cultural tradition into massive control.

Who is a civilian and who is not? What makes them innocent or not? What does it mean for them to be a “target” vs collateral damage? Existing policy and law have very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer.

Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians, and that they can shut off access if elected leaders decide to break those terms.

Sounds good, right? Not really—in addition to the value judgement problems I list above, you also have to account for questions like

- What level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more?

- What if an elected President merely threatens a dictator with using our weapons in a certain way, ala Madman Theory/MAD? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executive happens to like the dictator or dislike the president?

- At what level of confidence does the cutoff trigger, both in writing and in reality?

The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and uses of ethically fraught yet important capabilities, such as surveillance systems or autonomous weapons. It is easy to say, “But they will have cutouts to operate with autonomous systems for defensive use!”, but you immediately get into the same issues and more—what is autonomous? What is defensive? What about defending an asset during an offensive action or parking a carrier group off the coast of a nation that considers us to be offensive?

At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, and that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe.

And that is why “bro just agree the AI won’t be involved in autonomous weapons or mass surveillance why can’t you agree it is so simple, please, bro” is an untenable position that the United States cannot possibly accept.

If you read Anthropic’s constitution, you will see what he means.
