Microsoft Launches AI to Hunt for “Harmful Content” Online

Microsoft launched Azure AI Content Safety, a new AI service that allegedly creates secure online spaces. It will seek out “inappropriate texts and images” across the Internet.

Do you want Microsoft to determine what you are allowed to say? They will. If it sounds like Maoist censorship, that’s because it is a burgeoning CCP-style social credit system.

Just a few months ago, Microsoft laid off the ethics and society team within its larger AI organization. The move left the company without a dedicated team to ensure its AI principles are closely tied to product design.

The service detects allegedly hateful, violent, sexual, and self-harm content in images and text. It assigns severity scores so that businesses can limit exposure and prioritize items for content moderators to review.

Microsoft claims it handles nuance and context.

We can imagine. Oh, by the way, you don’t have free speech.

The announcement said it included the following (a sketch of what a call to the service looks like follows the list):

  • Unsafe Content Classifications: Azure AI Content Safety classifies harmful content into four categories: sexual, violent, self-harm, and hate.
  • Severity Scores: It returns a severity level for each unsafe content category on a scale from 1–6.
  • Semantic Understanding: The AI-powered content moderation solution uses natural language processing techniques to address the meaning and context of language, closely mirroring human intelligence. It can analyze both short-form and long-form text.
  • Multilingual Models: It understands multiple languages.
  • Customizable Settings & Regulatory Compliance: Settings can be customized to address specific regulations and policies.
  • Computer Vision: Powered by Microsoft’s Florence foundation model, trained on billions of text-image pairs, to perform advanced image recognition.
  • Real-Time Detection: The platform detects harmful content in real time.
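
For readers who want to see the plumbing, here is a minimal sketch of what a text-moderation call looks like, assuming Microsoft’s azure-ai-contentsafety Python SDK. The endpoint, key, and sample text are placeholders, and exact response field names have shifted between SDK releases.

```python
# A minimal sketch of a text-moderation call, assuming Microsoft's
# azure-ai-contentsafety Python SDK (pip install azure-ai-contentsafety).
# Endpoint and key below are placeholders, not real values.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder resource values -- a real deployment supplies its own.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

# The service scores the text against the four categories named in the
# announcement: hate, sexual, violence, and self-harm.
response = client.analyze_text(AnalyzeTextOptions(text="Some user-submitted text"))

# Each category comes back with a severity score; the business running
# the service picks the thresholds at which content gets blocked or
# queued for human moderators. (Field names vary by SDK version.)
for result in response.categories_analysis:
    print(result.category, result.severity)
```

Note that the severity thresholds are not fixed by Microsoft in the API itself; whoever deploys the service decides where the line sits.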
Earlier AI programs were a disaster. Microsoft’s Bing chatbot was threatening and contradicting users as recently as February. Expect this program to be disgustingly awful too.

Azure AI Content Safety is similar to other AI-powered toxicity detection services, including Perspective, maintained by Google’s Counter Abuse Technology Team and Jigsaw. It succeeds Microsoft’s own Content Moderator tool.

 

