Microsoft Legal Action Against Group Bypassing AI Safeguards

Written by Matt Milano

Microsoft is taking legal action against a group of cybercriminals who have been developing tools designed to bypass AI safeguards in order to create harmful content.

All major AI models include safeguards designed to prevent them from generating illegal or harmful content. Depending on the company behind the model, those safeguards may also offer varying degrees of protection against content that could be deemed offensive, with X's Grok being among the most lenient in that regard.

As with any protective measure, there is a growing market for tools designed to bypass those safeguards, giving users the ability to make AI generate anything they want, including malware, illicit content, and more.

Microsoft has had enough and is taking at least one such group of cybercriminals to court in the Eastern District of Virginia. The company explains:

Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.

This activity directly violates U.S. law and the Acceptable Use Policy and Code of Conduct for our services. Today’s unsealed court filings are part of an ongoing investigation into the creators of these illicit tools and services. Specifically, the court order has enabled us to seize a website instrumental to the criminal operation that will allow us to gather crucial evidence about the individuals behind these operations, to decipher how these services are monetized, and to disrupt additional technical infrastructure we find. At the same time, we have added additional safety mitigations targeting the activity we have observed and will continue to strengthen our guardrails based on the findings of our investigation.

Microsoft says it is taking additional action, beyond legal measures, to strengthen its AI safeguards.

Beyond legal actions and the perpetual strengthening of our safety guardrails, Microsoft continues to pursue additional proactive measures and partnerships with others to tackle online harms while advocating for new laws that provide government authorities with necessary tools to effectively combat the abuse of AI, particularly to harm others. Microsoft recently released an extensive report, “Protecting the Public from Abusive AI-Generated Content,” which sets forth recommendations for industry and government to better protect the public, and specifically women and children, from actors with malign motives.

For nearly two decades, Microsoft’s DCU has worked to disrupt and deter cybercriminals who seek to weaponize the everyday tools consumers and businesses have come to rely on. Today, the DCU builds on this approach and is applying key learnings from past cybersecurity actions to prevent the abuse of generative AI. Microsoft will continue to do its part by looking for creative ways to protect people online, transparently reporting on our findings, taking legal action against those who attempt to weaponize AI technology, and working with others across public and private sectors globally to help all AI platforms remain secure against harmful abuse.

Microsoft’s legal action underscores the growing battle AI firms will face in their efforts to ensure AI remains secure and is not used to cause harm.