Technicalon · Tech

Microsoft Identifies Developers It Sued for Misusing Its AI Tools

By Kisha G | March 1, 2025 | 6 Mins Read

    Microsoft’s Commitment to AI Safety

    Microsoft has taken a significant step in its legal battle against the misuse of artificial intelligence (AI) by amending a lawsuit filed last year. The company has now named four individuals it alleges were involved in evading AI safeguards to create celebrity deepfakes. This move underscores Microsoft’s commitment to AI safety and its determination to hold bad actors accountable for abusing its technology.

    The Origins of the Lawsuit

In December 2024, Microsoft filed a lawsuit targeting unidentified perpetrators who allegedly misused its AI models. The lawsuit aimed to address security breaches in which individuals bypassed the protective guardrails of Microsoft’s AI tools to generate illicit deepfake images. A court order allowed Microsoft to seize a website linked to the operation, which ultimately led to the identification of the individuals behind the scheme.

    Defendants Identified as Part of Cybercrime Group Storm-2139

    Microsoft has named four developers who were allegedly involved in the deepfake creation scheme:

    • Arian Yadegarnia aka “Fiz” (Iran)
    • Alan Krysiak aka “Drago” (United Kingdom)
    • Ricky Yuen aka “cg-dot” (Hong Kong)
    • Phát Phùng Tấn aka “Asakuri” (Vietnam)

    These individuals are reportedly members of Storm-2139, a global cybercrime network. Microsoft claims that the group exploited compromised accounts with access to its AI tools, successfully bypassing security measures to generate any image they desired. Furthermore, the group allegedly sold access to others, enabling widespread misuse of AI technology, including the creation of deepfake nude photos of celebrities.

    Ongoing Investigations and Additional Suspects

    Microsoft has hinted that additional individuals have been identified as part of the scheme but has refrained from naming them at this stage to avoid interfering with ongoing investigations. By withholding specific names, Microsoft aims to ensure law enforcement authorities can continue their probe without obstruction.

    Immediate Fallout and Internal Conflict Among Perpetrators

    Following the lawsuit and the seizure of their website, the defendants reportedly reacted with panic. Microsoft noted that some members of the group turned against each other, pointing fingers in an attempt to shift blame. The legal actions and public exposure have evidently disrupted their operations and sowed discord among those involved.

    The Growing Threat of Deepfake Technology

    Deepfake pornography has become a significant issue, with numerous celebrities—including Taylor Swift—frequently targeted. The technology allows for the convincing superimposition of a real person’s face onto another body, often without consent. In January 2024, Microsoft had to update its text-to-image models after deepfake images of Taylor Swift spread online.

    The rise of generative AI has made it alarmingly easy for individuals with minimal technical skills to create these deceptive and harmful images. The problem has even infiltrated high schools across the U.S., leading to scandals and severe emotional harm to victims. Though deepfakes are created digitally, their impact extends to the real world, leaving victims feeling violated, anxious, and unsafe.

    AI Safety vs. Open-Source Innovation: The Ongoing Debate

    The case also ties into a broader debate within the AI community about safety, control, and accessibility:

• Proponents of closed-source AI argue that keeping models proprietary helps prevent abuse by limiting bad actors’ ability to disable safety mechanisms.
• Advocates of open-source AI counter that allowing the public to modify and improve models is essential for innovation, and that abuse can be mitigated without restricting access.

Whatever the outcome of that debate, the immediate concern remains the spread of false and harmful content online, which AI-generated misinformation and deepfakes have exacerbated.

    Legal Action Against Deepfake Abuses

    Governments and law enforcement agencies are beginning to take more decisive action against deepfake-related crimes:

    • In the U.S., several individuals have already been arrested for creating AI-generated deepfake images of minors.
    • The NO FAKES Act, introduced in Congress in 2023, seeks to criminalize the creation of AI-generated images that exploit a person’s likeness without consent.
    • In the United Kingdom, distributing deepfake pornography is already illegal, and upcoming legislation will make it a crime to produce such content as well.
    • Australia has also recently enacted laws criminalizing the creation and sharing of non-consensual deepfakes.

    Frequently Asked Questions

What prompted Microsoft’s legal action against these developers?

    Microsoft initiated legal proceedings after discovering that a group of developers, identified as part of the Storm-2139 cybercrime network, were circumventing the safety measures of its generative AI services to produce harmful and illicit content, including non-consensual intimate images and celebrity deepfakes.

    Who are the developers named in Microsoft’s lawsuit?

    The developers identified in the lawsuit are Arian Yadegarnia (“Fiz”) from Iran, Alan Krysiak (“Drago”) from the UK, Ricky Yuen (“cg-dot”) from Hong Kong, and Phát Phùng Tấn (“Asakuri”) from Vietnam.

    How did these developers misuse Microsoft’s AI tools?

    They exploited compromised customer credentials to access Microsoft’s generative AI services and bypassed built-in safety guardrails, enabling the creation of harmful content.
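A “guardrail” in this context is a server-side check the AI service runs before honoring a request, so a stolen credential alone is not enough; the attacker must also defeat the filter. The sketch below is a minimal, hypothetical illustration of the concept only (the function names and blocklist are invented, and Microsoft’s actual safety systems are far more sophisticated than keyword matching):

```python
# Hypothetical sketch of a server-side safety guardrail: the service,
# not the client, decides whether a prompt is allowed, so possessing a
# valid (or stolen) API key does not by itself disable the check.

BLOCKED_TERMS = {"deepfake", "non-consensual"}  # toy example list

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def generate_image(prompt: str, api_key: str) -> str:
    """Gatekeeper wrapper: refuse disallowed prompts before generating."""
    if not is_prompt_allowed(prompt):
        return "request refused by safety filter"
    return f"image generated for: {prompt}"
```

Bypassing such a check typically means finding prompts or endpoints the filter does not cover, which is why providers treat guardrail evasion itself as abuse.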

    What specific content did the developers generate using Microsoft’s AI services?

    The illicit content included non-consensual intimate images of celebrities and other sexually explicit material.

    How did Microsoft respond upon discovering the misuse of its AI tools?

    Microsoft revoked the cybercriminals’ access, implemented countermeasures, and enhanced safeguards to prevent future misuse.

    What legal actions has Microsoft taken against the identified developers?

    Microsoft filed a lawsuit to disrupt illicit operations, dismantle the tools used to bypass AI safety measures, and deter others from similar misuse.

    Were there any additional participants identified in the scheme?

    Yes, Microsoft identified two actors located in Illinois and Florida, whose identities remain undisclosed to avoid interfering with potential criminal investigations.

    What measures did Microsoft employ to uncover the identities of the developers?

    A court order allowed Microsoft to seize a website instrumental to the criminal operation, aiding in disrupting the scheme and uncovering its participants.

    How did the developers react to Microsoft’s legal actions?

    The unsealing of legal filings led to immediate reactions, with group members turning on each other and, in some instances, doxxing Microsoft’s lawyers by posting their personal information and photographs.

    What is Microsoft’s broader stance on the misuse of its AI technology?

    Microsoft is committed to protecting the public from abusive AI-generated content and has taken legal action to deter malicious actors from weaponizing its AI technology.

    Conclusion

    Microsoft’s legal action against developers misusing its AI tools underscores the company’s commitment to safeguarding its technology from exploitation. By identifying and suing individuals who bypassed AI safety measures to create harmful content, Microsoft aims to dismantle illicit operations and deter future misuse. This proactive approach highlights the importance of enforcing ethical standards in AI development and usage, ensuring that technological advancements benefit society while minimizing potential harm. Such measures are crucial in maintaining public trust and promoting the responsible evolution of AI technologies.
