

Fighting for User Privacy: OpenAI Pushes Back Against The New York Times’ Demand for 20 Million ChatGPT Conversations


In what’s shaping up to be a major battle over digital privacy and journalistic boundaries, OpenAI has accused The New York Times of attempting to invade user privacy by demanding access to 20 million private ChatGPT conversations. The request, part of an ongoing lawsuit filed by the Times, seeks to uncover potential copyright violations, but OpenAI argues the demand would expose millions of highly personal interactions between users and their AI assistant.

OpenAI has refused to comply, calling the move “an unprecedented violation of privacy” that disregards established security and data-protection standards.

A Clash Between Privacy and Legal Power

The dispute arises from a lawsuit filed by The New York Times against OpenAI and Microsoft, alleging copyright infringement related to the newspaper’s articles being used to train AI models. As part of its discovery process, the Times reportedly demanded that OpenAI hand over 20 million randomly selected user conversations spanning from December 2022 to November 2024.

OpenAI says this demand is not only excessive but also irrelevant to the legal claims, as it includes conversations from millions of users who have no connection to the case.

“Your private conversations are yours—and they should not become collateral in a dispute over online content access,” said Dane Stuckey, Chief Information Security Officer at OpenAI.

This is not the first time the Times has attempted such a request. Earlier, it sought access to 1.4 billion user conversations and even tried to remove users’ ability to delete their private chats. OpenAI successfully resisted those efforts and has vowed to do so again.

Why It Matters

This case goes beyond a single lawsuit: it strikes at the heart of how AI companies manage user data and how far legal discovery can stretch in the digital age.

Each week, over 800 million people rely on ChatGPT to assist with sensitive and personal tasks—from writing legal documents to discussing health concerns. Turning over such data, even in anonymized form, could expose deeply personal information, damaging user trust and raising concerns about the future of privacy in AI interactions.

OpenAI maintains that it has robust safeguards in place, including encryption, data de-identification, and strict internal access controls. It warned that complying with the Times’ demand would force it to hand over the same data it works hard to protect from hackers and nation-state attackers.

The Broader Implications

The conflict highlights a growing tension between journalistic transparency and individual privacy rights in the age of artificial intelligence. While investigative journalism has long played a role in defending privacy, OpenAI argues that this particular demand undermines that tradition.

For organizations in the Middle East and Africa, this case resonates strongly. With many countries in the region accelerating their adoption of AI technologies, questions of data ownership, user consent, and AI regulation are becoming increasingly urgent. Governments and companies are watching closely, knowing that a U.S. precedent could influence global data privacy frameworks, including GDPR-aligned laws and upcoming African data-protection acts.

10 Best Practices for Organizations and Security Teams

  1. Prioritize Data Minimization: Collect and store only what’s necessary to deliver services.
  2. Implement Strong Encryption: Use client-side encryption to protect user conversations and files.
  3. Regularly Audit Data Access: Limit who can view or export sensitive information.
  4. Enhance Transparency: Communicate clearly to users how their data is handled and stored.
  5. Comply with Regional Privacy Laws: Stay updated on GDPR, NESA, NCA, and SAMA requirements.
  6. Deploy Data De-Identification: Strip personal identifiers from stored content to protect privacy (see the sketch after this list).
  7. Train Teams on Privacy Awareness: Run regular cybersecurity awareness programs for staff handling user data.
  8. Establish Legal Safeguards: Work with counsel to define strict data-sharing boundaries.
  9. Use Secure Infrastructure: Partner with trusted providers such as Saintynet Cybersecurity to harden systems.
  10. Maintain User Control: Always allow users to delete or export their personal data.
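
As an illustration of practice #6, the minimal sketch below shows how a security team might strip common personal identifiers from stored chat transcripts before retention or disclosure. The regex patterns, labels, and the `deidentify` helper are illustrative assumptions, not any vendor’s actual pipeline; production de-identification usually combines pattern matching with NER models and manual review.

```python
import re

# Illustrative patterns for common personal identifiers (an assumption,
# not an exhaustive or production-grade rule set).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def deidentify(text: str) -> str:
    """Replace matched identifiers with typed placeholders such as [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Contact me at jane.doe@example.com or +971 50 123 4567 from 192.168.1.10."
    print(deidentify(sample))
    # -> Contact me at [EMAIL] or [PHONE] from [IP_ADDRESS].
```

De-identification of this kind reduces, but does not eliminate, re-identification risk; it should be layered with encryption, access controls, and retention limits rather than treated as a substitute for them.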

Conclusion

OpenAI’s firm stance against The New York Times marks a defining moment in the evolving relationship between AI innovation, journalism, and privacy rights. While courts will decide the legal outcome, the broader issue is clear: users must retain control over their digital conversations, no matter who demands access.

As AI continues to weave itself into the fabric of personal and professional life, protecting privacy is not just a technical responsibility; it is a moral one.

“We are committed to a future where you can trust that your most personal AI conversations are safe, secure, and truly private,” said Stuckey.

In an era where data is the new currency, this fight isn’t just OpenAI’s—it belongs to every digital citizen who values the right to privacy.

Ouaissou DEMBELE (http://cybercory.com)
Ouaissou DEMBELE is a seasoned cybersecurity expert with over 12 years of experience, specializing in purple teaming, governance, risk management, and compliance (GRC). He currently serves as Co-founder & Group CEO of Sainttly Group, a UAE-based conglomerate comprising Saintynet Cybersecurity, Cybercory.com, and CISO Paradise. At Saintynet, where he also acts as General Manager, Ouaissou leads the company’s cybersecurity vision: developing long-term strategies, ensuring regulatory compliance, and guiding clients in identifying and mitigating evolving threats. As CEO, his mission is to empower organizations with resilient, future-ready cybersecurity frameworks while driving innovation, trust, and strategic value across Sainttly Group’s divisions.

Before founding Saintynet, Ouaissou held various consulting roles across the MEA region, collaborating with global organizations on security architecture, operations, and compliance programs. He is also an experienced speaker and trainer, frequently sharing his insights at industry conferences and professional events. Ouaissou holds and teaches multiple certifications, including CCNP Security, CEH, CISSP, CISM, CCSP, Security+, ITILv4, PMP, and ISO 27001, in addition to a Master’s Diploma in Network Security (2013).

Through his deep expertise and leadership, Ouaissou plays a pivotal role at Cybercory.com as Editor-in-Chief, and remains a trusted advisor to organizations seeking to elevate their cybersecurity posture and resilience in an increasingly complex threat landscape.
