
Kaspersky Warns of Scam Exploiting OpenAI’s Teamwork Features


Attackers abuse legitimate OpenAI collaboration tools to send convincing scam emails, raising new concerns about trust in widely used AI platforms.

Cybersecurity researchers at Kaspersky have uncovered a new scam campaign that weaponizes OpenAI’s legitimate teamwork and collaboration features, turning a trusted platform into an unexpected delivery channel for fraud.

The finding, disclosed on January 21, 2026, highlights a growing trend in cybercrime: attackers no longer need to compromise systems to launch effective scams; they simply exploit built-in platform features and human trust.

What happened – and why it matters

According to Kaspersky, attackers are registering accounts on the OpenAI platform and abusing the organization creation and team invitation features to send scam emails that appear to come directly from OpenAI itself.

Because these invitations are sent from official OpenAI email infrastructure, they look legitimate from a technical standpoint. Traditional email security controls may not flag them, and recipients are more likely to trust them, making this campaign particularly effective.
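
To illustrate why, here is a minimal Python sketch, using a hypothetical message and header values rather than a real sample, that reads the Authentication-Results header of a received invitation. SPF, DKIM, and DMARC can all legitimately pass because the message genuinely originates from the platform's mail servers, so content-level inspection is what remains.

```python
import email
import re
from email import policy

# Hypothetical invitation email; the sender address and header values are
# illustrative, not captured from the actual campaign.
raw_message = """\
From: OpenAI <noreply@openai.com>
To: employee@example.com
Subject: You've been invited to join an organization on OpenAI
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=openai.com;
 dkim=pass header.d=openai.com;
 dmarc=pass header.from=openai.com

(body omitted)
"""

msg = email.message_from_string(raw_message, policy=policy.default)
auth_results = " ".join(msg.get_all("Authentication-Results", []))

# All three mechanisms pass because the sending infrastructure is genuine,
# so domain authentication alone cannot distinguish this from a real invite.
for mechanism in ("spf", "dkim", "dmarc"):
    match = re.search(rf"{mechanism}=(\w+)", auth_results)
    print(mechanism, "->", match.group(1) if match else "not present")
```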

In an era where AI platforms are deeply embedded in business workflows, this type of abuse represents a serious social engineering risk.

How the scam works

Kaspersky researchers explain that the attack unfolds in several simple but clever steps:

  1. The attacker creates an OpenAI account.
  2. During registration, they choose an “organization name.”
  3. Instead of a normal company name, they insert scam content into that field—including deceptive messages, malicious links, or phone numbers.
  4. The attacker then uses the “Invite your team” feature to send invitations to targeted victims.
  5. The email lands in the victim’s inbox as a legitimate OpenAI invitation, with the scam message embedded inside.

The scam content is visually inconsistent with the rest of the invitation template, but attackers rely on speed, distraction, and trust to bypass scrutiny.
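
For defenders who want to screen such invitations automatically, the sketch below, written in Python with illustrative patterns rather than a production-grade filter, flags content that has no business appearing in an organization name: phone numbers, extra links, and urgency wording.

```python
import re

# Heuristic indicators that an "organization name" carries scam content rather
# than a real company name. Patterns are illustrative, not exhaustive.
PHONE_RE = re.compile(r"\+?\d[\d\s\-().]{7,}\d")
URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)
URGENCY_WORDS = ("renewed", "charged", "refund", "call now", "urgent", "suspended")

def flag_invitation(body: str) -> list[str]:
    """Return a list of reasons an invitation email body looks suspicious."""
    reasons = []
    if PHONE_RE.search(body):
        reasons.append("contains a phone number")
    if len(URL_RE.findall(body)) > 1:  # one link to the platform itself is expected
        reasons.append("contains extra links")
    lowered = body.lower()
    reasons += [f"urgency keyword: {w}" for w in URGENCY_WORDS if w in lowered]
    return reasons

# Hypothetical example resembling the campaign described by Kaspersky.
sample = ("You have been invited to join 'Your subscription was renewed for $499. "
          "Call now: +1 555 0100' on OpenAI.")
print(flag_invitation(sample))
# -> ['contains a phone number', 'urgency keyword: renewed', 'urgency keyword: call now']
```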

What victims are seeing

Kaspersky observed several types of scam messages delivered this way, including:

  • Emails advertising fraudulent or adult services
  • Vishing scams claiming a subscription was renewed for a large amount, urging victims to call a fake support number
  • Other deceptive messages designed to provoke urgency and emotional reactions

In many cases, victims are instructed to click a link or call a phone number to “resolve” the issue, steps that can lead to financial loss or further compromise.

“This case highlights how platform features can be weaponised for social engineering email attacks,” said Anna Lazaricheva, Senior Spam Analyst at Kaspersky. “Scammers are exploiting user trust in reputable services to bypass both filters and skepticism.”

Why this matters globally – including MEA

This campaign is not limited by geography. Any organization or individual using OpenAI’s collaboration features could be targeted.

For businesses across the Middle East and Africa, where AI adoption, digital transformation, and remote collaboration are accelerating, the risk is especially relevant. Enterprises, startups, and government entities increasingly rely on cloud and AI platforms, making trust-based attacks a growing concern.

It reinforces a critical reality: cybersecurity is no longer just about defending infrastructure, but about defending workflows, platforms, and user behavior, a core focus of modern cybersecurity risk management practices.

Wider implications for the industry

This incident raises uncomfortable questions for all platform providers:

  • How can collaboration features be abused?
  • Are “non-security” fields (like organization names) being properly sanitized? (An illustrative sketch follows this list.)
  • How much responsibility should platforms bear when their infrastructure is used for scams?
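
On the second question, a minimal sketch of what server-side sanitization of a free-text organization-name field could look like (the rules and length limit here are hypothetical, not OpenAI's actual validation):

```python
import re

MAX_LEN = 64  # hypothetical limit; real platforms choose their own

def sanitize_org_name(name: str) -> str:
    """Reject or normalize content that has no place in an organization name."""
    if re.search(r"https?://|www\.", name, re.IGNORECASE):
        raise ValueError("organization name must not contain links")
    if re.search(r"\+?\d[\d\s\-().]{7,}\d", name):
        raise ValueError("organization name must not contain phone numbers")
    # Collapse whitespace and enforce a sensible length before the value is
    # echoed into outbound invitation emails.
    name = re.sub(r"\s+", " ", name).strip()
    if not (1 <= len(name) <= MAX_LEN):
        raise ValueError("organization name length out of range")
    return name
```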

It also shows why security awareness training remains essential, even when messages appear to come from trusted brands—a key area addressed by training and awareness programs at training.saintynet.com.

What security teams and users should do now

Kaspersky and industry experts recommend the following actions:

  1. Treat unsolicited collaboration invitations with suspicion—even from trusted platforms.
  2. Carefully inspect email content for inconsistencies or unusual formatting.
  3. Hover over and verify URLs before clicking any links (a short illustrative script follows this list).
  4. Never call phone numbers provided in unexpected or suspicious emails.
  5. Look up official support contact details directly on the service’s website.
  6. Report suspicious invitations to the platform provider immediately.
  7. Enable multi-factor authentication (MFA) on all accounts.
  8. Educate employees about social engineering attacks abusing legitimate tools.
  9. Review email security policies to account for “trusted sender” abuse scenarios.
  10. Regularly update incident response playbooks to include platform-based scams.
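
For step 3, a short illustrative script, assuming you have the HTML body of the message, lists every link alongside its visible text so that mismatched destinations stand out before anyone clicks:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (actual target, visible text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []          # list of [href, visible_text]
        self._in_link = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.append([dict(attrs).get("href", ""), ""])
            self._in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_link = False

    def handle_data(self, data):
        if self._in_link and self.links:
            self.links[-1][1] += data.strip()

# Hypothetical snippet: the displayed text claims one destination, the href points elsewhere.
html_body = '<p>Manage your plan at <a href="https://evil.example/billing">openai.com/account</a></p>'
auditor = LinkAuditor()
auditor.feed(html_body)
for href, text in auditor.links:
    print(f"visible: {text!r:32} actual: {href}")
```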

Organizations seeking structured guidance can benefit from governance, risk, and compliance advisory services provided by Saintynet Cybersecurity, as well as ongoing awareness initiatives highlighted on cybercory.com.

The takeaway

The OpenAI invitation scam is a reminder that attackers evolve as quickly as technology does. When trusted platforms become attack vectors, awareness and vigilance matter as much as technical controls.

AI tools are transforming how we work, but as this campaign shows, they also reshape how scams are delivered. Trust, once broken, is hard to rebuild.
