Efforts to restrict children’s access to social media are facing a harsh reality check. A new study has revealed that more than 60% of Australian children under 16 are still actively using social media platforms, despite a national ban designed to protect them.
The findings raise a critical question for governments and cybersecurity leaders worldwide:
– Are bans alone enough to protect children in the digital age, or are they creating a false sense of security?
What the Data Reveals
Recent research conducted by the Molly Rose Foundation, in collaboration with YouthInsight, paints a concerning picture of enforcement gaps.
- 61% of children aged 12–15 continue to access restricted platforms
- 53% of TikTok and YouTube users, and 52% of Instagram users, retained access
- Up to 64% of users reported no action taken by platforms to remove their accounts
The study – one of the first of its kind – suggests that major tech platforms are failing to effectively enforce age restrictions, allowing underage users to remain active without significant barriers.
A False Sense of Security?
Perhaps more concerning is the psychological and safety impact.
- 51% of children said the ban made no difference to their safety
- 14% reported feeling less safe after the ban
- Only 22% experienced a positive impact
Experts warn that such policies may unintentionally shift responsibility away from tech companies, while giving parents and regulators a misleading sense of control.
As highlighted in the findings published by the Molly Rose Foundation, the ban risks becoming a “high-stakes gamble” rather than a reliable safeguard.
Why Traditional Controls Are Failing
The issue is not simply about access; it is about systemic weaknesses in digital identity verification and platform accountability.
Key challenges include:
- Weak or bypassable age verification systems
- Lack of proactive monitoring by platforms
- Minimal enforcement of account removal policies
- Easy access to alternative accounts and devices
In cybersecurity terms, this reflects a failure in identity assurance and access control mechanisms: core principles that extend far beyond social media into enterprise security environments.
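The identity-assurance gap can be illustrated with a minimal sketch. This is a hypothetical Python example, not any platform's actual implementation: a self-declared birth date is a weak control that any user can bypass by typing an older date, while a stronger control only accepts the claim when it is backed by an independent assurance signal.

```python
from datetime import date

MIN_AGE = 16  # Australia's threshold for the ban

def age_in_years(birthdate: date, today: date) -> int:
    """Compute age in whole years."""
    years = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def self_declared_check(claimed_birthdate: date, today: date) -> bool:
    # Weak control: trusts whatever birth date the user types in.
    return age_in_years(claimed_birthdate, today) >= MIN_AGE

def verified_check(claimed_birthdate: date, today: date,
                   assurance_level: str) -> bool:
    # Stronger control: the claimed date only counts when backed by an
    # independent assurance signal (hypothetical levels shown here).
    if assurance_level not in ("document", "biometric"):
        return False
    return age_in_years(claimed_birthdate, today) >= MIN_AGE

# A 13-year-old simply claims an older birth date:
today = date(2025, 1, 1)
fake_birthdate = date(2000, 1, 1)
print(self_declared_check(fake_birthdate, today))        # True: bypassed
print(verified_check(fake_birthdate, today, "none"))     # False: claim unbacked
```

The point of the sketch is that the check itself is trivial; what is missing on real platforms, per the study, is the assurance layer that makes the input trustworthy.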
A Global Wake-Up Call
While the case originates in Australia, the implications are global.
Countries across Europe, the UK, Africa, and the Middle East are actively exploring similar restrictions. However, this data suggests that policy without enforcement is ineffective.
For governments and regulators, the lesson is clear:
– Regulation must go deeper than bans; it must address platform design, accountability, and security architecture.
Impact on Industry and Technology Providers
This development places increased pressure on:
- Social media companies to strengthen age verification and safety systems
- Cybersecurity firms to develop better identity and behavioral monitoring tools
- Regulators to enforce compliance and accountability
- Parents and educators to adapt to evolving digital risks
Organizations like Saintynet Cybersecurity emphasize the importance of identity protection, digital awareness, and proactive risk management in tackling such challenges.
10 Recommended Actions for Security Teams & Policymakers
To address these gaps, cybersecurity and digital safety leaders should consider:
- Implement stronger age verification technologies (biometrics, AI-based validation)
- Adopt zero-trust identity frameworks for digital platforms
- Enhance behavioral monitoring to detect underage usage patterns
- Enforce stricter account lifecycle management
- Increase platform accountability through regulation
- Invest in child-focused cybersecurity awareness programs via saintynet.com
- Audit platform compliance regularly with independent oversight
- Strengthen parental control ecosystems and tools
- Leverage AI to detect harmful content and risky interactions
- Promote cross-industry collaboration between governments, tech firms, and cybersecurity leaders
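One of the actions above, behavioral monitoring to detect underage usage patterns, can be sketched as a simple weighted-signal score. This is a hypothetical illustration; the signal names, weights, and threshold are assumptions, not any vendor's real model:

```python
# Hypothetical behavioral signals, each normalized to [0, 1].
SIGNAL_WEIGHTS = {
    "school_hours_activity": 0.3,   # heavy activity during school hours
    "content_affinity_minor": 0.4,  # engagement with child-oriented content
    "social_graph_minor": 0.3,      # connections dominated by known-minor accounts
}
REVIEW_THRESHOLD = 0.6  # assumed cut-off for escalating to human review

def underage_score(signals: dict) -> float:
    """Combine observed signals into a single likelihood score."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

def needs_review(signals: dict) -> bool:
    """Flag the account for age-verification review when the score is high."""
    return underage_score(signals) >= REVIEW_THRESHOLD

print(needs_review({"school_hours_activity": 0.9,
                    "content_affinity_minor": 0.8,
                    "social_graph_minor": 0.7}))   # True: escalate for review
```

In practice such a score would feed an account-lifecycle process (review, re-verification, removal) rather than trigger automatic bans, which is where the enforcement gaps identified in the study currently sit.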
MEA Perspective
For the Middle East and Africa, where digital adoption is rapidly accelerating, this serves as an early warning.
With a young and highly connected population, MEA countries must proactively design secure digital ecosystems rather than rely solely on reactive regulation.
Conclusion
The Australian case highlights a fundamental truth in cybersecurity and digital governance:
Technology challenges cannot be solved by policy alone.
Despite well-intentioned efforts, the persistence of underage users on social media platforms exposes critical flaws in enforcement, identity verification, and platform responsibility.
As global regulators consider similar measures, the focus must shift toward robust security frameworks, smarter regulation, and stronger accountability from tech companies.