Australia’s internet regulator, the eSafety Commissioner, has signalled it may force app stores and search engines to block AI services that fail to verify user ages ahead of a March 9 deadline.
Starting this Sunday, AI platforms operating in Australia — including tools like ChatGPT, Gemini and companion chatbots — must prevent users under 18 from accessing harmful content, including pornography, extreme violence, and material promoting self-harm or eating disorders. Companies that fail to comply face fines of up to $49.5 million.
The problem? A Reuters review of the 50 most popular text-based AI products found that only nine have rolled out or announced age verification systems. Another 11 plan to either apply blanket content filters or block Australian users entirely. That leaves 30 platforms with no visible steps towards compliance.
The eSafety Commissioner has warned it will use its full range of powers against non-compliant services, including going after “gatekeeper services such as search engines and app stores” that provide access to those platforms.
This follows Australia’s world-first ban on social media for teenagers last December. The country is now positioning itself as a global leader in AI regulation, with its new AI Safety Institute also launching this year, backed by $29.9 million in government funding.
Major platforms like Claude (Anthropic), ChatGPT (OpenAI), and Character.AI have started implementing age verification or content filters. Apple has said it will use “reasonable methods” to prevent minors from downloading 18+ apps. Google declined to comment.
Source: Reuters via 9to5Mac