Starting December 10th, Australia will implement a ban on individuals under the age of 16 from accessing major social media platforms, including TikTok, X, Facebook, Instagram, YouTube, Snapchat, and Threads.
Under the new regulations, minors will be barred from creating new accounts, and existing profiles will be subject to deactivation.
This decision marks a global first and is being closely observed by other nations.
The government cites the need to mitigate the adverse effects of social media’s “design features that encourage [young people] to spend more time on screens while also serving up content that can harm their health and wellbeing.”
A government-commissioned study from earlier in 2025 revealed that 96% of children aged 10-15 use social media, and 70% have been exposed to harmful content, ranging from expressions of misogyny and violence to content promoting eating disorders and suicide.
Furthermore, one in seven children reported experiencing grooming-like behavior from adults or older children, and over half indicated they have been victims of cyberbullying.
The ban encompasses ten platforms at present: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, and streaming platforms Kick and Twitch.
The government evaluates potential sites against three primary criteria.
YouTube Kids, Google Classroom, and WhatsApp are excluded from the ban as they do not meet the specified criteria.
Individuals under 16 will still be able to view content on online platforms that do not require account registration.
Critics have urged the government to extend the ban to encompass online gaming sites.
Platforms like Roblox and Discord have begun implementing age verification measures on certain features, seemingly in anticipation of potential inclusion.
Children and parents will not incur penalties for violating the ban.
Instead, social media companies face fines of up to A$49.5 million (US$32 million, £25 million) for severe or repeated breaches.
The government affirms that firms must take “reasonable steps” to prevent underage access to their platforms, including the implementation of various age assurance technologies.
These technologies may include government-issued IDs, facial or voice recognition, or “age inference,” which assesses online behavior and interactions to estimate a user’s age.
Platforms are prohibited from relying solely on user self-certification or parental vouching.
Meta, the parent company of Facebook, Instagram, and Threads, initiated the closure of teen accounts starting December 4th, stating that individuals mistakenly removed could present government identification or a video selfie to confirm their age.
Snapchat has announced that users can utilize bank accounts, photo IDs, or selfies for verification.
Some express concerns that age assurance technologies may inadvertently block adult users while failing to identify underage users.
The government’s own report suggests that facial assessment technology is least reliable for teenagers.
Doubts have also surfaced about whether the potential fines are substantial enough to deter the companies involved.
“It takes Meta about an hour and 52 minutes to make A$50 million in revenue,” Stephen Scheeler, a former Facebook executive, told the AAP news agency.
Critics further contend that the limited scope of the ban, even if properly enforced, undermines its capacity to protect children.
Dating websites are excluded along with gaming platforms, as are AI chatbots, which were recently highlighted for allegedly encouraging children to commit suicide and for engaging in “sensual” conversations with minors.
Others argue that educating children about responsible social media use would be more effective.
Some teenagers have informed the BBC of their intent to establish fake profiles prior to the deadline, despite government warnings to social media companies to identify and remove such accounts. Others have transitioned to shared accounts with their parents.
Commentators also anticipate a surge in the use of VPNs, which conceal a user’s location, similar to what occurred in the UK after age verification rules were introduced there.
Communications Minister Anika Wells acknowledged that the ban may not be “perfect.”
“It’s going to look a bit untidy on the way through,” she said in early November. “Big reforms always do.”
Concerns have also been voiced regarding the extensive data collection and storage needed to verify users’ ages.
Australia, like many countries, has experienced notable data breaches, resulting in the theft and publication or sale of sensitive personal data.
However, the government contends that the legislation incorporates “strong protections” for personal data.
These protections stipulate that data may only be used for age verification purposes and must be subsequently destroyed, with “serious penalties” imposed for breaches.
Social media companies expressed dismay when the ban was announced in November 2024.
Firms argued that the ban would be difficult to enforce, easily circumvented, time-consuming for users, and pose risks to their privacy.
Companies also suggested that it might drive children to more obscure corners of the internet and deprive young people of social interaction.
Snap, the owner of Snapchat, and YouTube also contested being classified as social media companies.
Days before the ban’s implementation, YouTube expressed concerns that the “rushed” new laws would leave children less safe, as they would still be able to use the platform without an account, thereby removing “the very parental controls and safety filters built to protect them.”
YouTube’s parent company, Google, reportedly considered legal action over YouTube’s inclusion in the ban but did not respond to a BBC request for comment.
Despite acting ahead of the deadline, Meta cautioned that the ban would leave teens with “inconsistent protections across the many apps they use.”
During parliamentary hearings in October 2025, TikTok and Snap stated their opposition to the ban but affirmed their intention to comply.
Kick, the only Australian company subject to the new law, stated that it would introduce a “range of measures” as it continued to engage “constructively” with authorities.
Denmark has announced plans to ban social media for under-15s, while Norway is considering a similar proposal.
A French parliamentary enquiry also recommended banning under-15s from social media and implementing a social media “curfew” for 15- to 18-year-olds.
The Spanish government has drafted a law that would require legal guardians to authorize access for under-16s.
In the UK, new safety rules introduced in July 2025 impose potentially large fines or even imprisonment for executives of online companies that fail to protect young people from illegal and harmful content.
Meanwhile, in 2024, a federal judge blocked an attempt in the US state of Utah to ban social media use for under-18s without parental consent.
