Tue. Dec 16th, 2025
Australia's Social Media Ban for Under-16s: Implementation Challenges Ahead

Starting December 10th, Australia will bar people under the age of 16 from accessing major social media platforms, including TikTok, X, Facebook, Instagram, YouTube, Snapchat, and Threads.

Under the new regulations, minors will be barred from creating new accounts, and existing profiles will be subject to deactivation.

This decision marks a global first and is being closely observed by other nations.

The government cites the need to mitigate the adverse effects of social media’s “design features that encourage [young people] to spend more time on screens while also serving up content that can harm their health and wellbeing.”

A government-commissioned study from earlier in 2025 revealed that 96% of children aged 10-15 use social media, and 70% have been exposed to harmful content, ranging from expressions of misogyny and violence to content promoting eating disorders and suicide.

Furthermore, one in seven children reported experiencing grooming-like behavior from adults or older children, and over half indicated they have been victims of cyberbullying.

The ban encompasses ten platforms at present: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, and streaming platforms Kick and Twitch.

The government evaluates potential sites against three primary criteria; YouTube Kids, Google Classroom, and WhatsApp are excluded from the ban because they do not meet them.

Individuals under 16 will still be able to view content on online platforms that do not require account registration.

Critics have urged the government to extend the ban to encompass online gaming sites.

Platforms like Roblox and Discord have begun implementing age verification measures on certain features, seemingly in anticipation of potential inclusion.

Children and parents will not incur penalties for violating the ban.

Instead, social media companies face fines of up to A$49.5 million (US$32 million, £25 million) for severe or repeated breaches.

The government affirms that firms must take “reasonable steps” to prevent underage access to their platforms, including the implementation of various age assurance technologies.

These technologies may include government-issued IDs, facial or voice recognition, or “age inference,” which assesses online behavior and interactions to estimate a user’s age.
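
How platforms actually perform age inference is not publicly documented. As a purely illustrative sketch, a behavioural estimator might combine weak account signals into a single score; every signal name, weight, and threshold below is a hypothetical assumption, not any platform's real method.

    # Hypothetical sketch of behavioural "age inference" -- signal names,
    # weights and the threshold are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class AccountSignals:
        stated_age: int                    # self-reported age (cannot be relied on alone)
        follows_school_accounts: bool      # follows accounts associated with schools
        weekday_daytime_activity: float    # share of activity during school hours
        network_median_age: float          # estimated median age of followed accounts

    def likely_under_16(s: AccountSignals, threshold: float = 0.5) -> bool:
        """Combine weak behavioural signals into an under-16 likelihood estimate."""
        score = 0.0
        if s.stated_age < 16:
            score += 0.4
        if s.follows_school_accounts:
            score += 0.2
        if s.weekday_daytime_activity < 0.1:   # largely offline during school hours
            score += 0.2
        if s.network_median_age < 18:
            score += 0.2
        return score >= threshold

    # Example: a self-declared 17-year-old whose network looks teenage is flagged
    # for a further check (ID or selfie) rather than being blocked outright.
    print(likely_under_16(AccountSignals(17, True, 0.05, 15.0)))   # True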

Platforms are prohibited from relying solely on user self-certification or parental vouching.

Meta, the parent company of Facebook, Instagram, and Threads, began closing teen accounts on December 4th, stating that individuals mistakenly removed could present government identification or a video selfie to confirm their age.

Snapchat has announced that users can utilize bank accounts, photo IDs, or selfies for verification.

Some express concerns that age assurance technologies may inadvertently block adult users while failing to identify underage users.

The government’s own report suggests that facial assessment technology is least reliable for teenagers.

Doubts have also surfaced about whether the potential fines are substantial enough to act as a deterrent.

“It takes Meta about an hour and 52 minutes to make A$50 million in revenue,” Stephen Scheeler, a former Facebook executive, told the AAP news agency.
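
A rough back-of-the-envelope check of that figure, assuming Meta's reported 2024 revenue of roughly US$164.5bn and an exchange rate of about A$1.55 per US dollar (both assumptions for illustration, not figures from the article):

    # Back-of-the-envelope check of the "hour and 52 minutes" estimate.
    # Assumed inputs: Meta's 2024 revenue of ~US$164.5bn and ~1.55 AUD per USD.
    annual_revenue_usd = 164.5e9
    aud_per_usd = 1.55
    max_fine_aud = 49.5e6                      # maximum fine under the Australian law

    revenue_aud_per_hour = annual_revenue_usd * aud_per_usd / (365 * 24)
    hours_to_earn_fine = max_fine_aud / revenue_aud_per_hour

    print(f"Revenue per hour: A${revenue_aud_per_hour / 1e6:.1f}m")      # ~A$29m
    print(f"Hours to earn the maximum fine: {hours_to_earn_fine:.1f}")   # ~1.7 hours

On those assumptions the result lands in the same ballpark as Scheeler's estimate.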

Critics further contend that the limited scope of the ban, even if properly enforced, undermines its capacity to protect children.

Dating websites and gaming platforms are excluded, as are AI chatbots, which were recently highlighted for allegedly encouraging children to commit suicide and for engaging in “sensual” conversations with minors.

Others argue that educating children about responsible social media use would be more effective.

Some teenagers have informed the BBC of their intent to establish fake profiles prior to the deadline, despite government warnings to social media companies to identify and remove such accounts. Others have transitioned to shared accounts with their parents.

Commentators also anticipate a surge in VPN use, which conceals a user’s location, similar to what occurred in the UK following the implementation of age control regulations there.

Communications Minister Anika Wells acknowledged that the ban may not be “perfect.”

“It’s going to look a bit untidy on the way through,” she said in early November. “Big reforms always do.”

Concerns have also been voiced regarding the extensive data collection and storage needed to verify users’ ages.

Australia, like many countries, has experienced notable data breaches, resulting in the theft and publication or sale of sensitive personal data.

However, the government contends that the legislation incorporates “strong protections” for personal data.

These protections stipulate that data may only be used for age verification purposes and must be subsequently destroyed, with “serious penalties” imposed for breaches.

Social media companies expressed dismay when the ban was announced in November 2024.

Firms argued that the ban would be difficult to enforce, easily circumvented, time-consuming for users, and pose risks to their privacy.

Companies also suggested that it might drive children to more obscure corners of the internet and deprive young people of social interaction.

Snap, the owner of Snapchat, and YouTube also contested being classified as social media companies.

Days before the ban’s implementation, YouTube expressed concerns that the “rushed” new laws would leave children less safe, as they would still be able to use the platform without an account, thereby removing “the very parental controls and safety filters built to protect them.”

YouTube’s parent company, Google, reportedly considered legal action over YouTube’s inclusion in the ban but did not respond to a BBC request for comment.

Despite its early implementation, Meta cautioned that the ban would leave teens with “inconsistent protections across the many apps they use.”

During parliamentary hearings in October 2025, TikTok and Snap stated their opposition to the ban but affirmed their intention to comply.

Kick, the only Australian company subject to the new law, stated that it would introduce a “range of measures” as it continued to engage “constructively” with authorities.

Denmark has announced plans to ban social media for under-15s, while Norway is considering a similar proposal.

A French parliamentary enquiry also recommended banning under-15s from social media and implementing a social media “curfew” for 15- to 18-year-olds.

The Spanish government has drafted a law that would require legal guardians to authorize access for under-16s.

In the UK, new safety rules introduced in July 2025 impose potentially large fines or even imprisonment for executives of online companies that fail to protect young people from illegal and harmful content.

Meanwhile, in 2024, a federal judge blocked an attempt in the US state of Utah to ban social media use for under-18s without parental consent.
