Shock Waves: Meta, TikTok & Google Face Massive Fines After Breaking Australia's Social Media Ban!

The Australian government is taking a hard stance against major tech companies like Meta, TikTok, and Google for allegedly failing to enforce a recent ban on social media accounts for users under the age of 16. This comes after a survey revealed that a significant number of Australian children still have social media accounts, despite the new regulations aimed at protecting younger users from online risks.

According to a survey of nearly 900 Australian parents, around 31% reported that their children continued to hold one or more social media accounts after the ban took effect in December 2025. This is a marked decline from 49% prior to the implementation of the law. Alarmingly, among the under-16s who had accounts on platforms like Instagram, Snapchat, and TikTok before the ban, about 70% have managed to retain access.

On Tuesday, Australia's Communications Minister Anika Wells announced that platforms including Instagram, Facebook, Snapchat, TikTok, and YouTube are under investigation for potential non-compliance with the ban. Wells criticized the companies' insufficient efforts to enforce the age restrictions, saying that tools such as facial age estimation have proved ineffective as deployed and that the firms have been lax in their age verification processes.

“None of this is impossible. None of this is even difficult for big tech who are innovative billion dollar companies. What this update shows is unacceptable,” Wells said during a press conference in Canberra. “If these companies want to do business in Australia, they must obey Australian laws.”

The regulations classify platforms such as Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X (formerly known as Twitter), YouTube, Kick, and Reddit as “age-restricted platforms.” The laws are designed to require these companies to take reasonable steps to prevent children under 16 from creating accounts. Violating these laws can result in penalties of up to A$49.5 million (approximately US$33.9 million).

In January, the government reported that over 4.7 million social media accounts were deactivated, removed, or restricted within the first few days after the ban took effect. However, they did not specify how many accounts were removed from each individual platform. While the Australian government has touted the ban as a success, recent findings reveal that many children remain online, indicating that the policy has not fully achieved its intended goals.

The eSafety Commissioner's compliance report, issued more than three months post-implementation, noted that despite an overall drop in account ownership, a substantial number of children under 16 still maintain accounts on these platforms. Around 70% of parents whose children had accounts prior to the ban reported that their kids still had access to Facebook (63.6%), Instagram (69.1%), Snapchat (69.4%), and TikTok (69.3%). Notably, nearly half (48.5%) of the parents indicated that their children still maintained a YouTube account.

Before the ban, about 49.7% of children surveyed had social media accounts; this number has since dropped to 31.3%. The Albanese government acknowledges that it may take time for all minors to be removed from these platforms, emphasizing that the law is designed to help parents establish rules for their households.

The eSafety report highlighted that many children still have access to their social media accounts primarily because they had not yet been prompted by the platforms to verify their age. It raised concerns about numerous “poor practices” that these companies reportedly engage in, such as allowing children to repeatedly attempt age verification methods even after declaring themselves as underage. The eSafety Commissioner also criticized the use of facial age estimation, noting that the method has higher error rates for users near the 16-year-old threshold. The report claimed that some platforms may have been aware that users aged 14 or 15 could receive “false” results indicating they were over 16.

In response to these allegations, Meta stated that it is committed to complying with the social media ban and is working with eSafety and the Australian government. The company acknowledged that accurately determining age online is a challenge for the entire industry, particularly at the age-16 boundary, and suggested that robust age verification and parental approval at the app store and operating system level would be a more effective approach.

The Australian government is continuing its investigation and collecting evidence to determine whether to impose fines against the non-compliant companies. As debates continue about how to safeguard children in an increasingly digital world, this situation raises broader questions regarding the responsibilities of tech companies in protecting vulnerable users.
