Abstract
This Brief provides an overview of Australia’s social media minimum age law, including the harms it is intended to address, the services that are captured, the obligation imposed on in-scope services, and the privacy implications for users of social media.
The Prime Minister Anthony Albanese declared on 8 November 2024 that ‘social media is doing harm to our kids. I’m calling time on it.’ 1 His government’s social media minimum age legislation (Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth)) was the first of its kind to pass anywhere in the world.
As 2024 drew to a close, an American publication, POLITICO Tech, published a conversation with Australia’s eSafety Commissioner, Julie Inman Grant, which opened with a discussion of what had led to the social media minimum age legislation: ‘The government felt that the safety changes and improvements across the technology industry had been incremental rather than monumental. And I think there’s been a global movement that, frankly, started with Jonathan Haidt’s book, The Anxious Generation. One of our premiers … his wife read the book, and he came out and said, I want parental consent for kids at the age of 14. And then another state premier said, Well, I want to do this, but ban kids at the age of 16. And then every state and territory had a different plan. And then what happened was the head of the opposition party, Peter Dutton, said, We have an upcoming election. If the coalition comes in, we will institute a ban in 100 days. And then News Corp, which is a very powerful force in the Australian media landscape, was running a campaign around keeping kids safe. And so there was huge political momentum.’ 2
This is how the events unfolded: With just one week left of the Parliamentary sitting year, on Thursday 21 November 2024, the then Communications Minister Michelle Rowland introduced the Online Safety Amendment (Social Media Minimum Age) Bill 2024 into the House of Representatives.
The object of the legislation was very broadly defined as ‘to reduce the risk of harm to age-restricted users from certain kinds of social media platforms’. 3
The Explanatory Memorandum indicated that the harms the social media minimum age framework is intended to mitigate include the risks arising from:
• exposure to harmful content, including content that is detrimental to physical and mental health, such as content relating to drug abuse, suicide, and self-harm
• harmful features, such as persistent notifications and alerts, which have been found to have a negative impact on sleep, stress levels, and attention. 4
The Bill was referred to the Senate Environment and Communications Legislation Committee for report by Tuesday 26 November 2024. The closing date for submissions was 22 November – the day after the Bill was introduced into Parliament. Despite the 24-hour deadline, hundreds of submissions were provided to the Committee.
The Bills Digest prepared by the Parliamentary Library posed the question whether the ‘measure is proportionate to the risk’ and noted: ‘There is limited evidence to suggest that a blanket ban to prohibit children from using social media is the most advantageous solution to addressing online harms.’ 5
The Bill passed the Senate on 28 November 2024 with bipartisan support – only a week after it had been introduced into Parliament.
What is the minimum age obligation?
The Online Safety Act 2021 (Cth) was amended to introduce an obligation on ‘age-restricted social media platforms’, a newly defined category in the Act, to take ‘reasonable steps’ to prevent children under 16 years of age from having an account on their platforms.
The focus on accounts means that under 16s will still be able to view any content that is available to users who are not logged into the service. The Explanatory Memorandum noted that, as an example, the ‘obligation would not affect the current practice of users viewing content on YouTube without first signing in’. 6
The obligation is on the platforms. There is no penalty imposed on young people who manage to circumvent age assurance processes (or on their parents, carers or educators).
The social media minimum age requirement is to commence on 10 December 2025.
Which platforms must comply with the minimum age obligation?
The Act introduced a new and deliberately expansive definition of ‘age-restricted social media platform’, which is defined to mean:
an electronic service that satisfies the following conditions:
(i) the sole purpose, or a significant purpose, of the service is to enable online social interaction between 2 or more end-users;
(ii) the service allows end-users to link to, or interact with, some or all of the other end-users;
(iii) the service allows end-users to post material on the service. 7
Well-known platforms captured by this definition include Facebook, X (formerly Twitter), Instagram, Snapchat, TikTok and YouTube.
The definition also allows for significant ministerial discretion as the Minister can make legislative rules to exclude a specific service or class of service from the definition of ‘age-restricted social media platform’. Before doing so, the Minister must seek advice from the eSafety Commissioner and have regard to that advice.
The Explanatory Memorandum which accompanied the social media minimum age Bill provided that, following enactment of the legislation, the government would undertake public consultation on the draft rules, which proposed to exclude certain classes of services. 8
This public consultation on the draft rules did not take place.
On 30 July 2025, the Prime Minister together with Communications Minister, Anika Wells, held a press conference 9 coinciding with the tabling of legislative rules which set out classes of services that are excluded from the definition of ‘age-restricted social media platform’.
The announcement that captured the most media attention was the Minister’s decision not to exclude YouTube from the scope of the minimum age obligation. 10 This decision was aligned with advice from the eSafety Commissioner. While eSafety acknowledged that YouTube has educational and other beneficial uses, it was concerned that the popularity of YouTube among children, coupled with the platform’s use of features such as autoplay, endless scroll and algorithmic recommendations, created a risk of excessive consumption; excluding YouTube would therefore not have been consistent with the purpose of the social media minimum age obligation to reduce the risk of harm.
Classes of services that are excluded from the definition of ‘age-restricted social media platform’ by the legislative rules are:
• services that have the sole or primary purpose of enabling users to:
- communicate by messaging, email, voice or video-calling
- play online games with other users
- share information (such as reviews, technical support or advice) about products or services
- engage in professional networking or development
• services that have the sole or primary purpose of supporting the education or health of users
• services that have a significant purpose of facilitating communication between:
- educational institutions and students or students’ families
- providers of health care and people using those providers’ services. 11
How will the obligation be implemented by social media platforms?
The detail of ‘how’ the obligation will be implemented by platforms is unknown at the time of writing. ‘Reasonable steps’ is not a defined term in the Act.
In November 2024, the government awarded a tender to the Age Check Certification Scheme to undertake an Age Assurance Technology Trial to determine the effectiveness of technologies which will be considered as options to prevent access to online pornography by children and young people under the age of 18, and to limit access to social media platforms for those under 16 years of age. 12
Age assurance technologies include methods that verify a user’s identity credentials to accurately determine their age, as well as methods that estimate the age of a user – such as using biometric markers or digital usage patterns.
The results of this Trial were taken into account by the eSafety Commissioner in the regulatory guidance issued in September 2025 to assist platforms in understanding what constitute reasonable steps to prevent under-16s from having accounts on their services.
The Trial found that ‘a wide range of approaches exist, but there is no one-size-fits-all solution for all contexts’. 13
eSafety does not require specific types of age assurance to be employed. The regulatory guidance highlights that there ‘is no one-size fits all approach to what constitutes the taking of reasonable steps’ and that ‘it is reasonable for platforms to take a layered approach across the user journey’, implementing a range of measures including taking reasonable steps to:
- determine which accounts are held by under-16s and deactivate or remove those accounts with ‘kindness, care and clear communication’
- prevent under-16s from creating new accounts
- mitigate circumvention of measures. 14
There are heavy penalties of up to $49.5 million per breach for companies which fail to take reasonable steps.
Implications for all social media users
The social media minimum age obligation does not only impact children. It is likely to have privacy and security implications for everyone in Australia who has a social media account.
Platforms will collect or use personal information in undertaking age assurance.
Privacy concerns were raised by a number of senators and the legislation was amended to provide that platforms must not collect government-issued identification unless a reasonable alternative method of age assurance is also offered that does not require the collection of government-issued identification. That is, a platform can never require an end user to give government-issued identification as the sole method of age assurance. 15
The Act also provides that personal information collected for the purpose of complying with the minimum age obligation must be destroyed after using it or disclosing it for the purpose it was collected. 16
Worryingly, some vendors of age assurance technology are not heeding privacy and security by design principles. The Age Assurance Technology Trial included a finding that ‘[s]ome providers were found to be building tools to enable regulators, law enforcement or Coroners to retrace the actions taken by individuals to verify their age which could lead to increased risk of privacy breaches due to unnecessary and disproportionate collection and retention of data’. 17
By the end of December 2027, the Communications Minister must initiate an independent review of the operation of the law.
There will be intense interest in Australia and abroad on the outcomes of this social media minimum age obligation. Will the social media minimum age make a ‘significantly positive difference’ to the wellbeing of young Australians, as suggested by the Communications Minister? 18 It is imperative that there is ongoing research and consultation with young people across Australia to understand the impact of the implementation of the social media minimum age obligation.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
