Children in the Digital Crossfire: The Legal Architecture of Australia’s Online Safety Legislation and Its Transnational Normative Implications
I. INTRODUCTION
Few legislative interventions in recent years have attracted as broad a coalition of imitators as Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth) (‘the 2024 Act’).[1] Passed by the Australian Parliament in November 2024 and brought into operational effect in December 2025, the statute sets a minimum age of sixteen for social media accounts and prohibits platforms from permitting account registration by younger children.[2] The obligation to prevent such registration lies with the platform; the child bears no independent statutory duty, nor does the parent. This structural inversion (from individual obligation to corporate obligation) is, as this article contends, the legislation’s most conceptually significant feature, and the one most frequently elided in the enthusiasm of other governments to adopt its surface contours.
The global regulatory field is in ferment. France, Denmark, Spain, Indonesia, and Malaysia have all announced review processes examining analogous restrictions.[3] Within India, the states of Goa and Andhra Pradesh have initiated detailed studies of the Australian model; the latter has convened a ministerial panel, chaired by the state’s IT and education minister, to assess feasibility.[4] In December 2025, the Madras High Court directed India’s Union Government to consider Australia-style legislative restrictions, observing that existing mechanisms were insufficient to shield minors from harmful online content.[5] The normative diffusion of Australia’s approach is, therefore, not merely academic; it is a live regulatory phenomenon with substantial real-world stakes.
This article proceeds in four parts. Part II excavates the legal architecture of the 2024 Act, examining its definitional choices, enforcement mechanisms, and the burden-allocation logic that underwrites the entire scheme. Part III identifies three structural tensions inherent in the legislation — the age-verification paradox, the definitional elasticity problem, and the constitutional equity concern — each of which presents material risks to the statute’s long-term integrity. Part IV analyses the particular challenges facing India, where federal-state jurisdictional fault lines complicate any straightforward transposition. Part V draws conclusions about the conditions under which the Australian model can constitute a genuine advance for children’s rights rather than a performative exercise in legislative symbolism.
II. THE LEGAL ARCHITECTURE OF THE 2024 ACT
The 2024 Act amends the Online Safety Act 2021 (Cth), which established the Office of the eSafety Commissioner and created a graduated framework of obligations for digital platforms operating in Australia.[6] The 2024 amendment introduces what it terms “age-restricted social media platforms” and mandates that providers of such platforms take “reasonable steps” (a formulation that immediately invites regulatory elaboration) to prevent under-sixteens from holding accounts.[7]
Three features of the legislative design deserve close scrutiny. First, the statute does not prescribe a specific age-verification technology. This regulatory agnosticism was a deliberate drafting choice, responsive to the concern that tying the statute to a particular technological method would produce rapid obsolescence.[8] The consequence, however, is that the “reasonable steps” standard is formally open-ended, delegating the practical substance of the obligation to the eSafety Commissioner’s regulatory guidance. Second, the Act imposes civil penalties of up to 150,000 penalty units (at the current value of A$330 per unit, approximately A$49.5 million) on platforms that systemically fail to comply, a deterrent conceived, in the spirit of the European Union’s General Data Protection Regulation, to prevent commercially powerful actors from treating non-compliance as an acceptable cost of operation.[9] Third, the scheme, operating through the Act’s definitional provisions and the accompanying legislative rules, excludes a notable set of services, including Discord, GitHub, Pinterest, Roblox, and Steam, from the age-restricted designation, raising immediate questions about the coherence of the regulatory perimeter.[10]
The exclusions are not, on their face, arbitrary. The Explanatory Memorandum accompanying the Bill, and the regulatory materials that followed it, indicate that excluded services were assessed either as presenting lower harm profiles, as serving predominantly adult or professional purposes, or as possessing intrinsic age-gatekeeping characteristics.[11] Yet the harm profiles of certain excluded platforms suggest that the exclusion logic is at least partially driven by political economy rather than pure risk assessment; Discord, in particular, has been extensively documented as a space where grooming and exploitation occur.[12] The legislative architecture, in other words, is not ideologically innocent.
III. THREE STRUCTURAL TENSIONS
The first and most acute tension is what this article terms the age-verification paradox. Effective enforcement of an age ban requires the platform to verify the age of prospective users. Age verification at scale, however, requires the collection and processing of sensitive personal data: precisely the kind of processing from which the legislation purports to protect children. The Senate Environment and Communications Legislation Committee acknowledged this paradox in its scrutiny report on the Bill, noting that any technically robust age-assurance system would itself generate novel privacy risks.[13] The statute’s response is to leave resolution to the eSafety Commissioner’s guidelines and the future development of privacy-preserving verification technologies.
This deferral is not irrational — the legislative process cannot pause for technology to mature — but it does mean that the 2024 Act is, in a meaningful sense, a statutory promise whose redemption depends on technological and regulatory developments that remain uncertain. When Meta began notifying Australian teenagers that their accounts would be closed in anticipation of the Act’s operational effect, the implementation difficulties of accurate age determination became immediately visible.[14] The company’s acknowledgment that users are “not always truthful at sign-up” is a frank admission that any age-verification system resting on user self-declaration will systematically fail.
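To make the paradox concrete, the following sketch (offered purely as an illustration, not as a description of any system the Act or the eSafety Commissioner has endorsed) shows one way a privacy-preserving age-assurance flow might separate verification from the platform. All names in the Python sketch are hypothetical, and the shared demonstration key stands in for what a real deployment would implement with asymmetric signatures from an accredited verifier: a third party checks age once and issues a signed attestation carrying only the over-sixteen claim, so the platform never receives identity documents.

    import hmac
    import hashlib
    import json
    import secrets
    import time

    # Demo key shared between verifier and platform. A real scheme would use
    # asymmetric signatures, so the platform could verify attestations but
    # never mint them.
    VERIFIER_KEY = secrets.token_bytes(32)

    def issue_attestation(over_16: bool) -> dict:
        # Verifier side: after checking identity documents, attest ONLY to the
        # age threshold. No name, date of birth, or document number leaves here.
        claim = {
            "age_over_16": over_16,
            "issued_at": int(time.time()),
            "nonce": secrets.token_hex(8),  # makes each attestation unique
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        signature = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
        return {"claim": claim, "signature": signature}

    def platform_accepts(attestation: dict) -> bool:
        # Platform side: check integrity and the threshold claim; learn nothing else.
        payload = json.dumps(attestation["claim"], sort_keys=True).encode()
        expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, attestation["signature"])
                and attestation["claim"]["age_over_16"])

    token = issue_attestation(over_16=True)
    print(platform_accepts(token))  # True: registration may proceed

Even this minimal design exposes the residual difficulty that animates the paradox: some entity must still collect and process the underlying identity data, and the statute’s workability turns on whether the verifier-platform separation can be made trustworthy at population scale, which is precisely the question the Act defers.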
The second tension concerns definitional elasticity. The designation of a service as an age-restricted social media platform under the 2024 Act turns on purposive criteria, centrally whether a sole or significant purpose of the service is to enable online social interaction.[15] The question of what constitutes a “significant purpose” of a multi-functional platform is deeply resistant to stable legal determination. A platform that functions principally as a video-sharing service may, through algorithmic design and social affordances, operate as a social network for a large proportion of its minor users. The definitional boundary between a “social media service” and a “video platform” or “messaging application” is not merely a legal question; it is a design question, and platforms retain substantial capacity to manipulate their own design characteristics in ways that affect their regulatory classification. The risk of regulatory arbitrage through platform redesign is, therefore, inherent in the Act’s structure.
The third tension is a constitutional equity concern that has received insufficient attention. The High Court of Australia has long recognised that legislation burdening communication on governmental and political matters may attract scrutiny under the implied freedom of political communication,[16] a freedom that constrains legislative power whether the immediate burden falls on corporations or on individuals. While the 2024 Act targets corporate platforms rather than individual children, its practical effect of preventing children from accessing services that, for many, constitute a primary medium of social participation and civic engagement raises a genuine question: does a complete exclusion from nominated platforms impose an unjustified burden on the implied freedom as it operates in favour of minors as rights-bearing persons? No challenge on these grounds has yet been mounted, but the theoretical foundation exists.
IV. INDIA’S REGULATORY LANDSCAPE: FEDERAL COMPLEXITY AND CONSTITUTIONAL SPECIFICITY
India presents a particularly instructive case study in the transnational diffusion of the Australian model precisely because the formal conditions for straightforward transposition are absent. As Kazim Rizvi of The Dialogue think tank has correctly observed, internet governance in India is a matter of federal legislative competence, falling under the Union List.[17] The state governments of Goa and Andhra Pradesh can study the Australian law with the most earnest intent, but they cannot amend the Information Technology Act 2000 or the Digital Personal Data Protection Act 2023 (DPDPA), the two primary federal instruments governing digital intermediaries and personal data in India; those statutes are Parliament’s alone to amend. State-level action, absent enabling central legislation, is constitutionally precluded.
The DPDPA 2023 does contain provisions protective of children’s data, requiring verifiable parental consent before processing personal data of individuals under eighteen and prohibiting behavioural targeting and monitoring of minors.[18] However, the operational rules implementing these protections are being phased in through 2027, meaning the statutory protection, though enacted, is not yet operative. This temporal gap between legislative promise and regulatory reality creates precisely the window of vulnerability that the Madras High Court identified in its December 2025 observations.[19]
Any Indian legislative intervention modelled on the 2024 Act would need to reckon with two constitutional specificities absent from the Australian context. First, the right to freedom of expression under Article 19(1)(a) of the Indian Constitution, including the right to receive and access information, has been construed by the Supreme Court of India as extending to digital speech and digital access, with restrictions permissible only on the narrow grounds enumerated in Article 19(2).[20] A blanket social media ban for all persons under sixteen would need to satisfy the proportionality doctrine, which requires, among other things, that restrictions be the least restrictive means of achieving a legitimate aim. Second, the right to privacy, as recognised in KS Puttaswamy v Union of India,[21] includes an informational autonomy dimension that arguably encompasses the right of developing adolescents to form social connections and develop identity without state-imposed digital exclusion.
V. CONCLUSIONS: CONDITIONS FOR LEGITIMATE TRANSPOSITION
Australia’s 2024 Act is best understood not as a concluded solution to the problem of children’s online safety, but as a legislative wager: a bet that platform-borne responsibility, combined with civil penalties large enough to deter even the most commercially powerful platforms, will produce behavioural change adequate to the protective purpose. The wager may be well-placed, but it is not guaranteed to pay out, and its odds depend on regulatory, technological, and judicial variables that the statute does not control.
For jurisdictions considering transposition of the model, three conditions appear necessary for the enterprise to constitute a genuine rather than merely performative advance. First, the age-verification mechanism must be resolved before, not after, the statute takes operational effect; the delegation of this question to future guidelines risks producing a prohibition that is formally enacted but practically unenforceable. Second, the definitional perimeter of regulated platforms must be drawn with sufficient rigour to foreclose redesign-based regulatory arbitrage, ideally through an outcomes-based definition keyed to functional harm profiles rather than platform genre. Third, and most fundamentally, any blanket exclusion of minors from digital social space must be accompanied by alternative provision of age-appropriate digital participation, failing which the legislation risks replicating, in the digital domain, the exclusionary social dynamics it ostensibly seeks to overcome.
The transnational appeal of the Australian model is real and, on its face, intelligible. The welfare of children in the digital environment is a genuinely pressing concern, and the precedent of locating primary responsibility in platform operators rather than individual families reflects a mature and defensible understanding of where structural power in the digital economy actually resides. But legal transplantation without constitutional acclimatisation is one of comparative law’s most persistent pathologies. Jurisdictions that import the Australian model’s conclusions without engaging its tensions — and particularly jurisdictions with constitutional structures, federal architectures, and jurisprudential cultures as distinct as India’s — may find that the promise of the model dissolves in the complexity of its implementation.
[1] Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth). The Act received Royal Assent on 10 December 2024.
[2] Online Safety Act 2021 (Cth) pt 4A, as inserted by the 2024 Act; Department of Infrastructure, Transport, Regional Development, Communications and the Arts, ‘Commencement of Social Media Minimum Age Provisions’ (Press Release, December 2025).
[3] Jagmeet Singh, ‘Indian States Weigh Australia-Style Ban on Social Media for Children’ TechCrunch (27 January 2026) <https://techcrunch.com> accessed 17 February 2026.
[4] ibid. See also statement of Andhra Pradesh IT and Education Minister Nara Lokesh at the World Economic Forum, Davos, January 2026.
[5] S Vijayakumar v Union of India & Ors, WP(MD) No 23323 of 2018 (Madras High Court, December 2025); see Ruchi Sharma, ‘India Must Consider Australia’s Model for Child Internet Safety, Says HC’ Latest Laws (26 December 2025) <https://latestlaws.com> accessed 17 February 2026.
[6] Online Safety Act 2021 (Cth); see Bridget Haire and Lena Seng, ‘The Online Safety Act 2021 and the Architecture of Platform Accountability’ (2022) 45 University of New South Wales Law Journal 1422.
[7] Online Safety Act 2021 (Cth) pt 4A (the reasonable-steps obligation), as inserted by the 2024 Act.
[8] Explanatory Memorandum, Online Safety Amendment (Social Media Minimum Age) Bill 2024 (Cth) [42]–[47].
[9] Regulation (EU) 2016/679 (General Data Protection Regulation) [2016] OJ L119/1, art 83. The influence of GDPR penalty structuring on Australasian and Asian legislative design is examined in Graham Greenleaf, ‘Global Data Privacy Laws 2023’ (2023) 181 Privacy Laws & Business International Report 14.
[10] Online Safety Act 2021 (Cth) s 63C (definition of age-restricted social media platform), as inserted by the 2024 Act; Online Safety (Age-Restricted Social Media Platforms) Rules 2025 (Cth) (excluding specified classes of services).
[11] Explanatory Memorandum (n 8) [78]–[95].
[12] See Australian Institute of Criminology, ‘Online Child Sexual Exploitation: Examining Platform Risk Profiles’ (Research Report, 2023) 31–34; Internet Watch Foundation, ‘Annual Report 2024’ 22.
[13] Senate Environment and Communications Legislation Committee, ‘Online Safety Amendment (Social Media Minimum Age) Bill 2024 [Provisions]’ (Report, November 2024) [3.14]–[3.29].
[14] Singh (n 3).
[15] Online Safety Act 2021 (Cth) s 63C(1), as inserted by the 2024 Act.
[16] Australian Capital Television Pty Ltd v Commonwealth (1992) 177 CLR 106; Lange v Australian Broadcasting Corporation (1997) 189 CLR 520.
[17] Kazim Rizvi (founding director, The Dialogue) quoted in Singh (n 3). See also Constitution of India, Schedule VII, List I, Entry 31 (Posts and Telegraphs; Telephones, Wireless, Broadcasting and Other Like Forms of Communication).
[18] Digital Personal Data Protection Act 2023 (India) s 9.
[19] S Vijayakumar v Union of India & Ors (n 5).
[20] Shreya Singhal v Union of India (2015) 5 SCC 1; Anuradha Bhasin v Union of India (2020) 3 SCC 637. On digital access as a fundamental right see also Faheema Shirin RK v State of Kerala (2019) (Kerala HC).
[21] KS Puttaswamy (Privacy-9J) v Union of India (2017) 10 SCC 1 (Indian Supreme Court, Constitution Bench).
By: Preeti Badal
University Institute of Legal Studies
Panjab University, Chandigarh
B.Com LLB (Hons), 1st Year
