Abstract
Digital parenting today is shaped by shifting guidelines, moralised discourse, and often unstable institutional advice, leaving families navigating uncertainty without meaningful support. Drawing on governmentality theory and media studies, this piece examines how screentime discourse can position parents as primarily responsible for risks that are, in part, produced by systems outside the home. Using the Australian policy reversal on YouTube as a case study, it highlights how inconsistent guidance can erode parental confidence and at times sustain cycles of shame and surveillance. Rather than consistently addressing structural factors such as platform design and algorithmic amplification, media and policy narratives displace responsibility onto families. The commentary argues for a reframing of digital parenting as a relational practice embedded within sociotechnical systems, rather than a purely private moral challenge. It calls for clear, co-designed, and contextually relevant guidance developed collaboratively by platforms, educators, and public health bodies. By centring collective care and shared responsibility, the piece advocates moving beyond compliance-focussed directives towards frameworks that empower parents as partners in children’s digital wellbeing. In doing so, it invites broader reflection on how digital governance, media narratives, and health communication can more justly support families navigating complex digital environments.
Parenting in the digital age is shaped less by clear guidance than by shifting and often contradictory expectations that leave parents navigating moralised uncertainty. The reversal of YouTube’s status as a child-friendly platform is illustrative, showing how evolving advice can produce new forms of parental confusion and moral judgement, and how digital parenting discourse often functions as a mode of governance.
From conception onwards, caregivers encounter directives to limit screen time, avoid certain applications, and supervise all digital content. While concerns about ‘screen time’ and ‘online risk’ often overlap, they derive from distinct traditions: the former rooted in health and developmental discourses, the latter in safety discourses about content, contact, and conduct. Both position parents as responsible for regulating digital engagement. These recommendations provide structure yet can also operate as moral imperatives that frame digital devices primarily as sources of harm, positioning parents as the gatekeepers of developmental outcomes. Although screentime guidelines have shifted towards more context-sensitive frameworks, earlier prohibitive messages persist. This persistence reflects media narratives, institutional inertia, and broader cultural anxieties.
The recent reversal in assessments of YouTube highlights some of this instability. In 2024, YouTube was presented as a valuable educational resource, and exempted from proposed age restrictions (Rowland, 2024). Just months later, the eSafety Commissioner warned of children’s exposure to harmful content (Inman Grant, 2025). The platform was simultaneously framed as beneficial and harmful, leaving parents without clear direction. These inconsistencies arise not only from cultural expectations but from regulatory communication itself, eroding parental self-efficacy. Rather than supporting parents to make contextually informed choices, such shifts generate uncertainty and moral scrutiny.
This raises important questions about how parental agency might be recognised relationally. If digital parenting is understood as embedded within sociotechnical systems, then shared responsibility and collective forms of care become possible, rather than a continued reliance on household-level vigilance. This commentary invites further reflection on how responsibility for navigating children’s media environments is constructed – and unevenly distributed – across families, institutions, and platforms.
Technopanic
Technopanic narratives frame screens as inherently harmful (Marwick, 2008), obscuring the commercial infrastructures and governance failures that shape children’s digital experiences (Lupton and Williamson, 2017). Screens are often framed as threats to be contained, not tools to be critically engaged with. Technopanic is sustained both by media narratives and governance failures. Headlines warning of ‘TikTok brain’ or ‘screen zombies’ simplify complex sociotechnical risks.
The YouTube reversal within Australia’s policy landscape also highlights the contradictions at the heart of technopanic governance, with risk profiles shifting with platform economies and content trends (Auxier et al., 2023). Algorithms can shape and intensify interests, nudging users towards increasingly extreme or emotionally charged content (Wiard et al., 2022). Data from the Keeping Kids Safe Online survey (eSafety Commissioner, 2025), cited by the eSafety Commissioner in support of the YouTube account ban for minors, indicated that approximately 70% of children aged 10–15 reported having encountered harmful content on the platform. Reported exposure included misogynistic material, pro-eating disorder communities, violent content, and dangerous viral challenges.
These reversals destabilise parental trust, making consistent digital practices difficult to sustain. If national safety bodies cannot offer consistent guidance, how can parents feel confident in their everyday digital choices? The uncertainty here extends beyond how much time children spend online to what kinds of digital environments they encounter, highlighting how screen time and online risk intersect but are not synonymous. Should YouTube be seen as a digital library or a vector of misogyny? The answer, it seems, depends on which part of the governance apparatus is speaking and when. This reflects a broader pattern in technopanic discourse, where solutions are framed in terms of personal responsibility rather than systemic reform. Media literacy initiatives and parental controls are proposed as primary interventions, while platform accountability, data monetisation, and the attention economy remain under-examined (Chaudron et al., 2018; Willett and Wheeler, 2021).
As digital cultures evolve, so too do the risks. Harmful content is not static; it adapts with user trends, technological affordances, and commercial imperatives. New risks emerge faster than platforms can moderate them. Content moderation struggles to keep pace, while algorithms continue to amplify material that is the most engaging, regardless of content risk. For parents, this means navigating conflicting messages about whether a platform is a resource or a risk and not just monitoring current risks but anticipating future ones – an impossible demand (Livingstone and Third, 2017).
As Livingstone and Blum-Ross (2020) argue, digital parenting is an ongoing negotiation of imperfect options. Parents are expected to safeguard wellbeing, support learning, and maintain social connection, all while preserving family harmony. Technopanic narratives flatten these complexities into moral judgements about parental adequacy, and they persist because they displace structural responsibility onto families, reinforcing a moral economy of parenting that privileges individual vigilance over systemic accountability.
Moral economy of screentime advice
Screentime guidelines are rarely neutral health recommendations; they often act as moral scripts that shape how parents govern children’s digital lives. Drawing on Foucault’s (1977) concept of governmentality, these guidelines extend beyond expert advice to become diffuse tools of social regulation. They disperse authority into the micro-decisions of daily parenting, frequently framing compliance as evidence of care and positioning deviation as a sign of neglect.
Early guidelines, such as the American Academy of Pediatrics’ (AAP, 1999) directive of ‘no screens before two’ (i.e. before the age of 2 years), were rooted in precautionary logic, but have become cultural shorthand for responsible caregiving. Despite limited empirical foundation, these recommendations gained cultural traction due to their simplicity and moral clarity. Although the AAP (2021) and the World Health Organization (WHO, 2019) have since shifted towards content quality and co-engagement, earlier messages persist in cultural discourse as symbolic markers of ‘good’ parenting.
Lupton and Williamson (2017) describe this as digital governmentality, where parents internalise external surveillance and become self-monitoring subjects. They not only regulate their children’s technology use but also assess their own parenting and that of others, creating something akin to a panoptic system of mutual judgement. This culture of surveillance often generates guilt and shame when caregiving realities, such as work demands or caregiving pressures, conflict with idealised standards.
More fundamentally, these dynamics deflect attention from systemic responsibility. Despite growing concern about platform design features that promote harmful content and exploit user attention, regulatory responses tend to focus on educating parents rather than holding platforms accountable. Structural issues such as algorithmic amplification or commercial exploitation of children’s data can be reframed as parental shortcomings.
Shame, surveillance, and the performance of good parenting
In contemporary parenting culture, it is not only actions but the visible performance of certainty that marks parental adequacy. Langton et al. (2025) show how first-time parents feel pressure to anticipate future digital risks before their child even begins using devices. This anticipatory labour reflects broader ideals of what constitutes good parenting in the digital age: proactive, vigilant, and risk averse. At the same time, tools such as schools’ digital reporting systems, online health assessments, and commercial parental monitoring apps contribute to a datafied model of caregiving. As Mascheroni (2018) notes, these systems position parenting as something to be measured and tracked, shifting emphasis away from relational engagement towards quantifiable indicators of responsibility.
This labour is gendered, with mothers often positioned as moral guardians of childhood (Chen and Hou, 2024). Balleys (2022) shows that mothers often become the default managers of children’s digital engagement, responsible for monitoring screen use and performing the emotional and moral accountability attached to it. As Heaselgrave (2025) notes, digital mothers are expected to manage risk, model restraint, and frame their digital choices as deliberate acts of care. This aligns with the broader argument that responsibility for digital wellbeing is gendered and relational, shaped by social norms, rather than simply individual choice.
Platforms such as Instagram, Facebook and TikTok act as informal arenas of surveillance, where parenting practices are shared and judged. These platforms promote visibility, inviting parents to publicly perform their caregiving for an imagined audience of peers, professionals, and broader communities. Shame, in this context, is not solely a private emotion but a relational process (Ahmed, 2014), shaped through peer observation and institutional expectations. Whether a parent hands a child a tablet in a waiting room or posts a photo of their child watching TV, the implication is clear: you could have done better. Yet the standard of ‘better’ is shifting and ambiguous, especially when expert guidance is contradictory or changes midstream.
These pressures contribute to the erosion of parental confidence (Milford et al., 2024). Parents must demonstrate balance in environments that are structurally imbalanced. Parenting culture is increasingly shaped by what Lee et al. (2010) describe as a morality of precaution, where parents are expected to predict and prevent harm before it occurs. The parent who permits screen use is seen as both resourceful and reckless, attentive and neglectful.
Layered visibility intensifies these contradictions. Parents are encouraged to model healthy digital behaviours – co-using devices, engaging in critical conversations, and selecting high-quality content. Yet, these same behaviours can be subject to scrutiny by schools, health services, and online communities. Parents are often praised for creating media-rich learning environments yet simultaneously criticised for allowing too much access. Parenting thus often manifests as a performance of balance in a context that is structurally imbalanced.
Wall (2022) describes how contemporary digital parenting discourse extends the logic of intensive parenting into the digital space. Parents are tasked with shaping their children into competent, self-regulating digital citizens while simultaneously monitoring them for risk. This reframes parenting from a practice of care to a form of anticipatory governance. Simultaneously, these discourses erase children’s agency, framing them as passive recipients of risk rather than as active digital participants.
Furthermore, expectations are unequally distributed. Blum-Ross and Livingstone (2016) argue that digital parenting is shaped by socioeconomic, cultural, and structural inequalities. Access to high-quality content, time for reflective parenting and supportive work environments are privileges not available to all. Yet dominant narratives of responsible digital parenting often assume a middle-class ideal: educated, resourced, and time rich. Families facing financial pressures, cultural marginalisation, or additional caregiving demands are disproportionately pathologised. This is particularly true for Indigenous families and minority groups, whose parenting practices are often subjected to deficit framing and heightened scrutiny (Green et al., 2021; Zhang and Livingstone, 2019). Surveillance also extends beyond peer judgement to institutional practice.
Within this model, the idealised image of the digital parent is one who not only limits screen time, but one who actively tracks, reports, and reflects on it, demonstrating their moral worth through metrics. Self-efficacy becomes equated with surveillance compliance: the more you monitor, the better you parent. Yet for many families, particularly those facing structural disadvantage, this expectation is neither achievable nor empowering. It shifts responsibility away from systemic factors, such as platform design, advertising, and algorithmic curation, that more significantly shape children’s digital environments.
Shame and confusion
If digital parenting is to be supported rather than surveilled, guidance must move beyond compliance-focussed directives towards contextually grounded, relational approaches. Research shows that parents experience shame and self-stigma when their digital parenting practices deviate from idealised norms, even when those practices reflect care, pragmatism, and necessity (Langton et al., 2025; Milford et al., 2025). This shame is not simply an individual feeling, but a social process shaped by interactions with peers, educators, and health professionals. This dynamic echoes longstanding patterns in media culture. Willett (2015) shows how discourses surrounding children’s digital media routinely construct ‘good parenting’ as the careful evaluation and monitoring of platforms. Parental choice becomes a visible moral performance reinforcing hierarchies of adequacy and judgement. This alignment demonstrates that the burden of digital responsibility is not new but reproduced through evolving media forms and governance narratives.
Research by Milford et al. (2024) shows that parents’ confidence in managing digital media is linked to how much screentime children have, yet the evidence base remains shaped by health agendas that prioritise minimisation over meaningful engagement. This casts screentime primarily as a health risk to be controlled, obscuring its potential as a site of connection, leisure, or emotional regulation. Parents facilitating video calls with distant relatives or engaging with educational apps may find their practices excluded from the mainstream narratives of ‘healthy’ screen use, even when such uses align with developmental needs.
Many health-based frameworks struggle to fully account for the nuanced ways in which parents integrate digital technologies into family life. Screens are not only used for entertainment or convenience; they are used for connection, caregiving, learning, and respite. Yet these lived practices are often absent from screentime discourse, which continues to promote universal restrictions that do not reflect the diversity of family circumstances, work demands, or cultural practices. Willett and Wheeler (2021) and Wall (2022) argue that digital parenting is embedded within broader familial care work, but this complexity remains invisible in dominant narratives of responsible parenting. Schools may advocate for the use of digital learning platforms while health agencies warn against screen exposure (Straker et al., 2018). Parents are left to reconcile these competing priorities, often without acknowledgement of the compromises involved.
The result is a climate where uncertainty and shame prevail. Rather than supporting parents with clear, consistent, and flexible guidance, much public discourse tends to reinforce anxiety and moral judgement. In this way, screentime advice functions less as a tool for empowerment and more as a mechanism of regulation and social control (Wall, 2022), disciplining parents into alignment with shifting standards and contributing to the emotional exhaustion many families experience. Digital parenting is a dynamic practice shaped by expert advice and the lived realities of family life. These realities involve negotiation, compromise, and context-sensitive care. Guidelines that fail to reflect this risk becoming less relevant or even harmful.
Reducing shame and confusion requires a broader cultural shift beyond technopanic narratives towards collective care, structural accountability, and recognition of both parental and child agency. As long as parenting continues to be framed primarily as a private moral challenge, parents will continue to shoulder disproportionate blame for harms produced by systems beyond their control. Resisting this dynamic requires more than clearer guidance; it requires a reorientation of responsibility. By locating failure in the home, some institutions avoid confronting their own inconsistent messaging and the broader digital architectures they help sustain.
Conclusion
In today’s volatile digital landscape, parents navigate not only their child(ren)’s digital lives but shifting, often contradictory, expectations of ‘good’ digital parenting. This instability erodes parental confidence and displaces systemic failures onto the family. If governance bodies cannot sustain consistent, evidence-based guidance, why should parents be left to carry the burden of digital risk? Supporting parental self-efficacy requires rethinking how guidance is framed and delivered.
Responsibility for digital wellbeing must be rebalanced. While parents play a vital role, they are not the sole or primary line of defence against digital harms, but partners in a broader sociotechnical system. Platform developers, educators, and policymakers shape children’s digital environments through design choice, content curation, and policy frameworks. Platforms influence children’s online experience through algorithmic design and monetisation strategies that prioritise engagement over safety. Schools and policymakers shape digital ecologies through technology procurement, curriculum integration, and safety frameworks. Recognising these actors as co-responsible requires shifting digital wellbeing efforts from household-level vigilance to system accountability.
Recent developments suggest that this balance of responsibility may be beginning to shift. Proposed social media age restrictions, together with mounting public debate around platform accountability, signal a tentative move towards recognising that digital wellbeing cannot rest solely with families. While such measures remain untested, they mark an important rhetorical and policy turn where platforms and governments are increasingly expected to play an active role in safeguarding young people. While proposals like these reflect growing concern, they risk oversimplifying the problem. Without clear communication and ongoing parental engagement, such policies may foster false confidence rather than meaningful safety. Restrictive policies cannot replace the relational work of co-engagement, dialogue, and scaffolding of children’s independent decision-making.
The goal is not to remove parents from the equation. Supporting parents means providing them with tools that are responsive to their unique family contexts. Guidance must be co-developed with families, reflect everyday realities, and acknowledge diverse forms of care. This advice must acknowledge the complexities of family life, resisting binary notions of right and wrong, and recognise the systems that shape digital risk. It means acknowledging that parents are not digital engineers or media regulators; they are caregivers doing their best in a rapidly changing environment. While educating parents and children is valuable, these efforts alone cannot redress the structural power imbalances that shape the digital landscape.
Creating safe and equitable digital ecosystems demands shared responsibility, an expectation reflected in emerging policy measures (yet in practice the day-to-day enforcement and policing of these measures will likely fall to parents and caregivers). Whether through rethinking platform accountability or reimagining public discourse, this work begins by recognising parents not as moral scapegoats but as partners in a broader system. If parenting is framed as a public performance, and parents themselves remain subjects of surveillance, shame can continue to govern digital family life. What it means to support digital parenting, and who bears responsibility for that support, remains an open question – one that media, technology, and society must answer together.
Footnotes
Ethical considerations
No new empirical data were collected for this commentary. As no human participants were involved, ethics approval was not required. The analysis draws solely on published and publicly available sources.
Consent to participate
Because no participant data were generated or used, informed consent was not applicable.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The author is employed with the ARC Centre of Excellence for the Digital Child. Publication costs were supported by Curtin University through its CAUL Open Access agreement. No additional financial support was received for the research, authorship, or preparation of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement
No new data were generated or analysed in the preparation of this commentary.
Other identifying information
The author is a postdoctoral researcher affiliated with the ARC Centre of Excellence for the Digital Child, Curtin University and Edith Cowan University. The views expressed are those of the author.
