Governments around the world are intensifying efforts to regulate children’s access to social media platforms amid growing concerns over the platforms’ impact on mental health, online safety, and data privacy. As outlined in the recent Economic Times article “ET Explains: Global push to restrict children’s social media use,” several countries are proposing or enacting legislation aimed at minimizing the harms minors face on digital platforms.
The push stems from mounting empirical evidence linking excessive social media use among minors to increased risks of depression, anxiety, cyberbullying, and disrupted sleep. Policymakers are responding with a broad range of regulatory tools, from age-verification mandates and parental-consent requirements to outright bans on social media accounts for users under 16.
In the United States, lawmakers in various states are taking divergent approaches. Some are advocating for stricter parental controls and age-restricted access, while others call for enhanced transparency from tech companies regarding algorithms and content moderation practices. The Federal Trade Commission is also under pressure to more strictly enforce children’s online privacy protections under the 1998 Children’s Online Privacy Protection Act (COPPA), which critics argue is outdated in today’s digital environment.
Europe, meanwhile, is aiming to bolster existing protections through the Digital Services Act, which includes requirements for large tech firms to assess and mitigate systemic risks posed to minors. France recently passed legislation requiring parental authorization for children under 15 to open social media accounts. Ireland and the Netherlands are considering similar policies.
In Asia, regulators are also moving toward stricter rules. China already limits the use of social media and video platforms by minors under 18 to a maximum of 40 minutes per day, enforced through real-name verification systems. South Korea and Japan are reportedly exploring age-verification technologies to ensure compliance with age-based restrictions.
Tech companies, while publicly supporting safer online experiences for children, have raised concerns about some of the proposed regulations, particularly around implementation challenges and implications for user privacy. They argue that mandatory age checks tied to government IDs could inadvertently expose users to privacy risks and lead to over-collection of sensitive data.
The global policy shift comes at a time of heightened scrutiny of big tech companies, which are increasingly being held accountable for the content hosted on their platforms and for the design of features that may encourage compulsive use. Social media giants including Meta, TikTok, and Snapchat have introduced a range of tools intended to help parents monitor usage and to alert users to excessive screen time. However, critics argue that such voluntary measures are insufficient to address systemic issues rooted in the platforms’ design and business models.
As the Economic Times article notes, executives from several major firms have been summoned to testify before legislative bodies in the U.S. and abroad, as regulators weigh long-term safeguards for younger users. These hearings have underscored bipartisan momentum behind child-focused internet safety reforms, even while debates continue around the balance between civil liberties, data protection, and the state’s role in overseeing digital behavior.
While legal frameworks and technological capabilities differ across jurisdictions, the global consensus appears to be shifting toward more proactive guardrails on how children interact with digital ecosystems. Whether through legislative mandates, regulatory oversight, or industry-led initiatives, stakeholders are converging on the view that the online safety of minors can no longer be an afterthought.
