As the media landscape evolves, with AI tools such as ChatGPT leading the way, we are witnessing an exciting time for digital exploration. However, this evolution raises a crucial question: what steps are social media giants like Instagram and TikTok taking to protect children online?
What is a 'Teen Account'?
In a recent update, Meta responded to these concerns by introducing 'Teen Accounts' for all Instagram users under 18, featuring enhanced safety tools such as default private settings and parental controls, including daily time limits.
These changes, aimed at protecting children from online risks, will roll out in the UK, US, Australia and Canada, with plans to expand to the EU later this year. While the measures primarily target users aged 15 and under, those aged 16 and 17 can opt out of the features without parental consent.
What is the Government doing?
Meta's moves have been welcomed, but many argue that the changes have not gone far enough. Growing scepticism around self-regulation by social media companies led to the UK's Online Safety Act 2023, which focuses on protecting children from harmful content. Set to take effect in early 2025, the Act will clarify platforms' responsibilities, with Ofcom issuing relevant codes of practice.
Meta's announcement followed Australia's plans to introduce age limits for social media use, likely between 14 and 16. While Meta denies any link, it appears platforms are bracing for global regulations focused on child protection.
UK Technology Secretary Peter Kyle is closely monitoring Australia's developments, stressing the importance of enforceable age bans. This comes as age verification technology faces scrutiny over potential workarounds, such as VPNs, and data protection concerns related to collecting children's ID.
Another proposal, the Safer Phones Bill, seeks to raise the “internet adulthood” age from 13 to 16, requiring parental consent for younger users. This could reduce data collection and limit exposure to addictive content.
It's important to recognise that government action on child safety online extends beyond legislation. In October, 14 US states sued TikTok, alleging it fosters addictive behaviour in children, potentially leading to serious psychological and physiological harms, such as anxiety, depression, and body dysmorphia.
This underscores the ongoing challenge for businesses: balancing the drive to maximise screen time for profit against the need to keep young users safe. Meta's introduction of parental controls, including screen time limits, can be viewed as a response to this mounting legal pressure.
Looking Forward: Safeguarding as a Priority
In the same month that a UK school became the first to ban smartphones during the school day, Meta introduced Teen Accounts, reflecting the growing focus on children's online safety. Both the UK and Australian governments are closely monitoring these developments, with Australia planning to introduce age restrictions for younger teens.
Technology companies must recognise the complex challenge of safeguarding children online. The public demands stronger protections, and Meta’s Teen Accounts are part of its mission to empower parents and protect young users. Platforms hosting child users should reassure parents by improving safety features.
However, companies should also expect tighter regulation, as governments face pressure to act. Ofcom’s Chief Executive, in line with the Online Safety Act 2023, has stressed that social media platforms are responsible for keeping children safe, with the regulator ready to intervene.
If the UK follows Australia’s lead on age restrictions, social media platforms will need to ensure compliance with age verification regulations.
Amid the overwhelming attention on this issue, one thing is clear: social media companies must be ready to improve protections for children.
This article was originally published in The Scotsman and can be read on their website.