What will it take to get social media companies to protect children?

Controversial sites ask.fm and Yik Yak are trying to improve child protection, but many social media companies stand accused of not taking online safety seriously, writes the Guardian in its review of child protection on social media.

Social media is a double-edged sword. Children can use it to learn, play, create, communicate and connect. More than ever, and at increasingly younger ages, children chat, upload, share, ask and comment across multiple media platforms.

Yet these same sites and apps can facilitate harm towards young users, leaving them vulnerable to bullying, porn, grooming, terror threats and child abuse.

Earlier in June, Peter Wanless, chief executive of the National Society for the Prevention of Cruelty to Children (NSPCC), wrote a letter to the UK government and Facebook criticizing social media companies for not taking online safety seriously enough.

Social media businesses are governed by the laws of the country in which they are headquartered, but they are also expected to comply with local laws wherever they operate. Online businesses must comply with criminal law; what is illegal in real life is illegal in the virtual world. New laws have also been introduced to tackle online crimes such as harassment and, most recently, revenge porn.

The government-commissioned UK Council for Child Internet Safety (UKCCIS) has developed industry guidelines. These recommend a wide range of safety measures, including content moderation, age verification and reporting services, but they are not legally binding.

Claire Lilley, head of child safety online at the NSPCC, doesn’t think this self-regulatory approach is working, or that companies are doing enough to protect young users. She points out that even Twitter is problematic: officially it is only open to users aged 13 and over, yet it is searchable by anyone. Unlike Facebook and Instagram, Twitter does not censor pornographic content, and many aspiring porn stars use the site. An innocent search for someone who happens to share a budding porn star’s name can surface very graphic content.

Due to the US Children's Online Privacy Protection Act (COPPA), many social media sites have an age restriction of 13, but methods of enforcement vary. Some sites drop a cookie when someone underage attempts to register, making it impossible to try again unless cookies are reset. Others simply ask users for their date of birth to check their eligibility.
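To make that cookie-based gate concrete, here is a minimal Python sketch using Flask. It is illustrative only: the cookie name, the route and the one-year expiry are assumptions, not any particular site's implementation.

```python
# Minimal sketch of a cookie-based age gate (illustrative only: the
# cookie name "age_gate_failed" and the /register route are assumptions,
# not any particular site's implementation).
from datetime import date

from flask import Flask, make_response, request

app = Flask(__name__)
MINIMUM_AGE = 13

def age_on(today: date, born: date) -> int:
    """Whole years between a birthdate and today."""
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

@app.route("/register", methods=["POST"])
def register():
    # A previous underage attempt dropped a cookie; block retries
    # until the user resets their cookies.
    if request.cookies.get("age_gate_failed"):
        return "Registration unavailable.", 403

    born = date.fromisoformat(request.form["birthdate"])  # e.g. "2011-06-01"
    if age_on(date.today(), born) < MINIMUM_AGE:
        resp = make_response("You must be 13 or older to register.", 403)
        # Persist the failure so simply re-entering a different
        # date of birth does not work.
        resp.set_cookie("age_gate_failed", "1", max_age=60 * 60 * 24 * 365)
        return resp

    return "Registered."
```

The weakness of the approach is visible in the code itself: the block lasts only as long as the cookie, so clearing cookies or switching browsers defeats it.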

Once on a site, most social media organizations have a reporting mechanism in place for flagging inappropriate content, but Ken Corish, online safety manager at South West Grid for Learning and the UK Safer Internet Centre, says these vary in effectiveness.

“One of the most effective ways [to flag inappropriate content] is through anonymous reporting and through what Facebook put together, which is social reporting,” he says, referring to the infrastructure that allows you to directly contact a person who shared something you didn’t like or to share your concerns with a trusted adult.

Corish thinks that although social media businesses have a responsibility to make their services intuitively safe, the media is sometimes too quick to blame them for what he sees as a wider problem.

“In terms of industry responsibility, we’ve always found that industry is very keen to engage,” he says, adding that parents share the responsibility. “Knowing what your child does … It’s not to do with technology; it’s to do with behavior.”

One social networking site that has turned its policy around is ask.fm, which in 2013 was linked to several teen suicides and faced calls for a boycott from David Cameron. The site was subsequently bought by Ask.com in 2014, which in turn is owned by IAC. IAC also owns Tinder and OkCupid, two of the companies that Lilley says make it easy for underage users to sign up. Since the acquisition, ask.fm has kicked out its founders and invested in a safety centre, a safety advisory board, moderation and new terms and policies.

Annie Mullins, a former government advisor and now ask.fm’s director of safety in Europe, says education is important for this new generation of tech-savvy young people, who can make their voices heard like never before through the responsible use of social media.

“We have a big challenge now between parents, government and tech companies to really help [young people] understand that they are accountable for how they behave,” she says.

A company that is learning that lesson is Yik Yak, a location-based anonymous messaging service that launched in 2013. It quickly attracted controversy over racial bullying, threats of violence and terrorist threats on US campuses.

Having raised its age limit and introduced algorithms that flag warning messages when certain words are used, Yik Yak now also enforces geofencing. The technology blocks users from accessing the app within a fenced-off area, something that a Yik Yak spokesperson says applies to 95% of US middle and high schools.
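As an illustration of how a geofence check of this kind might work, here is a minimal Python sketch that denies access when a device's reported coordinates fall within a set radius of a school. The coordinates and radii are hypothetical, not Yik Yak's actual data or implementation.

```python
# Minimal sketch of a geofence check: deny access when the device's
# reported location falls inside any fenced-off area. The fences
# below are hypothetical examples, not real school locations.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Hypothetical fences: (centre latitude, centre longitude, radius in metres)
SCHOOL_FENCES = [
    (51.5014, -0.1419, 300),  # example school A
    (51.5099, -0.1337, 250),  # example school B
]

def is_blocked(lat: float, lon: float) -> bool:
    """Return True if the location sits inside any fenced-off area."""
    return any(
        haversine_m(lat, lon, c_lat, c_lon) <= radius
        for c_lat, c_lon, radius in SCHOOL_FENCES
    )

print(is_blocked(51.5015, -0.1420))  # True: inside fence A
print(is_blocked(51.4800, -0.2000))  # False: outside all fences
```

Real deployments typically draw polygons around school grounds rather than simple circles, but the principle of comparing the device's location against a list of fenced areas before serving content is the same.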

Newer apps often lag behind when it comes to clear policies on protecting children. Video messaging apps such as ooVoo are an emerging concern for parents, according to the NSPCC.

“It just feels like child protection is an add-on a lot of the time. A lot of the companies are not thinking about it at the development stage of the product,” says Lilley, and Mullins agrees.

“They’ve got to pay attention at the beginning of developing these wonderful new apps, particularly if they are targeted at the teen market,” says Mullins, who is currently working with the UKCCIS on producing a set of child safety guidelines for apps. Time will tell if they will shield children from that double-edged sword.

Photo: Chris Willson/Alamy
