Image by Nik on Unsplash

BY RACHEL BARR, KAT WILLIAMS, & SCOTT R. STROUD

[PDF VERSION]

The birth of digital technology has undoubtedly altered many aspects of the childhood experience. In particular, with recent and rapid advancements in AI and recommendation algorithms, parental supervision and social media content controls have struggled to keep pace with what children can access online. Due to lax content-moderation regulations, algorithms often expose children “to inappropriate information [and] extensive forms of cyberbullying and harassment,” on top of promoting dangerous material such as self-harm and “pro-anorexia” content (Jaffer, 2019; Hill, 2023). This lack of restriction on social media algorithms raises ethical concerns about what children are exposed to across various sites. However, the benefits of AI-driven algorithms cannot go unacknowledged. Without algorithms, most of the internet would remain unstructured and highly disorganized, making it far less useful and considerably harder for ordinary users to navigate (Kim, 2017). Additionally, AI and algorithms lower the cost and time required to curate accessible information and relevant news for specific audiences (Kim, 2017).

While AI-driven algorithms were designed with adult audiences in mind, much of social media’s user base skews young. A survey reported by The Guardian found that 97% of children under 12 years old use some form of social media, with many admitting that they worry about the effects of reinforcing algorithms on their mental health (Hill, 2023). One child was quoted as saying that they developed body image issues through constant self-comparison on “‘sites like TikTok, [where] the only people you see are gorgeous due to the algorithms’” (Hill, 2023). As algorithms become more advanced and personalized, the risks compound: research by the Center for Countering Digital Hate (CCDH) concluded that “TikTok’s ‘recommendation algorithm’ pushes self-harm and eating disorder content to teenagers within minutes of them expressing interests in the topics” (Hill, 2023). In response, a spokesperson for TikTok argued that the findings did not accurately reflect users’ experience on the platform, stating that the company “regularly consult[s] with health experts, remove[s] violations of our policies and provide[s] access to supportive resources for anyone in need” (Hill, 2023).
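To make this dynamic concrete, the sketch below is a deliberately simplified, hypothetical model of an engagement-driven feed ranker; it is not TikTok’s or any platform’s actual system, and every name in it is invented for illustration. It shows the basic feedback loop the CCDH research describes: each interaction raises a topic’s score, and the feed then surfaces more of that topic.

```python
# Hypothetical, heavily simplified sketch of an engagement-driven feed
# ranker. No real platform's recommendation system is this simple.
from collections import defaultdict

class ToyFeedRanker:
    def __init__(self):
        # Learned affinity between this user and each content topic.
        self.affinity = defaultdict(float)

    def record_engagement(self, topic, weight=1.0):
        # Every like, repost, or long view nudges the topic's score upward.
        self.affinity[topic] += weight

    def rank(self, candidates):
        # Sort candidate posts (post_id, topic) by learned affinity, so the
        # feed increasingly mirrors whatever the user has engaged with.
        return [post_id for post_id, topic in
                sorted(candidates, key=lambda c: self.affinity[c[1]],
                       reverse=True)]

ranker = ToyFeedRanker()
for _ in range(3):
    ranker.record_engagement("sensitive-topic")  # repeated engagement
ranker.record_engagement("sports")

feed = ranker.rank([("post1", "sports"),
                    ("post2", "sensitive-topic"),
                    ("post3", "cooking")])
print(feed)  # prints ['post2', 'post1', 'post3']
```

Even in this toy model, a handful of interactions with a sensitive topic is enough to push that topic to the top of the feed, which is the reinforcement loop critics of personalized feeds point to.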

One devastating example of the harm algorithms can inflict on children is the tragic story of Molly Russell, a 14-year-old who took her own life after her social media algorithms took a dark turn and consistently filled her feed with content related to “suicide, self-harm, and depression,” over 2,000 pieces of which she reposted or liked (Najib, 2022). The inquest into her death revealed that several posts she interacted with violated Instagram’s regulations, yet they still appeared on the young teen’s feed (Najib, 2022). In response to the inquest and the harmful content Russell was constantly exposed to on Pinterest and Instagram, a spokesperson for Instagram’s parent company Meta replied that the company is “committed to ensuring that Instagram is a positive experience for everyone, particularly teenagers” (Najib, 2022).

Nevertheless, AI-driven algorithms provide multiple benefits to society. For example, algorithms allow ordinary people to maximize the efficiency of the internet in their personal and professional lives, promoting equal access to information for all rather than for any particular sector of the population (Jaffer, 2019). Algorithms have also been shown to spread diverse information and perspectives, promoting a more tolerant and culturally aware society (Golino, 2021). Additionally, algorithms have helped raise awareness about a number of important topics, including true crime stories, environmental news, and complex subjects such as financial literacy. Finally, without algorithms, the internet would be highly disorganized and borderline useless for anyone without a technological background. Thus, it can be argued that, in addition to enabling more advanced marketing techniques, algorithms have helped level the playing field so that everyone has access to relevant information.

While the tech companies and developers responsible for pushing algorithms onto the feeds of social media users are aware of the potential dangers that arise when youth access unregulated content, the mechanisms for controlling this harm have not kept pace with the ethical dilemmas that personalized feeds create. Indeed, the benefits of algorithms have made accessing relevant and essential information much more straightforward, and algorithms facilitate the rapid spread of pressing news to adults and children alike. However, the lack of comprehensive regulation of algorithms concerning children’s content consumption, in the face of rising mental health issues and dangerous behaviors among youth, gives rise to ethical concerns about algorithmically curated content feeds.


Discussion Questions

  1. To what extent, if any, are social media corporations responsible for harmful content that appears on children’s social media feeds? If they are not responsible, who is?
  2. Who bears ethical responsibility when an algorithm promotes harmful content to social media users?
  3. What are some strategies that social media sites could put in place to avoid the damaging effects of AI-driven algorithms?
  4. At what point, if at all, does censorship of social media content encroach on freedom of speech? In particular, does censorship of a minor’s feed cross a boundary relating to freedom of speech?


Further Information

Golino, M. A. (2021, April 24). “Algorithms in Social Media Platforms.” Institute for Internet and the Just Society. Available at: https://www.internetjustsociety.org/algorithms-in-social-media-platforms

Hill, A. (2023, January 1). “Social media triggers children to dislike their own bodies, says study.” The Guardian. Available at: https://www.theguardian.com/society/2023/jan/01/social-media-triggers-children-to-dislike-their-own-bodies-says-study

Jaffer, M. (2019). “Challenges and Risks in Designing Algorithms and Platforms for Children.” Journal of Design and Science. Available at: https://jods.mitpress.mit.edu/pub/m21ninrj

Kim, S. A. (2017). “Social Media Algorithms: Why you see what you see.” Georgetown Law Technology Review. Available at: https://georgetownlawtechreview.org/social-media-algorithms-why-you-see-what-you-see/GLTR-12-2017/

Najib, S. (2022, October 2). “Coroner rules British teen Molly Russell died by suicide after suffering from ‘effects of online content’.” People. Available at: https://people.com/health/coroner-rules-british-teenager-molly-russell-died-by-suicide-after-suffering-effects-online-content/


This case was supported by funding from the John S. and James L. Knight Foundation. These cases can be used in unmodified PDF form in classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement.