Consumption: Your Futurist Internet
With an increasing number of children on the internet, anyone who has or knows a child today is fully aware of how limited parental controls are. We are far from having no moderation of children’s content, but plenty slips through the cracks. The internet exposes children to media that is borderline, if not outright, inappropriate, even when accessed through controlled means such as YouTube Kids. This says nothing of young teens who, although ready for somewhat more serious content, are released from the loose limits set by parental controls to access content far beyond what is appropriate for their age group. There are also issues with whom children can communicate. Currently, on many social media apps, there is no limit on who can connect with whom, and no screening before contact is made. As it stands, pre-teens can be (and often are) put into situations where they are in direct contact with adults they do not know. These children can be coerced into communicating with these adults, leaving lasting harm. Then there is the lack of strict limits on screen time, something current parental controls largely ignore.
There are certainly plenty of things that get in the way of strict parental controls. Often content looks appropriate but contains decidedly adult themes, as occurred during the “Elsagate” scandal. That says nothing of educational videos on mature topics, such as sex, which are perfectly fine in general but not for young children. Currently, the computational (and financial) resources needed to review the sheer volume of content uploaded to YouTube do not exist. One must also consider that the companies behind products teens consume en masse, such as TikTok, Instagram, or Twitter, may be resistant to limiting screen time. Their business models depend on retaining users in order to serve ads or collect information for advertisers. Limiting screen time means limiting revenue, which is inconceivable for corporations driven by profit.
Thus, I would like to introduce the “internet for kids”. It regulates content at two points: before and after it is published. Before publishing, content creators specifically designate which age group their content is appropriate for. Note that this covers all media: parental controls are currently limited to the websites children can visit or the videos they can watch, but in the future creators will tag images, songs, and so on. The tagging system will require a degree of honesty, which can be guaranteed either by legal and financial repercussions or by AI/machine learning that checks and rates the media after publication. These automated checks could combine with existing parameters, information about users/consumers, and custom tags to continuously monitor content released on the internet, thus providing age-appropriate access to media with a high level of accuracy. The “internet for kids” should also control with whom (and how) children can communicate. Eventually, providing your accurate age and identity to producers and regulators will become a ubiquitous part of using the internet, making it easier to collect the data needed to protect young children from inappropriate interactions with adults.
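To make the tagging idea concrete, the sketch below shows one way such a filter might combine a creator-declared age tag, an automated rating, and a parent’s custom blocks. It is a minimal illustration under my own assumptions: the names (ContentTags, ChildProfile, is_viewable) and the rule that the stricter of the two ratings wins are hypothetical, not part of any existing system.

```python
from dataclasses import dataclass

# Hypothetical tag schema: every piece of media carries a creator-declared
# minimum age plus an automated rating produced by a separate classifier.
@dataclass
class ContentTags:
    creator_min_age: int      # age group the creator says the content is for
    model_min_age: int        # minimum age estimated by an ML classifier
    custom_tags: frozenset    # e.g. {"violence", "mature-themes"}

@dataclass
class ChildProfile:
    verified_age: int         # age confirmed by the parent or a regulator
    blocked_tags: frozenset   # topics the parent has chosen to block

def is_viewable(content: ContentTags, child: ChildProfile) -> bool:
    """Allow content only if both the creator's tag and the automated rating
    agree the child is old enough, and no parent-blocked tag is present."""
    required_age = max(content.creator_min_age, content.model_min_age)
    if child.verified_age < required_age:
        return False
    return not (content.custom_tags & child.blocked_tags)

# Example: a 10-year-old whose parent blocks "mature-themes"
video = ContentTags(creator_min_age=7, model_min_age=13,
                    custom_tags=frozenset({"mature-themes"}))
viewer = ChildProfile(verified_age=10, blocked_tags=frozenset({"mature-themes"}))
print(is_viewable(video, viewer))  # False: the stricter automated rating wins
```

The key design choice in this sketch is that the creator’s self-declared tag never loosens the automated rating; it can only tighten it, which is one way the honesty problem described above might be mitigated.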
To properly consider the impact of the “internet for kids”, let us turn to the circuit of culture. Introduced by Stuart Hall in his textbook Representation, it is a framework for studying culture by breaking it down into five aspects that interact with one another to describe a cultural artifact’s use and development. The five aspects are representation, identity, regulation, production, and consumption. For the “internet for kids”, I focus on consumption. Consumption concerns users: how they use a product, its effects on an individual’s development, and the financial decisions involved in consuming it.
As it relates to the element of consumption, the consumers of the “internet for kids” will primarily be parents who would like to limit the type or amount of content their children experience. As I will discuss later, we can also expect governments or organizations to use this technology for their own ends. The ones affected, by contrast, would be children and teens up to about 16 or 17 years old, whose content the service will moderate for as long as the initial consumer is willing to use it. The “internet for kids” would initially be a paid product, considering the substantial resources that would go into the technology behind it. Depending on the capabilities of the time, I expect it to be expensive at first and to become more affordable as processing power increases. Once the cost of providing the service can be offset by its average use, it becomes ubiquitous in children’s online experience. I therefore expect the “internet for kids” to be marketed to these parents and governments: its release and usage will initially be quite limited, but as time goes on, I expect it to be advertised to a broader range of people.
This implementation makes the internet more accessible for children and allows parents to spend less effort moderating the content their children consume. It also gives children access in a way conducive to their development: rather than potentially stumbling onto very inappropriate content or contacting adults they should have no contact with, they can enjoy being kids while still participating on the internet. One can also imagine a function that gradually lifts parental controls as children mature, teaching media and internet literacy skills so that they have all the tools they need to be productive netizens. One underrated benefit is that creators can publish content without fear of backlash from parents. Once content is preemptively tagged, the onus falls on parents and the platform to properly regulate its consumption. For instance, if content is tagged as age-inappropriate, its creator should face no complaints when a child consumes it anyway (say, through a parent’s phone).
Of course, as with any technology, there will be drawbacks, especially around who decides what is and is not appropriate. Consider a parent who is anti-LGBTQ+ and puts checks in place that prevent their children from accessing needed information or resources about those identities and the community. To the parent, the content is morally objectionable and age-inappropriate, even if the information itself is harmless, and this unnecessary restriction can bring undue harm to the child. One might think the solution is to pass this off to independent regulators, but the concern then becomes who funds and controls those regulators and their decision process. I mentioned earlier that we can expect governments to use these parental controls as well. In that case, they can invoke “inappropriateness” to hide dissent or history. This already happens today; broad content moderation would not prevent it, only potentially enhance it.
However, since children are already on the internet and will be for all of their adult lives, I believe it is silly to expect them to self-regulate the content they consume. Children are naturally curious and, without proper experience or guidance, will not understand why watching certain videos is inappropriate. Thus, there needs to be some regulatory function that allows children to participate on the internet without opening them up to dire risks.