“AI Needs Regulation” by Chelsea Yangiyoh

Artificial intelligence serves as a convenient tool for many. From AI digital assistants and home automation systems to its potential use in assisting healthcare professionals and criminal case workers, it's a revolutionary piece of technology. And like all technology, it is advancing. AI was never going to just play music and send text messages forever. It expanded to chatbots and content creation, and these new capabilities took off immensely. AI-generated images, writing, and videos bombarded internet feeds because of how bizarre they were. The power to create imagery of anything from a single prompt made way for countless memes, and commentary on these humorous and unrealistic images racked up views like never before. But as expected, this groundbreaking technology reached the hands of ill-intentioned people, and AI content couldn't stay humorous and light-hearted. The emergence of AI deepfakes is a prime example of why certain AI software should not be available to the general public.

The issues with AI-generated content have far surpassed cheating in educational settings and theft of data, art, and writing. AI's progress with prompt-based video creation has made way for hyper-realistic deepfakes. Deepfakes are images, videos, or audio recordings that have been manipulated with artificial intelligence. As long as a copy of a person's likeness is available, they can be made to say or do absolutely anything. This is a convenient tool for the horrifying rise of deepfake nude imagery. For instance, a New York Times article featured the story of some 10th grade girls at Westfield High School in New Jersey who had AI-fabricated nude images of themselves shared throughout their school. The devastation these girls must have felt cannot properly be expressed through words, and unfortunately, they aren't the only ones. The article's title describes this issue as an "epidemic," as multiple schools across the nation and the world have attested to having AI-generated nude photos and videos of students circulating among their students. And there are probably many more instances that have gone undocumented.

Deepfakes, unfortunately, are not limited to sexually harassing unsuspecting people. They are also used for blackmail, fraud, and scams. A CNN article featured a 2024 deepfake incident at a British engineering firm that led to the transfer of $25 million to bank accounts in Hong Kong. A staff member had joined a video conference with a false Chief Financial Officer and other deepfake employees who looked and sounded exactly like the colleagues he knew. Incidents like this are possible because so many people share clips of themselves and their voices online, which is advantageous for cyber scammers: they can use snippets of a person's voice to target that individual, or their family and friends, for money. Another deepfake incident was featured in an AP News article, in which an athletic director at Maryland's Pikesville High School used AI to create audio recordings of his principal's fake voice making racist and antisemitic comments, in an attempt to get the principal fired for having commented on the director's poor work performance. Deepfake generative AI has made way for horrible incidents like these, affecting organizations, careers, and people's personal lives.

The distress that the people and organizations involved in these incidents must have felt, and may still feel, is hard to put into words. And to make matters worse, incidents like these can rarely be avoided, because according to security.org, 3 seconds of audio is sometimes all that's needed to produce an 85% voice match between the original and the clone. The same website states that human subjects identified high-quality deepfakes only 24.5% of the time. And experts who worked on the athletic director's case expressed that with AI becoming more powerful, "the ability to detect it may lag behind without more resources." Deepfakes are difficult to identify, and thus they're difficult to deter.

All this goes to show that the capabilities of AI-generated content, and the people and organizations that have access to this technology, need to be regulated. There is no reason why someone should be able to fabricate fake images and videos of another person, especially without their consent. There need to be regulations concerning generative AI so that AI can be the positive and helpful piece of technology it once was.


Author’s Note:
The disregard of AI's negative uses and potential is really concerning and needs to be spoken about.

Chelsea Yangiyoh | 15 | Kansas, USA