School bullies have a new weapon: artificial intelligence (AI).
Bullying, sadly, is nothing new. But the rise of AI is driving “massive issues with exclusion”, concerned parents have said – while experts warn it could worsen child sexual exploitation.
Some 59% of UK pupils aged seven to 17, and 79% of 13 to 17-year-old internet users, have used a generative AI tool in the last year.
This technology has many positive uses, from homework help to creativity prompts. But it has a dark underbelly.
Kate – a parent who did not want her last name used – told the Big Issue that her 10-year-old daughter came home crying after classmates fed her photo into an AI “looks rater”.
“On the site you can submit a picture of yourself or anyone else and it’ll use AI to rate every aspect of your appearance out of 10 while going into detail on your faults. It’s horrible,” she said.
“If you thought Instagram was doing damage to our kids’ self-image and self-esteem, you should see the effect this has. My daughter was crying because some of the boys in her class put her on the app and shared her score. It’s just awful… the number of issues with bullying has skyrocketed.”
Sadly, Kate’s story is the tip of the iceberg. As AI tools become easier to use, they create new avenues not only for bullying, but for sexual exploitation.
In November, the UK Safer Internet Centre (UKSIC) revealed that it has begun receiving reports from schools that children are using AI image generators to create indecent images of each other.
Such images – which are legally classified as child sexual abuse material – could create an “unprecedented safeguarding issue,” the centre warned. A child may generate an image to taunt a classmate, but it could then spread beyond their control and end up on dedicated abuse sites.
“Young people are not always aware of the seriousness of what they are doing, yet these types of harmful behaviours should be anticipated when new technologies, like AI generators, become more accessible to the public,” said David Wright, director at UKSIC.
“Although the case numbers are currently small, we are in the foothills and need to see steps being taken now, before schools become overwhelmed and the problem grows.”
Photoshopping fake images is nothing new. But advances in generative AI mean the images and videos are “more realistic than ever and easier to use”, said Dr Andrew Rogoyski, from the Surrey Institute for People-Centred Artificial Intelligence. “So, misuse of [generative] AI to generate deepfakes is likely to increase in the near-term.”
AI companies have guardrails to prevent misuse, Dr Rogoyski added – but their systems “aren’t perfect”.
“A lot of effort is being expended to improve these safeguards, partly because companies will be held to account and partly because there are reputational risks for companies that allow their systems to be misused,” he explained.
What can we do about bullying and AI abuse in schools?
As AI companies scramble to improve internal safeguards, advocates are calling for schools and government to take action.
The Safer Internet Centre urged schools to update monitoring systems to block illegal material on school devices. More broadly, it wants to see the government implement “more regulatory oversight of AI models.”
“We must see measures put in place to prevent the abuse of this technology,” said Emma Hardy, director at the Safer Internet Centre. “Right now, unchecked, unregulated AI is making children less safe.”
The Anti-Bullying Alliance echoed this, urging the government to compel companies to consider children before rolling out new technology.
“AI and deepfakes present new challenges, but also opportunities for proactive solutions,” said Martha Evans, director of the Anti-Bullying Alliance.
“We urge the government, through Ofcom, to embed a robust children’s safety Code of Practice in the wake of the Online Safety Act, forcing companies to consider children’s safety when developing new technologies.”
Unfortunately, the problem goes beyond the AI companies themselves. The pictures may be generated using AI, but they are spread via internet technologies that have existed for decades.
“There is a continuing argument about whether social media companies are responsible for what appears on their platforms, and should be treated as a publisher, or whether they are just a platform. Undoubtedly tech companies could and should do more to prevent sharing of abusive images,” said Rogoyski.
The nature of the internet makes offences like these hard to detect and prosecute. And broader still is the problem of attitudes in schools, Rogoyski added.
“Is the problem the digital tools or social attitudes and behaviours? Why are young people in schools using such tools for such purposes?” he asked.
“Is it simply ease of use or is there a more concerning erosion of knowing right from wrong?”