Can this technology put an end to bullying?

A young girl being bullied in class (Credit: Getty Images)

Breaking up with your first love is hard to do, but at the age of 18, it was a particularly traumatic experience for Nikki Mattocks. Rather than the clean break she had hoped for, she found herself being bombarded with hateful messages on social media from her ex-boyfriend’s friends. One even urged her to kill herself.

“I withdrew a lot. The messages made me so depressed and led to me taking an overdose,” says Mattocks. She is just one of millions of people around the world who have found themselves the victims of bullying. Even in our modern, progressive society, bullying is too often overlooked and commonly dismissed as a rite of passage, yet it affects between a fifth and a third of children at school. Adults suffer similar rates of harassment at work.

Yet research has shown that bullying can leave a lasting scar on people’s lives, causing long-term damage to their future health, wealth and relationships. And the increasing amount of time we spend online exposes us to forms of bullying that, while faceless, can be just as devastating. Young people subjected to cyberbullying suffer more from depression than non-victims and are at least twice as likely to self-harm and to attempt suicide.

Luckily for Mattocks, her break-up and the subsequent cyberbullying occurred just as she was about to start university. In this new environment, she was able to make new friends who helped her.

“It [cyberbullying] changed my outlook,” she says. “It made me a kinder, stronger person.” Mattocks now works as a mental health campaigner, helping others who face bullying. She believes more needs to be done to curb bullying online.

Social media has provided bullies with a new way to target their victims, but often those who they attack do not report these incidents (Credit: Getty Images)

But while our access to technology is increasing the potential for bullying – 59% of US teens say they have been bullied online – it could also help to stamp it out. Computers powered by artificial intelligence are now being deployed to spot and deal with cases of harassment.

“It is nearly impossible for human moderators to go through all posts manually to determine if there is a problem,” says Gilles Jacobs, a language researcher at Ghent University in Belgium. “AI is key to automating detection and moderation of bullying and trolling.”

His team trained a machine learning algorithm to spot words and phrases associated with bullying on the social media site AskFM, which allows users to ask and answer questions. Across a set of almost 114,000 English-language posts, it detected and blocked almost two-thirds of insults, and it proved more accurate than a simple keyword search. It still struggled with sarcastic remarks, however.
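
To give a flavour of how such a tool works – this is a simplified sketch, not the Ghent team’s actual model – a basic abusive-post classifier can be built with an off-the-shelf machine learning library such as Python’s scikit-learn, trained on posts that human moderators have already labelled. The example posts and labels below are invented:

```python
# A minimal sketch of a bullying-detection classifier (illustrative only).
# Assumes a labelled dataset of posts, where 1 = abusive and 0 = harmless.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training data; a real system would learn from many thousands of posts.
posts = [
    "you are so stupid nobody likes you",
    "great answer, thanks for sharing",
    "just go away, everyone hates you",
    "congratulations on your exam results!",
]
labels = [1, 0, 1, 0]

# Character n-grams help catch misspellings and deliberately obfuscated insults.
model = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("classifier", LogisticRegression()),
])
model.fit(posts, labels)

# Score a new post; a moderation system might hide it, or queue it for human
# review, if the predicted probability of abuse crosses a chosen threshold.
print(model.predict_proba(["ur so dumb lol"])[0][1])
```

A real moderation pipeline would also need to handle context, sarcasm and images – which is exactly where simple classifiers like this one fall short.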

Abusive speech is notoriously difficult to detect because people use offensive language for all sorts of reasons, and some of the nastiest comments do not use offensive words at all. Researchers at McGill University in Montreal, Canada, are training algorithms to detect hate speech by teaching them how particular communities on Reddit use specific words to target women, black people and those who are overweight.

The tool was able to pinpoint less obvious abuse, such as the word “animals”, which can be intended to have a dehumanising effect

“My findings suggest that we need individual hate-speech filters for separate targets of hate speech,” says Haji Saleem, who is one of those leading the research. Impressively, the tool was more accurate than one simply trained to spot keywords, and it was also able to pinpoint less obvious abuse, such as the word “animals”, which can be intended to have a dehumanising effect.
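
In code, “individual filters for separate targets” could simply mean training one classifier per target group, each on abuse aimed at that group, and flagging a post if any of them fires. The sketch below is illustrative only – the group names and example posts are invented, and it is not the McGill team’s model:

```python
# Sketch of per-target hate-speech filters: one classifier per target group,
# each trained on examples of abuse aimed at that group (data is invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def make_filter(texts, labels):
    """Train a small text classifier for one target of hate speech."""
    clf = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("classifier", LogisticRegression()),
    ])
    clf.fit(texts, labels)
    return clf

# Hypothetical per-group training sets (1 = hateful, 0 = benign).
training_data = {
    "misogyny": (["women are just animals", "great talk by the speaker"], [1, 0]),
    "fat-shaming": (["those whales shouldn't be on tv", "love this recipe"], [1, 0]),
}

filters = {group: make_filter(texts, labels)
           for group, (texts, labels) in training_data.items()}

# A post is flagged if any group-specific filter fires, so a word like
# "animals" can be caught in a hateful context without being banned outright.
post = "they're animals, not people"
print(any(f.predict([post])[0] == 1 for f in filters.values()))
```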

The exercise in detecting online bullying is far from merely academic. Take a social media giant like Instagram. One survey in 2017 found that 42% of teenage users had experienced bullying on Instagram, the highest rate of all the social media sites assessed in the study. In some extreme cases, distressed users have killed themselves. And it isn’t just teenagers who are being targeted – Queen guitarist Brian May is among those to have been attacked on Instagram.

“It’s made me look again at those stories of kids being bullied to the point of suicide by social media posts from their ‘friends’, who have turned on them,” May said at the time. “I now know firsthand what it’s like to feel you’re in a safe place, being relaxed and open and unguarded, and then, on a word, to suddenly be ripped into.”

Instagram is now using AI-powered text and image recognition to detect bullying in photos, videos and captions. The company has been using a “bullying filter” to hide toxic comments since 2017, but it recently began using machine learning to detect attacks on users’ appearance or character – in split-screen photographs, for example. It also looks for threats against individuals in photographs and captions.

Online bullies can target people from behind a veil of anonymity provided by the internet while victims can face hatred from complete strangers (Credit: Getty Images)

Instagram says that actively identifying and removing this material is a crucial measure as many victims of bullying do not report it themselves. It also allows action to be taken against those who repeatedly post offending content. Even with these measures, however, the most scheming bullies can still create anonymous “hate pages” to target their victims and send hurtful direct messages.

But bullying exists offline too, and in many forms. Recent revelations of sexual harassment within major technology firms in Silicon Valley have shone a light on how bullying and discrimination can impact people in the workplace. Almost half of women have experienced some form of discrimination while working in the European tech industry. Can technology offer a solution here too?

A time-stamped record of an incident, made around the time it occurred, could make it harder to cast doubt on evidence later drawn from memory

One attempt to do this is Spot – an intelligent chatbot that aims to help victims report their accounts of workplace harassment accurately and securely. It produces a time-stamped interview that a user can keep for themselves or submit to their employer, anonymously if necessary. The idea is to “turn a memory into evidence”, says Julia Shaw, a psychologist at University College London and co-creator of Spot.

A time-stamped record of an incident, made around the time it occurred, could make it harder to cast doubt on evidence later drawn from memory, as critics of Christine Blasey Ford attempted to do after she gave testimony against Brett Kavanaugh.

Another tool named Botler AI goes one step further by providing advice to people who have been sexually harassed. Trained on more than 300,000 US and Canadian court case documents, it uses natural language processing to assess whether a user has been a victim of sexual harassment in the eyes of the law, and generates an incident report, which a user can hand over to human resources or the police. The first version was live for six months and achieved 89% accuracy.

“One of our users was sexually assaulted by a politician and said the tool gave her the confidence she needed and empowered her to take action,” says Amir Moravej, Botler AI founder. “She began legal proceedings and the case is ongoing.”

Revelations about sexual harassment and discrimination at high-profile technology companies in Silicon Valley have triggered protests and walkouts by staff (Credit: Getty Images)

AI could not only help to stamp out bullying, it could save lives too. Around the world, someone takes their own life roughly every 40 seconds – more than 2,000 deaths each day. But predicting whether someone is at risk of suicide is notoriously difficult.

While factors such as someone’s background might offer some clues, no single risk factor is a strong predictor of suicide. Prediction is made even harder by the fact that mental health practitioners often have to weigh the evidence and assess risk within a five-minute phone call. But intelligent machines could help.

The algorithms were able to predict whether a patient would attempt to end their life in the week following an instance of self-harm

“AI can gather a lot of information and put it together quickly, which could be helpful in looking at multiple risk factors,” says Martina Di Simplicio, a clinical senior lecturer in psychiatry at Imperial College London in the UK.

Scientists at Vanderbilt University Medical Center and Florida State University trained machine learning algorithms to look at the health records of patients who self-harm. The algorithms were able to predict whether a patient would attempt to end their life in the week following an instance of self-harm, with an accuracy of 92%.
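
In outline – and this is a toy sketch with synthetic data, not the Vanderbilt and Florida State model – such a predictor treats each patient record as a row of routinely collected features and learns which combinations tend to precede a suicide attempt:

```python
# Toy sketch of predicting suicide-attempt risk from health-record features.
# The features and labels here are synthetic; a real model would use routinely
# collected clinical data such as age, diagnoses and prior admissions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row represents one patient seen after an instance of self-harm.
X = rng.random((500, 6))
# Label: did the patient attempt suicide in the following week? (synthetic)
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.3, 500) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Risk scores for held-out patients; in a clinical setting these would be used
# to prioritise cases for a specialist to review, not to make decisions alone.
scores = model.predict_proba(X_test)[:, 1]
print(round(roc_auc_score(y_test, scores), 2))
```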

“We can develop algorithms that rely only on data already collected routinely at point of care to predict risk of suicidal thoughts and behaviours,” says Colin Walsh, assistant professor of biomedical informatics at Vanderbilt University Medical Center in Nashville, Tennessee, who led the study.

While the study offers hope that mental health specialists will have another tool to help them protect those at risk in the future, there is work to be done.

“The algorithms developed in this study can fairly accurately address the question of who will attempt suicide, but not when someone will die,” say the researchers. “Although accurate knowledge of who is at risk of eventual suicide attempt is still critically important to inform clinical decisions about risk, it is not sufficient to determine imminent risk.”

Algorithms that search for signs of bullying on social media can help to highlight perpetrators so that they can be stopped (Credit: Getty Images)

Another study by researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, was able to identify individuals who were having suicidal thoughts with 91% accuracy. The researchers asked 34 participants to think about 30 specific concepts relating to positive or negative aspects of life and death while their brains were scanned with an fMRI machine. They then used a machine learning algorithm to spot “neural signatures” for these concepts.

The researchers discovered differences in how healthy and suicidal people thought about concepts including “death” and being “carefree”. By looking at these differences, the computer was able to discriminate with 94% accuracy between nine people experiencing suicidal thoughts who had made a suicide attempt and eight who had not.

“People with suicidal ideation have an activation of the emotion of shame, but that wasn’t so for the controls [healthy participants],” says Marcel Just, director of the Center for Cognitive Brain Imaging at Carnegie Mellon University. He believes that one day therapists could use this information to design a personalised treatment for someone having suicidal thoughts, perhaps working with them to feel less shame associated with death.

While such tailored treatments might sound futuristic, search and social media giants are already trying to identify people in crisis. When someone types a query into Google related to attempting suicide, for example, the search engine offers them the helpline of a charity such as the Samaritans at the top of the results.

Facebook last year began to use AI to identify posts from people who might be at risk of suicide. Other social media sites, including Instagram, have also begun exploring how AI can tackle the sharing of images of self-harm and suicide-related posts.

In serious cases, Facebook may contact local authorities and has worked with first responders to carry out more than 1,000 wellness checks so far

Facebook trained its algorithms to identify patterns of words – in both the main post and the comments that follow – that help confirm cases of suicidal expression. These are combined with other details, such as whether messages are posted in the early hours of the morning. All of this data is funnelled into another algorithm that works out whether a post should be reviewed by Facebook’s Community Operations team, which can raise the alarm if it believes someone is at risk.
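
Described in code, this is essentially a two-stage pipeline: one model scores the text of the post and its comments, and a second model combines those scores with contextual signals such as the time of posting. The sketch below illustrates the general idea with invented data and a placeholder text scorer – it is not Facebook’s actual system:

```python
# Illustrative two-stage triage pipeline (not Facebook's real system).
import numpy as np
from sklearn.linear_model import LogisticRegression

def text_risk_score(text: str) -> float:
    """Placeholder standing in for a trained text classifier; returns a rough
    probability that the text expresses suicidal distress."""
    distress_phrases = ["can't go on", "goodbye everyone", "no way out"]
    return float(any(phrase in text.lower() for phrase in distress_phrases))

# Hypothetical training rows: [post score, highest comment score, posted 00:00-05:00?]
X = np.array([
    [1.0, 1.0, 1.0],
    [1.0, 0.0, 1.0],
    [0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
])
y = np.array([1, 1, 0, 0])  # 1 = escalate to human reviewers

second_stage = LogisticRegression().fit(X, y)

post = "goodbye everyone, I can't go on"
comments = ["are you ok??", "please call me"]
features = [[
    text_risk_score(post),
    max(text_risk_score(c) for c in comments),
    1.0,  # posted in the early hours of the morning
]]
# A high combined score routes the post to a human review team, which decides
# whether to raise the alarm.
print(second_stage.predict_proba(features)[0][1])
```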

Social media posts and mobile phone data can identify mood changes that could help to predict if someone might try to take their own life (Credit: Getty Images)

In serious cases, Facebook may contact local authorities and has worked with first responders to carry out more than 1,000 wellness checks so far.

“We’re not doctors, and we’re not trying to make a mental health diagnosis,” explains Dan Muriello, an engineer on the team that produced the tools. “We’re trying to get information to the right people quickly.”

Facebook is not the only one analysing text and behaviour to predict whether someone may be experiencing mental health problems. Maria Liakata, an associate professor at the UK’s University of Warwick, is working on detecting mood changes from social media posts, text messages and mobile phone data.

“The idea is to be able to passively monitor… and predict mood changes and people at risk reliably,” she says. She hopes that the technology could be incorporated into an app that’s able to read messages on a user’s phone.

While this approach could raise privacy concerns, thousands of people are already willingly sharing their deepest thoughts with AI apps in a bid to tackle depression, which is one predictor of suicide. Mobile apps like Woebot and Wysa allow users to talk through their problems with a bot that responds in ways that have been approved for treatments such as cognitive behavioural therapy.

In the same way, machines may be able to help intervene and stamp out bullying. But until AI perfects a way of detecting the most subtle and cunning bullying tactics, the responsibility will still lie with us.

“It can’t just be computers doing the whole fight,” says Nikki Mattocks.
