Using the chatbot is more direct and perhaps more engaging, says Donald Findlater, director of the Stop It Now helpline run by the Lucy Faithfull Foundation. After the chatbot appeared more than 170,000 times in March, 158 people clicked through to the helpline website. While the number is “modest,” Findlater says, those people have taken an important step. “They’ve overcome a lot of hurdles to do that,” he says. “Anything that stops people just beginning the journey is a measure of success,” adds the IWF’s Hargreaves. “We know that people are using it. We know they’re making referrals, we know they’re accessing services.”
Pornhub has a troubled record on moderating videos on its website, and reports have detailed how women and girls had videos of themselves uploaded without their consent. In December 2020, Pornhub removed more than 10 million videos from its website and began requiring people uploading content to verify their identity. Last year, 9,000 pieces of CSAM were removed from Pornhub.
“The IWF chatbot is another layer of protection to ensure users are informed that they will not find illegal material on our platform and will refer them to Stop It Now to help them change their behavior,” says a spokesperson for Pornhub, adding that the company has “zero tolerance” for illegal material and has clear policies around CSAM. Those involved in the chatbot project say that Pornhub volunteered to participate, that it is not being paid to do so, and that the system will run on Pornhub’s UK website for the next year before being evaluated by external academics.
John Perrino, a policy analyst at the Stanford Internet Observatory who is not connected to the project, says there has been a push in recent years to build new tools that use “safety by design” to combat online harm. “It is an interesting collaboration, in line with policy and public perception, to help users and direct them toward healthy resources and healthy habits,” says Perrino. He adds that he has never seen a tool exactly like this developed for a porn website before.
There is already some evidence that this type of technical intervention can make a difference in diverting people from potential child sexual abuse material and reducing the number of CSAM searches online. For example, back in 2013, Google worked with the Lucy Faithfull Foundation to introduce warning messages when people search for terms that could be linked to CSAM. There was a “thirteen-fold reduction” in the number of searches for child sexual abuse material as a result of the warnings, Google said in 2018.
A separate study in 2015 found that search engines that implemented blocking measures against terms related to child sexual abuse saw the number of searches drastically decrease, compared with those that did not implement such measures. A set of ads designed to direct people searching for CSAM to helplines in Germany saw 240,000 website clicks and more than 20 million impressions over a three-year period. A 2021 study that analyzed warning pop-ups on gambling websites found the nudges had a “limited impact.”
Those involved with the chatbot stress that they don’t see it as the only way to prevent people from finding child sexual abuse material online. “The solution is not a magic bullet that is going to stop the demand for child sexual abuse on the internet. It is deployed in a particular environment,” says Sexton. However, if the system is successful, he adds, it could be rolled out to other websites or online services.
“There are other places they’ll look as well, whether it’s on various social media sites, whether it’s on various gaming platforms,” says Findlater. However, if that were to happen, the triggers that cause the chatbot to appear would have to be assessed, and the system rebuilt for the specific website it runs on. The search terms used on Pornhub, for example, would not work in a Google search. “We can’t transfer a set of warnings to another context,” says Findlater.