Study by group headed by former Facebook executive says scale, complexity and danger of the threat have escalated
Wider smartphone ownership and faster internet access are giving rise to a disturbing trend: a surge in on-demand, live-streamed online child sexual abuse, according to a report published by a children’s welfare group founded by former Facebook Inc. executive Joanna Shields.
The WeProtect report, published Wednesday, shows such advances have cut the cost of accessing a live video stream of child abuse to as little as $15, from about $50 just a few years ago.
“These criminals are feeling emboldened,” said Shields, who founded WeProtect after serving as the head of Facebook’s Europe, Middle East and Africa arm and as a digital adviser to the British government. “They can speak to communities of other people like them and they feel safe.”
The report was compiled with input from Interpol, the U.S. Department of Justice, the U.K.’s National Crime Agency and Swedish software firm NetClean.
The circulation of child sexual exploitation imagery has risen enormously as a result of consumer technology. In 1990, the Internet Watch Foundation reported that the U.K. was estimated to have about 7,000 images in circulation; in 2017, it wasn’t uncommon for police to seize hundreds of thousands of images from individuals.
WeProtect’s report highlights that a system deployed by the Canadian Centre for Child Protection to automatically identify abuse images on the web — Project Arachnid — now identifies 80,000 unique images worldwide every month.
The share of images analyzed in a separate IWF report that depict children aged 10 or younger has declined — from 80 percent in 2014 to 53 percent in 2016 — while the share depicting children aged 11 to 15 has risen, from 18 percent to 45 percent over the same period. This shift is partly attributed to the greater likelihood that people will report the abuse of much younger children, but also to teenagers “self-producing” content.
“There’s an epidemic of young people sharing sexual images of themselves,” said Shields. “There’s a strange phenomenon where young people sort of test each other out through messaging and share images before they even bother to go out on a date,” she said.
Shields said this has resulted in pedophiles increasingly creating fake profiles on networks that appeal to children in order to pose as minors. “There are organized crime entities that do this and extort money from children,” she said, adding that these images are now often used as a form of “currency” to exchange for entry into hidden pedophile rings hosted on the dark web.
Microsoft Corp.’s PhotoDNA technology uses a digital fingerprinting system that lets companies instantly match images uploaded to their platforms against known duplicates. Twitter Inc. and Facebook both use it to identify and remove known photographs of graphic abuse, and in 2016, Facebook, Microsoft, Twitter and Google’s YouTube partnered to use similar methods to identify terrorist content as well.
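The matching workflow described above can be sketched in a few lines of Python. This is an illustrative stand-in only: PhotoDNA’s actual algorithm is proprietary and uses robust perceptual hashes that survive resizing and recompression, whereas the cryptographic hash below only matches byte-identical files. The function names and the sample byte strings are hypothetical.

```python
import hashlib

# Hypothetical stand-in for a database of known abuse-image fingerprints.
# Real PhotoDNA hashes are perceptual; SHA-256 here only illustrates the flow.
known_fingerprints = set()

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in fingerprint: an exact cryptographic hash of the image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_known_image(image_bytes: bytes) -> None:
    """Add a confirmed image's fingerprint to the block list."""
    known_fingerprints.add(fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Return True if an uploaded image matches a known fingerprint."""
    return fingerprint(image_bytes) in known_fingerprints

register_known_image(b"example-known-image")
print(should_block_upload(b"example-known-image"))  # True
print(should_block_upload(b"some-new-image"))       # False
```

The point of the design is that platforms never need to store or inspect the original imagery at upload time: a one-way fingerprint is computed and checked against the shared list, which is why multiple companies can cooperate on the same database.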
But Shields says the world’s biggest technology companies are not doing enough.
“These companies embrace communities of advertisers and developers,” she said, “and they need to embrace the community of charities and support organizations that are dealing with the problems that these children have.”
Shields says she believes the companies have good intentions, but that “these are some of the most profitable companies in the world so in some sense we can’t really give them a pass.” She suggested that if the issue became a top concern of shareholders, “it would be an existential crisis” for any of these businesses.
The issue of objectionable or hateful content hasn’t been lost on advertising executives. On Feb. 12, Keith Weed, the chief marketing officer of Unilever, threatened to pull advertising from Facebook and Alphabet Inc.’s Google if they didn’t address the “division” and promotion of “anger and hate” their platforms permit. In 2017, Diageo Plc, Adidas AG, Deutsche Bank AG and other brands withdrew advertising from YouTube after discovering their promotions were appearing next to videos that appeared to sexualize children.
“The reality is these are commercial products that are owned by companies, and companies have to make a decision whether they want criminals or terrorists to weaponize their platforms,” said Shields.