According to research, as many as 1 billion children aged 2 to 17 experienced some form of abuse in 2014. While technological advances have brought many benefits, the boom of the internet, social networking platforms, and mobile computing has also created channels through which child abuse and predatory acts against children can take place.
Five Eyes governments—Australia, Canada, New Zealand, the UK, and the US—are leading the charge to compel tech industry stakeholders to step up their involvement in the fight against abusive acts and practices online. At a press conference at the US Department of Justice, the group unveiled eleven voluntary principles to help prevent and mitigate online child sexual exploitation and abuse. Representatives from tech giants Facebook, Google, and Microsoft were present at the event.
Fortunately, the problem is also in the sights of certain initiatives in the tech sector.
L1ght is among the few ventures focused on protecting children's safety online. Positioned as an anti-toxicity company, L1ght has developed a platform that uses artificial intelligence (AI) to accurately identify and predict harmful behavior and content online. This capability can help online platforms quickly take action against abusive users and content before they harm young internet users.
Fighting abuse online
Online platforms have become a means for criminals and predators to consume and share content involving abused children. A New York Times report found that 45 million online photos and videos of children being abused were reported by tech companies in 2018. The same report indicated that Facebook's ecosystem was used in 12 million cases that included such images.
Most platforms have measures in place to prevent the spread of harmful content. Typically, content can be subjected to review, moderation, and filtering. Facebook and YouTube already have mechanisms to remove and tag abusive content and prevent it from being re-uploaded. Facebook has dedicated moderation agents who review reported and flagged content. Unfortunately, gaps in these measures still allow harmful content to spread, especially new content shared by tech-savvy violators.
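As a rough illustration of the re-upload prevention mentioned above, the sketch below checks a new upload against a list of hashes of previously confirmed abusive files. The hash list and function names are hypothetical, and production systems rely on perceptual hashing (such as Microsoft's PhotoDNA), which tolerates resizing and re-encoding, rather than the exact cryptographic hash used here for simplicity; this is not how Facebook or YouTube actually implement it.

```python
import hashlib

# Hypothetical set of hashes of content already confirmed as abusive.
# Real systems use perceptual hashes that survive re-encoding; an exact
# SHA-256 digest is used here only to keep the illustration simple.
KNOWN_ABUSIVE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_abusive(file_bytes: bytes) -> bool:
    """Return True if the uploaded file matches a known abusive item."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_ABUSIVE_HASHES

def handle_upload(file_bytes: bytes) -> str:
    if is_known_abusive(file_bytes):
        return "rejected"           # block the re-upload and report it
    return "queued_for_review"      # new content goes to moderation
```

The obvious weakness of exact matching is that any modification to the file produces a different hash, which is precisely why new or altered content slips through the gaps described above.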
The principles promoted by the effort include: thoroughly reviewing safety processes, understanding the nature of online child abuse, identifying areas of high risk on platforms, identifying gaps in existing measures, investing in innovative tools and solutions, and responding to evolving threats.
How AI can help
It seems fortuitous that as calls for better solutions grow, efforts that aim to provide them are also emerging. L1ght's platform is poised to become a game-changer in the fight against harmful content and child exploitation. The company's proprietary algorithms can analyze and tag harmful text, images, audio, and video content in real time, and can be deployed and integrated with social media platforms, communication and messaging applications, and online video games. This would further boost the capabilities of filtering and moderation measures so that platforms can instantly take action against violations.
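As a hedged sketch of what such an integration might look like on the platform side, the snippet below routes a message through a classification step before it is published. The `ModerationResult` fields, the confidence threshold, and the stubbed `classify` function are illustrative assumptions, not L1ght's actual API.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    harmful: bool            # overall verdict from the classifier
    categories: list[str]    # e.g. ["bullying", "grooming"]
    confidence: float        # model confidence, 0.0 to 1.0

def classify(text: str) -> ModerationResult:
    """Stand-in for a call to an external AI moderation service.

    A real integration would send the text (or image, audio, or video)
    to the provider's API; a fixed benign result is returned here so
    the sketch runs on its own.
    """
    return ModerationResult(harmful=False, categories=[], confidence=0.99)

def publish_message(text: str) -> str:
    result = classify(text)
    if result.harmful and result.confidence >= 0.9:
        return "blocked"            # stop the content before anyone sees it
    if result.harmful:
        return "held_for_review"    # uncertain cases go to human moderators
    return "published"

print(publish_message("hello"))  # "published"
```

The design point is the split between automatic blocking for high-confidence detections and human review for borderline cases, which is how platforms typically balance speed against false positives.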
Its AI can also continuously learn and adapt to perpetrators' changing methods. Conventional dictionary-based text filtering, for example, can readily block known abusive words or expressions, but it can be circumvented with coded language or deliberate misspellings.
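The limitation of dictionary-based filtering is easy to demonstrate. In the sketch below (using an illustrative blocklist, not any platform's real word list), a naive filter misses a word obfuscated with character substitutions, while a version that normalizes common substitutions catches it; even this stronger version can still be evaded, which is the gap that adaptive AI models aim to close.

```python
import re

BLOCKLIST = {"abuse", "groom"}  # illustrative terms only

# Map common character substitutions back to letters so that
# spellings like "gr00m" or "@buse" still match the blocklist.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})

def naive_filter(text: str) -> bool:
    """Dictionary filter: flags only exact blocklisted words."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(w in BLOCKLIST for w in words)

def normalized_filter(text: str) -> bool:
    """Same check after undoing simple character substitutions."""
    cleaned = text.lower().translate(SUBSTITUTIONS)
    words = re.findall(r"[a-z]+", cleaned)
    return any(w in BLOCKLIST for w in words)

print(naive_filter("they gr00m kids"))       # False: misspelling slips through
print(normalized_filter("they gr00m kids"))  # True: normalization catches it
```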
L1ght recently received $15 million in seed funding to fast-track its development.
A concerted effort
What is truly needed in the fight against this still-growing crisis is a concerted effort among all stakeholders. Global initiatives such as the WePROTECT Global Alliance, which brings together governments, law enforcement agencies, private institutions, and nonprofit organizations to adopt the principles and advance the fight against child abuse, are a step in the right direction.
Tech giants and their massive online platforms do bear much of the burden of establishing measures that make it difficult for harmful content to find its way onto their platforms. Stronger ties between private institutions and law enforcement agencies should also close the gap between identifying perpetrators and victims and ensuring they are dealt with and supported appropriately. The emergence of new solutions specifically aimed at addressing the problem is a most welcome development.