The past few years have not been kind to Facebook’s reputation. The social media giant has enjoyed impressive success – and its corresponding financial rewards – but recently has been embroiled in scandal. Whether under fire for privacy concerns, fake news, discrimination, or other indiscretions, Facebook is no longer receiving the benefit of the doubt from the public about its behavior.
Not helping matters has been its perceived slow progress tackling controversial issues – and the serious consequences that have resulted. Facebook is experiencing explosive growth around the world but seemingly lacks the resources to police its content at the same pace. The platform has, for example, been a breeding ground for anti-Rohingya propaganda in Myanmar, where the Muslim minority group has long faced persecution. Manufactured Facebook content stoked the flames of weeks’ worth of government-approved mass killings, to the point that a United Nations report found the company to have “played a ‘determining role’ in the crisis.”
The open question is how Facebook will solve these issues at scale – massive growth demands proportional increases in manpower to oversee the ensuing content. But in a 2018 congressional hearing, Facebook CEO Mark Zuckerberg offered a decidedly non-human solution to reviewing content from 2.3 billion regular users: artificial intelligence. “Building AI tools is going to be the scalable way to identify and root out most of this harmful content,” he said. Wired reported chief technology officer Mike Schroepfer as saying AI “is the best tool to implement the policy – I actually don’t know what the alternative is.”
On paper, AI is an ideal solution. The company already employs algorithms to recognize and remove pornographic content – after all, asking humans to review every post is a daunting, expensive, and perhaps impossible task. But as Wired points out, “training software to reliably decode text is much more difficult than categorizing images.” Facebook needs AI that can not only detect hate speech, conspiracy theories, and more in “the shifting nuances of more than 100 different languages,” but do so well enough that its “roughly 15,000 human reviewers” need only catch what falls through the cracks.
Machine learning is good at “sorting images into categories,” and advances in quality mean “significant jumps in the accuracy of automatic translations.” But text and its nuances continue to present sizeable hurdles for AI. Context and tone are especially important with social media posts, lending additional layers of complexity, and as Wired notes, “today’s machine learning algorithms must be trained with narrow, specific data” – something Facebook tried to rectify with changes to the way its human moderators interact with content, in part to label data to train its algorithms.
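The gap Wired describes can be seen in a toy sketch (purely illustrative, not Facebook’s system): a keyword blocklist, the crudest possible text “classifier,” flags idioms and threats alike, which is exactly the ambiguity that labeled training data exists to resolve.

```python
# Toy illustration (not Facebook's system): why keyword matching fails
# at content moderation, and why labeled examples are needed instead.

# Hand-labeled sample posts -- the kind of data human moderators produce.
posts = {
    "this traffic is killing me": "benign",        # idiom
    "we will kill them all": "harmful",            # threat
    "that comedian killed it last night": "benign" # slang for success
}

def keyword_flag(text, blocklist=("kill",)):
    """Naive filter: flag any post containing a blocked substring."""
    return "harmful" if any(word in text for word in blocklist) else "benign"

flagged = {post: keyword_flag(post) for post in posts}
errors = sum(1 for post, label in posts.items() if flagged[post] != label)
print(f"keyword filter misclassified {errors} of {len(posts)} posts")
```

All three posts contain “kill,” so the filter flags all three and misclassifies the two benign ones. Only a model trained on labeled, in-context examples can learn that the same word is a threat in one sentence and a figure of speech in another, which is why narrow, specific training data matters so much.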
Facebook’s AI moderation will need to improve before it can accurately interact with text at scale. But with significant strides becoming the norm in AI and machine learning, that day may be closer than it appears. AI is not yet the solution to the challenges facing Facebook, but there is reason to believe it is part of a promising future.
The Investment and Financial Industry Faces the Same AI-Driven Revolution
Hedge funds and large institutional investors have been using artificial intelligence to analyze large data sets for investment opportunities, and they have also unleashed AI on charts to discover patterns and trends. Not only can AI scan thousands of individual securities and cryptocurrencies for patterns and trends, it also generates trade ideas based on what it finds. Hedge funds have had a leg up on the retail investor for some time now.
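What that pattern scanning can look like in miniature (an illustrative sketch with made-up prices, not any fund’s or Tickeron’s actual algorithm): run every ticker on a watchlist through a rule that detects a moving-average crossover, one of the simplest chart patterns, and emit a trade idea when the pattern appears.

```python
# Illustrative sketch: scan a watchlist for a short/long moving-average
# crossover and emit a trade idea per ticker. Prices are invented sample
# data; real systems use live feeds and far richer pattern libraries.

def moving_average(prices, window):
    """Simple moving average over the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """'buy' if the short MA crosses above the long MA on the latest bar,
    'sell' on a cross below, 'hold' otherwise."""
    if len(prices) < long + 1:
        return "hold"
    prev_short = moving_average(prices[:-1], short)
    prev_long = moving_average(prices[:-1], long)
    cur_short = moving_average(prices, short)
    cur_long = moving_average(prices, long)
    if prev_short <= prev_long and cur_short > cur_long:
        return "buy"
    if prev_short >= prev_long and cur_short < cur_long:
        return "sell"
    return "hold"

# Hypothetical closing prices for a two-ticker watchlist.
watchlist = {
    "AAA": [10, 10, 10, 10, 10, 10, 13],  # breakout: short MA crosses up
    "BBB": [20, 20, 20, 20, 20, 20, 14],  # breakdown: short MA crosses down
}
for ticker, prices in watchlist.items():
    print(ticker, crossover_signal(prices))
```

A production system replaces the hand-written rule with models trained on many such patterns across thousands of instruments, but the scan-then-signal loop has the same shape, and running it at that scale has long been an edge reserved for the firms that could build it.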
Not anymore. Tickeron has launched a new investment platform designed to give retail investors access to the same sophisticated AI across a multitude of functions. No longer is AI confined to the biggest hedge funds in the world; it can now be accessed by everyday investors. Learn how at Tickeron.com.