Facebook is indisputably one of technology’s biggest companies. As of March 2019, it boasted an average of 1.56 billion daily active users across its family of apps, including Instagram, WhatsApp, and Messenger, with 66 percent of total users active daily rather than merely monthly. That massive user base contributed to a 61 percent increase in profits during Q4 2018, to $6.9 billion.
While the company is more profitable than ever, it is not immune to criticism. Recent times have seen Facebook taken to task for a string of issues, including data impropriety, controversial and dangerous user-posted content with real-world consequences, and privacy concerns. Facebook has always employed moderators to evaluate flagged content, but manually sifting through and vetting mountains of material from its worldwide user base has proven an impossible task, leading Facebook to double down on artificial intelligence to better moderate its platform.
The adoption process has not been without friction. Artificial intelligence relies on human-programmed algorithms to perform specific functions, making it susceptible to bias, intended or otherwise. A recent Wired report detailed Facebook program manager Lade Obamehinti “[discovering] that a prototype of the company's video chat device, Portal, had a problem seeing people with darker skin tones” before rectifying the problem – a byproduct of the underrepresentation of “women and people with darker skin…in the training data.” As a result, the algorithm misidentified people from those groups at far higher rates than people who were well represented in the training data.
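The mechanism described above – skewed training data producing skewed error rates – can be illustrated with a small, self-contained simulation. This is not Facebook's system or data; the group names, score distributions, and thresholds below are all hypothetical, chosen only to show how a detector calibrated on a majority-dominated training set can fail far more often for an underrepresented group.

```python
import random

random.seed(42)

def simulate(n, mean):
    """Synthetic 'detection scores' for faces from one demographic group.
    (Illustrative numbers only; real detectors are far more complex.)"""
    return [random.gauss(mean, 0.1) for _ in range(n)]

# Hypothetical scenario: group A dominates the training set, and group B's
# scores cluster lower because the features were tuned mostly on A-like data.
train_a = simulate(900, 0.7)   # 90% of training data
train_b = simulate(100, 0.4)   # 10% of training data

# Calibrate a single detection threshold so 95% of *training* faces pass.
train = sorted(train_a + train_b)
threshold = train[int(0.05 * len(train))]

def detection_rate(scores):
    return sum(s >= threshold for s in scores) / len(scores)

# Evaluate on fresh, equally sized samples from each group.
rate_a = detection_rate(simulate(1000, 0.7))
rate_b = detection_rate(simulate(1000, 0.4))
print(f"threshold={threshold:.2f}  group A: {rate_a:.0%}  group B: {rate_b:.0%}")
```

The overall pass rate looks healthy because group A dominates the calibration data, yet group B faces are missed far more often – roughly the failure mode Obamehinti uncovered in the Portal prototype.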
AI bias is on the radar of leading researchers, who have “raised the alarm about the risk of biased AI systems as they are assigned more critical and personal roles.” Mitigating bias remains vitally important for a company like Facebook, which needs AI to work at scale while being conscious of its real-world repercussions. The company recently “deployed a content filtering system to identify posts that may be spreading political misinformation during India’s month-long national election,” flagging posts “in several of the country’s many languages” for human moderators to review. It and similar fake news-curbing initiatives that rely on crowdsourcing are especially susceptible to uniformity of opinion and background, raising the stakes to get things right.
AI may be advancing in efficacy and usefulness, but it still needs human guidance and oversight to work ethically, eliminate bias, and identify its shortcomings. “When AI meets people,” explained Obamehinti, “there’s an inherent risk of marginalization.” Though the process lacks “simple answers” (in the words of Facebook CTO Mike Schroepfer), efforts are underway at Facebook to tackle potential issues. Obamehinti’s discoveries have spurred “new tools and processes to fend off problems created by AI,” with subsequent new “[processes] for inclusive AI…being adopted by several product development groups at Facebook.” With increased awareness, Facebook hopes to continue reaping the benefits of AI without promulgating its downsides.
If You’re Wondering When A.I. Will Start Making Market Predictions…
Guess what – it already is. Hedge funds and large institutional investors have been using artificial intelligence to analyze large data sets for investment opportunities, and they have also unleashed A.I. on charts to discover patterns and trends. Not only can the A.I. scan thousands of individual securities and cryptocurrencies for patterns and trends, it also generates trade ideas based on what it finds. Hedge funds have had a leg up on the retail investor for some time now.
Not anymore. Tickeron has launched a new investment platform, and it is designed to give retail investors access to sophisticated AI for a multitude of functions:
And much more. No longer is AI just confined to the biggest hedge funds in the world. It can now be accessed by everyday investors. Learn how on Tickeron.com.