Posts Tagged ‘Chrome extension’

Algorithms, Rating Systems, and Artificial Intelligence

December 6, 2019

The title of this post is identical to the title of one of the fixes for the internet proposed in Richard Stengel’s informative work, Information Wars. Currently, the algorithms that decide which story goes to the top of Google’s search results or Facebook’s newsfeed rely in large part on how viral a story is, meaning how often it has been linked to or shared by other users. In effect, they equate popularity with value: the working assumption is that the more popular a story is, the more valuable it is. So a story about a Kardashian quarrel may well outrank one about insecure nuclear weapons in Pakistan. Research shows that emotional or sensational stories, which are also the stories most likely to contain misinformation, are shared far more widely than calmer, less sensational ones. Consequently, these algorithms boost deceptive stories over factual ones. They also give people an incentive to create emotional, misleading stories, because such stories generate more advertising revenue.
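To make the point concrete, here is a hypothetical sketch of an engagement-driven ranking score. The weights, numbers, and story names are invented for illustration; real platform algorithms are proprietary and far more complex. The point is simply that when engagement alone drives the score, a heavily shared sensational story beats a sober, important one.

```python
# Hypothetical sketch of an engagement-driven ranking score: a weighted
# sum of shares and links, decayed by age. All numbers are invented.

def virality_score(shares: int, links: int, hours_old: float) -> float:
    """Rank a story purely by engagement signals, decayed by age."""
    engagement = 2.0 * shares + 1.0 * links  # popularity stands in for value
    return engagement / (1.0 + hours_old)    # newer stories rank higher

# A sensational story with many shares outranks a sober, important one.
stories = [
    ("Kardashian quarrel", virality_score(shares=9000, links=300, hours_old=2)),
    ("Pakistani nuclear security report", virality_score(shares=400, links=1200, hours_old=2)),
]
stories.sort(key=lambda s: s[1], reverse=True)
print(stories[0][0])  # the more-shared story comes out on top
```

Nothing in the score measures accuracy or importance, which is exactly the problem the post describes.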

Currently, these algorithms are black boxes that no one outside the companies can see into. Platform companies should be compelled to be more transparent about them. If the companies had to publicly explain their formulas for relevance and importance, users could make intelligent choices about which search engines to use. Wouldn’t you like to know the priorities of the search engine(s) you use?

Stengel notes that there has been a valuable movement toward offering rating systems for news. These systems allow users to evaluate the trustworthiness of individual stories and of the news organizations themselves. A study by the Knight Foundation found that when a news rating tool marked a site as reliable, readers’ belief in its accuracy went up. A negative rating for a story or brand made users less likely to use the information.

The Trust Project posts “Trust Indicators” for news sites, providing details of an organization’s ethics and standards. Slate has a Chrome extension called “This is Fake,” which puts a red banner over content that has been debunked, as well as on sites that are recognized as “serial fabricators.” Factmata is a start-up that is attempting to build a community-driven fact-checking system in which people can correct news articles. Stengel is on the board of advisors of NewsGuard, which labels news sites as trustworthy or not as determined by extensive research and a rigorous set of criteria.
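The core mechanism behind a tool like “This is Fake” can be sketched simply: check the domain of the page a user is viewing against a maintained list of known fabricators and flag matches. The list and domain names below are invented for illustration; the actual extension's logic and data are not public in this form.

```python
# Hedged sketch of one way a flagging extension could work: compare a
# page's host against a curated list of "serial fabricator" domains.
# The domains here are invented placeholders, not real sites.

from urllib.parse import urlparse

SERIAL_FABRICATORS = {"example-fabricator.test", "fake-news-mill.test"}  # hypothetical list

def should_flag(url: str) -> bool:
    """Return True if the URL's host is on the fabricator list."""
    host = urlparse(url).hostname or ""
    return host in SERIAL_FABRICATORS

print(should_flag("https://example-fabricator.test/story"))  # True
print(should_flag("https://reliable-outlet.test/story"))     # False
```

In a real browser extension this check would run in a content script and trigger the red banner overlay; the hard part is maintaining the list, not the lookup.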

Stengel writes that the greatest potential for detecting and deleting disinformation and “junk news” online lies in artificial intelligence and machine learning. This means using computer systems to perform human tasks such as visual perception, speech recognition, decision-making, and reasoning in order to detect and then delete false and misleading content. Pattern recognition finds clusters of dubious content. Data-based network analysis can distinguish between online networks formed by actual human beings and those artificially constructed by bots, and companies can adjust their algorithms to favor the human-created networks over the artificial ones. The platforms can even offer a predictor, based on sourcing, data, and precedent, of whether a given piece of content is likely to be false.
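As a toy illustration of the kind of signal such network analysis might use: automated accounts often post at unnaturally regular intervals, so very low variance in the gaps between posts can suggest a bot. The heuristic and threshold below are invented for illustration and are far cruder than anything a real platform would deploy.

```python
# Hypothetical bot-likeness signal: flag accounts whose inter-post gaps
# are suspiciously uniform. Threshold and data are invented examples.

from statistics import pstdev

def looks_automated(post_times: list, threshold: float = 1.0) -> bool:
    """Flag an account whose posting intervals are near-identical.

    post_times: timestamps in seconds, ascending order.
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    return pstdev(gaps) < threshold  # machine-like regularity

bot = [0, 60, 120, 180, 240]      # posts exactly every minute
human = [0, 45, 300, 330, 1500]   # irregular, bursty activity
print(looks_automated(bot))    # True
print(looks_automated(human))  # False
```

Real systems combine many such signals (content, timing, network structure) in trained models rather than relying on a single threshold.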

Of course, the bad guys can use it too. Stengel writes that they are busily developing their own systems to understand how their target audiences behave online and how to tailor disinformation for them so that they will share it. Platforms can help advertisers and companies find and reach their best audiences, and this works for bad guys as well as good. Platforms have to work to stay one step ahead of the disinformationists by developing more nuanced AI systems that protect their users from disinformation and from information they do not want.