
Metaphorical

Profile Information

Name: Kurt Cagle
Gender: Male
Hometown: Cascadia
Member since: Sat Dec 3, 2016, 02:02 AM
Number of posts: 1,353

About Me

Contributing Writer, Forbes Magazine

Journal Archives

A Secured Fairness Doctrine

I work at the intersection of AI and journalism, and have been thinking long and hard about the Fairness Doctrine.

The Fairness Doctrine was established by the FCC in 1949, in part as a check on "yellow journalism" - the practice of using the media to disseminate misinformation. It required broadcast licensees to cover controversial issues of public importance and to present contrasting viewpoints, with the FCC's licensing process as the enforcement stick against those who knowingly aired libelous content and outright lies. Under Reagan, the FCC scrapped the Fairness Doctrine in 1987, paving the way for Murdoch, Fox News, the breakdown of trust, and the rise of propaganda in the media.

One of the biggest challenges in defending the Fairness Doctrine initially was that absolute truth is generally an illusion: news by its very nature has an observer, and that observer will always be biased. The Fairness Doctrine as it existed didn't really change the way news was reported; it just made it painful to tell egregious lies by penalizing the outlets that violated it. It was a gentleman's agreement, and one that had the potential to become a tool for censorship.

Today, the biggest problem we face is the deliberate move toward fascism through the manipulation of lies and fake news, and the Fairness Doctrine as it stood would arguably be unenforceable. One thing has changed since then, however, and it arose on the web: the Certificate Authority (CA). The idea behind a CA is relatively simple. Security on the web can only work if you trust that the party you are dealing with is who they claim to be. This is what makes it possible to run credit cards on the web with some degree of trust: when you get information from a site, that site must acquire a certificate from a trusted authority in order to be considered trusted, and that authority in turn needs a certificate from another trusted source.

What this essentially creates is a chain of trust, and by extension a record of parties that can be sued if a certificate was issued and the certified site broke the law or published false content. If you're a company selling a product and not delivering it, the issuing authority has the right to revoke your certificate so that it is no longer liable.
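The chain-of-trust idea above can be sketched in a few lines. This is a toy model, not real X.509: the certificate fields, the HMAC-based "signatures," and every name and key below are hypothetical stand-ins for illustration.

```python
import hashlib
import hmac

# Hypothetical issuer keys; in a real PKI these would be private
# signing keys held by the Certificate Authorities.
KEYS = {"RootCA": b"root-secret", "IntermediateCA": b"mid-secret"}

def sign(issuer: str, subject: str) -> str:
    """Toy signature: an HMAC of the subject name under the issuer's key."""
    return hmac.new(KEYS[issuer], subject.encode(), hashlib.sha256).hexdigest()

def make_cert(subject: str, issuer: str) -> dict:
    return {"subject": subject, "issuer": issuer, "sig": sign(issuer, subject)}

def verify_chain(chain: list, trusted_root: str = "RootCA") -> bool:
    """Check every certificate's signature against its issuer's key,
    and require the chain to terminate at the trusted root."""
    for cert in chain:
        expected = sign(cert["issuer"], cert["subject"])
        if not hmac.compare_digest(expected, cert["sig"]):
            return False
    return chain[-1]["issuer"] == trusted_root

# A leaf certificate vouched for by an intermediate, which is in turn
# vouched for by the root: the chain of trust.
chain = [
    make_cert("shop.example.com", "IntermediateCA"),
    make_cert("IntermediateCA", "RootCA"),
]
print(verify_chain(chain))  # True

# A certificate with a bogus signature fails verification.
forged = [{"subject": "evil.example.com", "issuer": "IntermediateCA",
           "sig": "bogus"}]
print(verify_chain(forged))  # False
```

The point of the sketch is the liability trail: every link in the chain names the party that vouched for the one below it.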

The same kind of concept can be applied to content being published on the web. Right now, there's really nothing stopping a Facebook or Twitter from selling advertising to a company running disinformation bots, because there's no meaningful legislation that would fine either the company with the bots or Facebook (or open them to enforceable litigation). A similar type of authentication chain could, however, be set up that would provide what amounts to an escrow requirement for publishers: a chain of provenance that would force a rebroadcaster (such as Facebook) to reveal the source of its advertiser-generated content in a computer-legible form. Any advertiser-generated content would then have to identify its issuing authority, which could be held liable.
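One way to picture such a computer-legible provenance chain is a trail of hashes, where each party that handles a piece of content is recorded and chained to the previous hop, so the trail can't be truncated or reordered silently. Everything here, from the handler names to the record format, is a hypothetical sketch, not an existing standard.

```python
import hashlib
import json

def hop_digest(content: str, handler: str, prev: str) -> str:
    """Digest binding the content, the current handler, and the prior hop."""
    return hashlib.sha256(f"{prev}|{handler}|{content}".encode()).hexdigest()

def add_hop(trail: list, content: str, handler: str) -> list:
    prev = trail[-1]["digest"] if trail else ""
    trail.append({"handler": handler,
                  "digest": hop_digest(content, handler, prev)})
    return trail

def trace(trail: list, content: str):
    """Re-derive every digest; return the accountable handlers in order,
    or None if the trail doesn't match the content (tampering)."""
    prev = ""
    for hop in trail:
        if hop["digest"] != hop_digest(content, hop["handler"], prev):
            return None
        prev = hop["digest"]
    return [hop["handler"] for hop in trail]

# Hypothetical path of one ad from its creator to a rebroadcaster.
ad = "Candidate X eats kittens."
trail = []
for handler in ["AcmeAdShop", "AdExchange", "SocialSite"]:
    add_hop(trail, ad, handler)

print(json.dumps(trace(trail, ad)))  # every party is on the record
print(trace(trail, "edited text"))   # None: content no longer matches
```

With a trail like this, a rebroadcaster can't claim ignorance of where a piece of content came from, which is the escrow-like liability the paragraph above describes.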

It wouldn't stop legitimate trolls (as odious as they might be), but it would make it much harder for generated content to be passed off anonymously, and the social media rebroadcaster could then be sued if a publisher it carried engaged in such practices. It would also enable tools that could rate the likelihood that a given piece of content is fake. Not surprisingly, the GOP in the Senate has been fighting this tooth and nail.

This process is a non-censorship-based approach to controlling a very real problem, and it should be a high priority in a new Democratic administration.