Credibility on the web

There are a finite number of sources from which anyone receives information. The most prominent among them are media houses (incl. newspapers, news channels, radio stations, etc.) and scientific journals (at least with respect to the subjects I work with).

Seen one way, these establishments generate the information that we receive. Without them, stories would remain localized, confined, away from the ears that could accord them gravity.

Seen another way, these establishments are also motors: without their motive force, information wouldn’t move around as it does – assuming, of course, that they don’t alter the information itself.

With more such “motors” in the media mix, the second perspective is becoming the norm. Even if information isn’t picked up by one house, it could be set sailing through a blog or a citizen-journalism (CJ) initiative. The means through which we learn something, or stumble upon it for that matter, are overlapping more and more, lines crossing each other’s paths more often.

Veritably, it’s a maze. In such a labyrinthine setup, the entity that stands to lose the most is the faith of a reader/viewer/consumer in the credibility of the information received.

In many cases, with a more interconnected web – the largest “supermotor” – the credibility of one bit of information is checked in one location, by one entity. Then, as it moves around, every following entity inherits that credibility check.

For instance, on Wikipedia, credibility is established by citing news websites, newspaper/magazine articles, journals, etc. Jimmy Wales’ enterprise doesn’t have its own process of verification in place. Sure, there are volunteers who almost constantly police its millions of pages, but all they can do is check whether the citation is valid, and whether there are any contrary reports challenging the claims being staked.

One way or another, if a statement has appeared in a publication, it can be cited to have the reader infer a fact.

In this case, Wikipedia has inherited the credibility established by another entity. If the verification process had failed in the first place, the error would’ve been perpetuated by different motors, each borrowing from the credibility of the first.

Moreover, the more strata the information percolates through, the harder it becomes to establish a chain of accountability.


My largest sources of information are:

  1. Wikipedia
  2. Journals
  3. Newspapers
  4. Blogs

(Social media is just a popular aggregator of news from these sources.)

Wikipedia cites news reports and journal articles.

News reports are compiled through the combined efforts of reporters and editors. Reporters verify the information they receive by checking whether it’s repeated by different sources under (if possible) different circumstances. Editors proofread the copy and are (or must remain) sensitive to factual inconsistencies.

Journals have the notorious peer-review mechanism. Each paper is subject to a thorough verification process intended to weed out all mistakes, errors, information “created” by lapses in the scientific method, and statistical manipulations and misinterpretations.

Blogs borrow from such sources and others.

Notice: even in describing the passage of information through these ducts, I’ve vouched for reporters, editors, and peer review. What if they fail me? How would I find out?


The point of this post was to illustrate:

  1. The onerous yet mandatory responsibility that verifiers of information must assume,
  2. That there aren’t enough of them, and
  3. That there isn’t a mechanism in place that periodically verifies the credibility of some information across its lifetime.

How would you ensure the credibility of all the information you receive?