Meta-design: An invisible bias
Puja Mehra wrote an excellent breakdown of the Narendra Modi government’s economic performance over the last four years, the duration for which it has been in office. You should read it if you’re interested in this sort of thing (and you should be).
However, the article’s layout bothers me:
1. The references are listed at the end instead of in-line. The former is a vestige of print publishing, where it isn’t possible to display layered text, so printers listed citations in the page’s footer, or at the end of each chapter, for those who needed to refer to them. In-line references, on the other hand, are far more convenient because they don’t require the reader to jump across the page, or pages, to find a citation; it’s inserted proximate to the claim itself.
2. For all the numbers, and all the “empirical analysis” the article is being lauded for, there isn’t a single chart in it.
The first issue in particular is something I sense a lot of people conflate with being “serious” and “solemn”: an article laid out so that it meets print publishing’s standards, which have been refined over hundreds of years, as if unconcerned with digital publishing, which has been around in its current form for less than a decade.
Another concern is whether the publisher of Mehra’s article, The Hindu Centre for Politics and Public Policy, is tracking the number of people whose eyes rest on the references portion of the page for a (statistically) significant period, and the number of people who click on the links. (Of course, there’s a qualitative funnel here whereby a reader clicks a reference not to verify that it supports the claim to which it has been attached but simply to learn more; excluding these people,) I suspect a majority of readers will rest easier knowing that a specific claim _has been_ referenced (“blah blah gurgle nyah[21]”), and not bother to validate it themselves. That’s how we all read Wikipedia: we trust the platform to have robust rules for maintaining reliability, and we trust volunteers to want to apply those rules.
When we take the existence, as well as the trustworthiness, of this relationship for granted, we sow the seeds for a meta-design to take effect on the page we’re reading: the mere presence of certain elements encourages us to interpret the substance on the page one way or another. Put another way, because of its ubiquity and its heritage, print publishing brings with it an attendant set of processes that must be followed before a book, article, review, etc. can be published. When the published content contains symbols suggesting these processes have been followed, we assume due diligence has been done on the publisher’s part to check and prepare the content (especially since words once printed can’t be unprinted).
What’s curious here is that we believe we can trust an article more if it contains these symbols _irrespective_ of whether it has been published offline or online. For example, when you see a superscripted [?] next to a claim on Wikipedia (“blah blah gurgle[?] nyah”), your mind immediately works to discard the claim from memory (at least mine does) – in much the same way I sit up and pay more attention when I see an article laid out in two columns with references strewn around it, because it’s likely a scientific paper.
Similarly, on detecting such meta-design markers in Mehra’s article, and trusting in the validity of what those markers stand for, we’re encouraged to conclude that the article is trustworthy and reliable. I would be interested in any scientific studies that measure the strength of this encouragement, and how readers’ impressions of the article change as a result, as a function of how much of the article they’ve already read.
Featured image credit: Geraldine Lewa/Unsplash.