Keep the crap going


Have you seen the new ads for Google Gemini?

In one version, just as a young employee is grabbing her fast-food lunch, she notices her snooty boss get on an elevator. So she drops her sandwich, rushes to meet her just as the doors are about to close, and submits her proposal in the form of a thick dossier. The boss asks her for a 500-word summary to consume during her minute-long elevator ride. The employee turns to Google Gemini, which digests the report and spits out the gist, which the employee then regurgitates to the boss’s approval. The end.


Isn’t this unsettling? Google isn’t alone either. In May this year, Apple released a tactless ad for its new iPad Pro. From Variety:

The “Crush!” ad shows various creative and cultural objects — including a TV, record player, piano, trumpet, guitar, cameras, a typewriter, books, paint cans and tubes, and an arcade game machine — getting demolished in an industrial press. At the end of the spot, the new iPad Pro pops out, shiny and new, with a voiceover that says, “The most powerful iPad ever is also the thinnest.”

After the backlash, Apple backtracked and apologised — and then produced two ads in November for its Apple Intelligence product showcasing how it could help thoughtless people continue to be thoughtless.



The second video is additionally weird because it seems to suggest that reaching all the way for an AI tool makes more sense than setting a reminder in the calendar app that comes with every smartphone these days.

And they are now joined in spirit by Google, because bosses can now expect their subordinates to Geminify their way through work that would otherwise have been tedious, or simply impossible to finish on punishingly short deadlines — without the bosses having to think about whether their attitudes towards what is reasonable to ask of their teammates need to change. (This includes a dossier of details that ultimately won’t be read.)

If AI is going to absorb the shock of someone being crappy to you, will we continue to notice that crappiness and demand that they change, or — as Apple and Google now suggest — will we blame ourselves for not using AI to become crappy ourselves? To quote from a previous post:

When machines make decisions, the opportunity to consider the emotional input goes away. This is a recurring concern I’m hearing about from people working with or responding to AI in some way. … This is Anna Mae Duane, director of the University of Connecticut Humanities Institute, in The Conversation: “I fear how humans will be damaged by the moral vacuum created when their primary social contacts are designed solely to serve the emotional needs of the ‘user’.”

The applications of these AI tools have blossomed and millions of people around the world are using them for all sorts of tasks. But even if the ads don’t pigeonhole these tools, they reveal how their makers — Apple and Google — are thinking about what the tools bring to the table and what these tech companies believe to be their value. To Google’s credit, at least, its other ads in the same series are much better (see here and here for examples), but both companies do need to actively cut down on supporting or promoting the idea that crappy behaviour is okay.

Feel the pain

Emotional decision making is in many contexts undesirable – but sometimes it definitely needs to be part of the picture, insofar as our emotions hold a mirror to our morals. When machines make decisions, the opportunity to consider the emotional input goes away. This is a recurring concern I’m hearing about from people working with or responding to AI in some way. Here are two recent examples I came across that set this concern out in two different contexts: loneliness and war.

This is Anna Mae Duane, director of the University of Connecticut Humanities Institute, in The Conversation:

There is little danger that AI companions will courageously tell us truths that we would rather not hear. That is precisely the problem. My concern is not that people will harm sentient robots. I fear how humans will be damaged by the moral vacuum created when their primary social contacts are designed solely to serve the emotional needs of the “user”.

And this is from Yuval Abraham’s investigation for +972 Magazine on Israel’s chilling use of AI to populate its “kill lists”:

“It has proven itself,” said B., the senior source. “There’s something about the statistical approach that sets you to a certain norm and standard. There has been an illogical amount of [bombings] in this operation. This is unparalleled, in my memory. And I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago. Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”

The AI trust deficit predates AI

There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.

But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.

If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?

Trust plays an important role in the public understanding of science. The excerpt above – from an article by Mark Bailey, chair of Cyber Intelligence and Data Science at the National Intelligence University, Maryland, in The Conversation about whether we can trust AI – showcases that.

Bailey treats AI systems as “alien minds” because of their, rather than their makers’, inscrutable purposes. They are inscrutable not just because they are obscured but because, even under scrutiny, it is difficult to determine how an advanced machine-based logic makes decisions.

Setting aside questions about the extent to which such a claim is true, Bailey’s argument about the trustworthiness of such systems can be stratified by the people to whom it is addressed – AI experts and non-AI-experts – and I have a limited issue with the latter vis-à-vis Bailey’s contention. That is, to non-AI-experts – which I take to be the set of all people ranging from those not trained as scientists (in any field) to those trained as such but who aren’t familiar with AI – the question of trust is more wide-ranging. They already place a lot of their trust in (non-AI) technologies that they don’t understand, and probably never will. Should they rethink their trust in these systems? Or should we take their trust in these systems to be ill-founded and requiring ‘improvement’?

Part of Bailey’s argument is that there are questions about whether we can or should trust AI when we don’t understand it. Aside from AI in a generic sense, he uses the example of self-driving cars and a variation of the trolley problem. While these technologies illustrate his point, they also give the impression that AI systems failing to make decisions aligned with human expectations, and struggling to incorporate ethics, is a problem restricted to high technologies. It isn’t. The trust deficit vis-à-vis technology predates AI. Many of the technologies that non-experts trust, but which don’t uphold that trust (so to speak), are not high-tech; examples from India alone include biometric scanners (for Aadhaar), public transport infrastructure, and mechanisation in agriculture. This is because people’s use of any technology beyond their ability to understand it is mediated by social relationships, economic agency, and cultural preferences, and not technical know-how.

For the layperson, trust in a technology is really trust in some institution, individuals or even some organisational principle (traditions, religion, etc.), and this is as it should be – perhaps even for more sophisticated AI systems of the future. Many of us will never fully understand how a deep-learning neural network works, nor should we be expected to, but that doesn’t automatically make AI systems untrustworthy. I expect to be able to trust scientists in government and in respectable scientific institutions to discharge their duties in a public-spirited fashion and with integrity, so that I can trust their verdict on AI, or anything else in a similar vein.

Bailey also writes later in the article that some day, AI systems’ inner workings could become so opaque that scientists may no longer be able to connect their inputs with their outputs in a scientifically complete way. According to Bailey: “It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible.” This is fair, but it also misses the point a little by limiting the entities that can intervene to individuals and built-in technical safeguards, like working an ethical ‘component’ into the system’s decision-making framework, instead of taking a broader view that keeps in the picture the public institutions, including policies, that will be responsible for translating the AI systems’ output into public welfare. Even today in India, that’s what’s failing us – not the technologies themselves – and therein lies the trust deficit.

Featured image credit: Cash Macanaya/Unsplash.

Why everyone should pay attention to Stable Diffusion

Many of the people in my circles hadn’t heard of Stable Diffusion until I told them, and I was already two days late. Heralds of new technologies have a tendency to play up every new thing, however incremental, as the dawn of a new revolution – but in this case, their cries of wolf may be real for once.

Stable Diffusion is an AI tool produced by Stability.ai with help from researchers at the Ludwig Maximilian University of Munich and the Large-scale AI Open Network (LAION). It accepts text or image prompts and converts them into artwork based on, but not necessarily understanding, what it ‘sees’ in the input. It created the image below with my prompt “desk in the middle of the ocean vaporwave”. You can create your own here.
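If you’d rather run it yourself than use the browser demo, something like the following sketch should work, assuming you use Hugging Face’s diffusers library with a publicly hosted Stable Diffusion checkpoint (the model name, file name and GPU requirement here are illustrative assumptions, not part of Stability.ai’s own release):

```python
# A minimal text-to-image sketch using the Hugging Face diffusers library
# (an assumption for illustration; the web demo mentioned above needs no code).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly hosted Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a consumer GPU with roughly 8 GB of memory usually suffices

image = pipe("desk in the middle of the ocean vaporwave").images[0]
image.save("vaporwave_desk.png")
```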

But it strayed into gross territory with a different prompt: “beautiful person floating through a colourful nebula”.

Stable Diffusion is like OpenAI’s DALL-E 1/2 and Google’s Imagen and Parti but with two crucial differences: it’s capable of image-to-image (img2img) generation as well, and it’s open source.

The img2img feature is particularly mind-blowing because it allows users to describe the scene using text and then guide the Stable Diffusion AI with a little bit of their own art. Even a drawing on MS Paint with a few colours will do. And while OpenAI and Google hold their cards very close to their chests, with the latter refusing to release Imagen or Parti even in private betas, Stability.ai has – in keeping with its vision to democratise AI – opened Stable Diffusion for tinkering and augmentation by developers en masse. Even the ways in which Stable Diffusion has been released are important: trained developers can work directly with the code while untrained users can access the model in their browsers, without any code, and start producing images. In fact, you can download and run the underlying model on your own system, provided it has slightly higher-end specs. Users have already created ways to plug it into photo-editing software like Photoshop.
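As a rough illustration of what the img2img workflow looks like in code – again assuming the diffusers library, with illustrative file names and parameter values – a doodle plus a text prompt go in and a finished image comes out:

```python
# A minimal img2img sketch (diffusers assumed). A rough MS Paint-style doodle
# guides the composition; the text prompt guides the content.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

doodle = Image.open("rough_sketch.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="a castle on a cliff at sunset, oil painting",
    image=doodle,
    strength=0.75,       # how far the model may stray from the doodle
    guidance_scale=7.5,  # how strongly to follow the text prompt
).images[0]
result.save("castle.png")
```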

Stable Diffusion uses a diffusion model: a filter (essentially an algorithm) that takes noisy data and progressively de-noises it. In incredibly simple terms, researchers take an image and in a step-wise process add more and more noise to it. Next they feed this noisy image to the filter, which then removes the noise from the image in a similar step-wise process. You can think of the image as a signal, like the images you see on your TV, which receives broadcast signals from a transmitter located somewhere else. These broadcast signals are basically bundles of electromagnetic waves with information encoded into the waves’ properties, like their frequency, amplitude and phase. Sometimes the visuals aren’t clear because some other undesirable signal has become mixed up with the broadcast signal, leading to grainy images on your TV screen. This undesirable information is called noise.
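A toy sketch of that step-wise noising process – assuming a standard DDPM-style schedule, with illustrative numbers rather than Stable Diffusion’s actual settings – might look like this:

```python
# A toy sketch of the forward (noising) process described above: at each step a
# little Gaussian noise is mixed into the image, until almost nothing of the
# original signal remains. The schedule below is illustrative, not the real one.
import numpy as np

def add_noise_stepwise(image, num_steps=1000):
    betas = np.linspace(1e-4, 0.02, num_steps)  # amount of noise added per step, growing slowly
    x = image.copy()
    noisy_versions = []
    for beta in betas:
        noise = np.random.normal(0.0, 1.0, size=x.shape)       # Gaussian noise
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise    # mix a little noise in
        noisy_versions.append(x)
    return noisy_versions  # by the last step, x is nearly pure noise

clean = np.random.rand(8, 8)  # a toy "image"
steps = add_noise_stepwise(clean)
```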

When the noise waveform resembles that of a bell curve, a.k.a. a Gaussian function, it’s called Gaussian noise. Now, if we know the manner in which noise has been added to the image in each step, we can figure out what the filter needs to do to de-noise the image. Every Gaussian function can be characterised by two parameters, the mean and the variance. Put another way, you can generate different bell-curve-shaped signals by changing the mean and the variance in each case. So the filter effectively only needs to figure out what the mean and the variance in the noise of the input image are, and once it does, it can start de-noising. That is, Stable Diffusion is (partly) the filter here. The input you provide is the noisy image. Its output is the de-noised image. So when you supply a text prompt and/or an accompanying ‘seed’ image, Stable Diffusion just shows off how well it has learnt to de-noise your inputs.
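A correspondingly toy sketch of one de-noising step, under the same simplifying assumptions, just inverts that mixing operation once the added noise has been estimated:

```python
# A toy sketch of one de-noising step: if the filter can estimate the noise that
# was added, it can subtract that estimate back out and undo the forward step.
import numpy as np

def denoise_step(noisy, predicted_noise, beta):
    # Invert one forward step: x_t = sqrt(1 - beta) * x_{t-1} + sqrt(beta) * noise
    return (noisy - np.sqrt(beta) * predicted_noise) / np.sqrt(1.0 - beta)

# Sanity check with the toy forward step: using the true noise recovers the image.
x_prev = np.random.rand(8, 8)
true_noise = np.random.normal(size=x_prev.shape)
beta = 0.02
x_t = np.sqrt(1 - beta) * x_prev + np.sqrt(beta) * true_noise
assert np.allclose(denoise_step(x_t, true_noise, beta), x_prev)
```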

Obviously, when millions of people use Stable Diffusion, the filter is going to be confronted with too many mean-variance combinations for it to be able to directly predict them. This is where an artificial neural network (ANN) helps. ANNs are data-processing systems set up to mimic the way neurons work in our brain, by combining different pieces of information and manipulating them according to their knowledge of older information. The team that built Stable Diffusion trained its model on 5.8 billion image-text pairs found around the internet. An ANN was then programmed to learn from this dataset how texts and images correlate, as well as how images correlate with other images.
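A toy sketch of what ‘learning how texts and images correlate’ boils down to – with stand-in random vectors here in place of real trained encoders – is scoring how close a caption’s embedding sits to an image’s embedding:

```python
# A toy sketch of text-image "correlation": map a caption and an image into the
# same vector space and score how well they match. Real systems learn their
# encoders from billions of pairs; the vectors below are stand-in placeholders.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

text_vector = np.random.rand(512)   # stand-in for a trained text encoder's output
image_vector = np.random.rand(512)  # stand-in for a trained image encoder's output

score = cosine_similarity(text_vector, image_vector)  # higher = caption and image agree more
```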

To keep this exercise from getting out of hand, each image and text input is broken down into certain components, and the machine is instructed to learn correlations only between these components. Further, the researchers used an ANN model called an autoencoder. Here, the ANN encodes the input in its own representation, using only the information that it has been taught to consider important. This intermediate is called the bottleneck layer. The network then decodes only the information present in this layer to produce the de-noised output. This way, the network also learns what about the input is most important. Finally, researchers also guide the ANN by attaching weights to different pieces of information: that is, the system is informed that some pieces are to be emphasised more than others, so that it acquires a ‘sense’ of what is less and more desirable.
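A bare-bones autoencoder, written here in PyTorch purely for illustration (the layer sizes are assumptions, not Stable Diffusion’s actual architecture), shows the bottleneck idea: the network has to squeeze its input through a small layer and reconstruct it, so it learns which features matter most.

```python
# A bare-bones autoencoder sketch (PyTorch assumed): encode to a small bottleneck,
# decode back out, and learn in the process which features of the input matter.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, bottleneck_dim),   # the bottleneck layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # compressed representation
        return self.decoder(z)   # reconstruction from the bottleneck alone

model = TinyAutoencoder()
reconstruction = model(torch.rand(1, 784))
```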

By snacking on all those text-image pairs, the ANN effectively acquires its own basis for deciding, when it’s presented with a new bit of text and/or a new image, what the mean and variance might be. Combine this with the filter and you get Stable Diffusion. (I should point out again that this is a very simple explanation and that parts of it may well be simplistic.)

Stable Diffusion also comes with an NSFW filter built-in, a component called Safety Classifier, which will stop the model from producing an output that it deems harmful in some way. Will it suffice? Probably not, given the ingenuity of trolls, goblins and other bad-faith actors on the internet. More importantly, it can be turned off, meaning Stable Diffusion can be run without the Safety Classifier to produce deepfakes that are various degrees of disturbing.

Recommended here: Deepfakes for all: Uncensored AI art model prompts ethics questions.

But the problems with Stable Diffusion don’t lie only in the future, immediate or otherwise. As I mentioned earlier, to create the model, Stability.ai & co. fed their machine 5.8 billion text-image pairs scraped from the internet – without the consent of the people who created those texts and images. Because Stability.ai released Stable Diffusion in toto into the public domain, it has been experimented with by tens of thousands of people, at least, and developers have plugged it into a rapidly growing number of applications. This is to say that even if Stability.ai is forced to pull the software because it didn’t have the license to those text-image pairs, the cat is out of the bag. There’s no going back. A blog post by LAION only says that the pairs were publicly available and that models built on the dataset should thus be restricted to research. Do you think the creeps on 4chan care? Worse yet, the jobs of the very people who created those text-image pairs are now threatened by Stable Diffusion, which can – with some practice to get your prompts right – produce exactly what you need, no illustrator or photographer required.

Recommended here: Stable Diffusion is a really big deal.

The third interesting thing about Stable Diffusion, after its img2img feature + “deepfakes for all” promise and the questionable legality of its input data, is the license under which Stability.ai has released it. AI analyst Alberto Romero wrote that “a state-of-the-art AI model” like Stable Diffusion “available for everyone through a safety-centric open-source license is unheard of”. This is the CreativeML Open RAIL-M license. Its preamble says, “We believe in the intersection between open and responsible AI development; thus, this License aims to strike a balance between both in order to enable responsible open-science in the field of AI.” Attachment A of the license spells out the restrictions – that is, what you can’t do if you agree to use Stable Diffusion according to the terms of the license (quoted verbatim):

“You agree not to use the Model or Derivatives of the Model:

  • In any way that violates any applicable national, federal, state, local or international law or regulation;
  • For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
  • To generate or disseminate verifiably false information and/or content with the purpose of harming others;
  • To generate or disseminate personal identifiable information that can be used to harm an individual;
  • To defame, disparage or otherwise harass others;
  • For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
  • For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
  • To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  • For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
  • To provide medical advice and medical results interpretation;
  • To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).”

Enforcing these restrictions will impose a heavy burden on law enforcement around the world, and I don’t think Stability.ai took the corresponding stakeholders into confidence before releasing Stable Diffusion. It should also go without saying that because the license chooses to colour within the lines of the laws of respective countries, a country that doesn’t recognise X as a crime will also fail to recognise harm in the harassment of victims of X – now with the help of Stable Diffusion. And the vast majority of these victims are women and children, already disempowered by economic, social and political inequities. Is Stability.ai going to deal with these people and their problems? I think not. But as I said, the cat’s already out of the bag.

Injustice ex machina

There are some things I think about but struggle to articulate, especially in the heat of an argument with a friend. Cory Doctorow succinctly captures one such idea here:

Empiricism-washing is the top ideological dirty trick of technocrats everywhere: they assert that the data “doesn’t lie,” and thus all policy prescriptions based on data can be divorced from “politics” and relegated to the realm of “evidence.” This sleight of hand pretends that data can tell you what a society wants or needs — when really, data (and its analysis or manipulation) helps you to get what you want.

If you live in a country ruled by a nationalist government tending towards the ultra-nationalist, you’ve probably already encountered the first half of what Doctorow describes: the championing of data, and quantitative metrics in general, the conflation of objectivity with quantification, the overbearing focus on logic and mathematics to the point of eliding cultural and sociological influences.

Material evidence of the latter is somewhat more esoteric, yet more common in developing countries, where the capitalist West’s influence vis-à-vis consumption and the (non-journalistic) media is distinctly more apparent – and it is impossible to unsee once you’ve seen it.

Notwithstanding the practically unavoidable consequences of consumerism and globalisation, the aspirations of the Indian middle and upper classes are propped up chiefly by American and European lifestyles. As a result, it becomes harder to tell the “what society needs” and the “get what you want” tendencies apart. Those developing new technologies to (among other things) enhance their profits arising from this conflation are obviously going to have a harder time seeing it and an even harder time solving for it.

Put differently, AI/ML systems – at least those in Doctorow’s conception, in the form of machines adept at “finding things that are similar to things the ML system can already model” – born in Silicon Valley have no reason to assume a history of imperialism and oppression, so the problems they are solving for are off-target by default.

But there is indeed a difference, and not infrequently the simplest way to uncover it is to check what the lower classes want. More broadly, what do the actors with the fewest degrees of freedom in your organisational system want, assuming all actors already want more freedom?

They – as much as others, and at the risk of treating them as a monolithic group – may not agree that roads need to be designed for public transportation (instead of cars), that the death penalty should be abolished or that fragmenting a forest is wrong but they are likely to determine how a public distribution system, a social security system or a neighbourhood policing system can work better.

What they want is often what society needs – and although this might predict the rise of populism, and even anti-intellectualism, it is nonetheless a sort of pragmatic final check when it has become entirely impossible to distinguish between the just and the desirable courses of action. I wish I didn’t have to hedge my position with the “often” but I remain unable with my limited imagination to design a suitable workaround.

Then again, I am also (self-myopically) alert to the temptation of technological solutionism, and acknowledge that discussions and negotiations are likely easier, even if messier, to govern with than ‘one principle to rule them all’.

The climate and the A.I.

A few days ago, the New York Times and other major international publications sounded the alarm over a new study that claimed various coastal cities around the world would be underwater to different degrees by 2050. However, something seemed off; it couldn’t have been straightforward for the authors of the study to plot how much sea-level rise would affect India’s coastal settlements. Specifically, the numbers required to calculate how many people in a city would be underwater aren’t readily available in India, if they exist at all. Without this bit of information, it’s easy to disproportionately over- or underestimate certain outcomes for India on the basis of simulations and models. And earlier this evening, as if on cue, this thread appeared:

This post isn’t a declaration of smugness (although that is tempting) but an attempt to turn your attention to one of Palanichamy’s tweets in the thread:

One of the biggest differences between the developed and the developing worlds is clean, reliable, accessible data. There’s a reason USAfacts.org exists whereas in India, data discovery is as painstaking a part of the journalistic process as is reporting on it and getting the report published. Government records are fairly recent. They’re not always available at the same location on the web (data.gov.in has been remedying this to some extent). They’re often incomplete or not machine-readable. Every so often, the government doesn’t even publish the data – or changes how it’s obtained, rendering the latest dataset incompatible with previous versions.

This is why attempts to model Indian situations and similar situations in significantly different parts of the world (i.e. developed and developing, not India and, say, Mexico) in the same study are likely to deviate from reality: the authors might have extrapolated the data for the Indian situation using methods derived from non-native datasets. According to Palanichamy, the sea-level rise study took AI’s help for this – and herein lies the rub. With this study itself as an example, there are only going to be more – and potentially more sensational – efforts to determine the effects of continued global heating on coastal assets, whether cities or factories, paralleling greater investments to deal with the consequences.

In this scenario, AI, and algorithms in general, will only play a more prominent part in determining how, when and where our attention and money should be spent, and in controlling the extent to which people think scientists’ predictions and reality are in agreement. Obviously the deeper problem here lies with the entities that are responsible for collecting and publishing the data but aren’t doing so. However, given how the climate crisis is forcing the world’s governments to rapidly globalise their action plans, the developing world needs to inculcate the courage and clarity to slow down, and scrutinise the AI and other tools scientists use to offer their recommendations.

It’s not a straightforward road from having the data to knowing what it implies for a city in India, a city in Australia and a city in Canada.

If AI is among us, would we know?

Our machines could become self-aware without our knowing it. We need a better way to define and test for consciousness.

… an actual AI might be so alien that it would not see us at all. What we regard as its inputs and outputs might not map neatly to the system’s own sensory modalities. Its inner phenomenal experience could be almost unimaginable in human terms. The philosopher Thomas Nagel’s famous question – ‘What is it like to be a bat?’ – seems tame by comparison. A system might not be able – or want – to participate in the classic appraisals of consciousness such as the Turing Test. It might operate on such different timescales or be so profoundly locked-in that, as the MIT cosmologist Max Tegmark has suggested, in effect it occupies a parallel universe governed by its own laws.

The first aliens that human beings encounter will probably not be from some other planet, but of our own creation. We cannot assume that they will contact us first. If we want to find such aliens and understand them, we need to reach out. And to do that we need to go beyond simply trying to build a conscious machine. We need an all-purpose consciousness detector.

Interesting perspective by George Musser – that of a “consciousness creep”. In the larger scheme of things (of very-complex things in particular), isn’t the consciousness creep statistically inevitable? Musser himself writes that “despite decades of focused effort, computer scientists haven’t managed to build an AI system intentionally”. As a result, perfectly comprehending the composition of the subsystem that confers intelligence upon the whole is likelier to happen gradually – as we’re able to map more of the system’s actions to their stimuli. In fact, until the moment of perfect comprehension, our knowledge won’t reflect a ‘consciousness creep’ but a more meaningful, quantifiable ‘cognisance creep’ – especially if we already acknowledge that some systems have achieved self-awareness and are able to compute intelligently.

The Sea

Big Fish walked into a wall. His large nose tried to penetrate the digital concrete first. Of course, it went in for a second, but Marcus recomputed the algorithm, and it jumped back out. The impact of its return threw Big Fish’s head back, and with it, his body stumbled back, too. The wall hadn’t been there before. Its appearance was, as far as Big Fish was concerned, inexplicable. And so, he turned around to check if other walls had been virtualized as well. Nope. Just this one. What business does a wall have being where it shouldn’t belong? But here it was.

He turned into the door on his left and looked around. Nothing was amiss. He walked back out and tried another door on the opposite side. All desks were in place, computers were beeping, people were walking around, not minding his intrusion. It was surreal, but Big Fish didn’t mind. Surreal was normal. That’s how he liked them to be. He walked back out. There the wall was again. Has Marcus got something wrong? He poked a finger into the smooth white surface. It was solid, just like all walls were.

He turned back and walked the way he had come. Right, left, right, left, right, left, left, down a flight of stairs, straight out, left, left, left, straight out once more, left, right… and there the canteen was. The building was the way it had once been. Marcus was alright, which meant the wall had to be, too. But it couldn’t be – it didn’t belong there. He walked back up once more to check. Left, right, straight, right, right, right, straight, up a flight of stairs, right, right, left, right, left, right… and there’s the bloody wall again!

Big Fish had to log out. He walked into the Dump. The room was empty. No queues were present ahead of the Lovers, no bipolar behavior, no assurances being whispered to the new kids or hysterical religious clerks talking about being born again. Just him, so he walked up to the first of the two Lovers, and stood under it. When he decided he was ready, Big Fish pushed the green button next to him. The green guillotine came singing down.

The blade of the machine was so sharp, it whistled as it parted an invisible curtain of air. The screech, however, was music to Big Fish’s ears. It meant exiting the belly of Marcus. It meant reality was coming. As soon as the edge touched his head, Marcus came noiselessly to life in the Dump. His thoughts, memories, feelings, emotions, scars, scalds, bruises, cuts, posture, and many other state-properties besides, were simultaneously and almost instantaneously recorded as a stream of numbers. Once the input had been consummated with an acknowledgment, he vanished.

When he stepped out of his booth, Big Fish saw Older Fish staring at him from across the road. His stare was blank, hollow, waiting for the first seed of doubt from Big Fish. Big Fish, however, didn’t say anything. Older Fish stared for a minute more, and then walked away. Big Fish continued to watch Older Fish, even as he walked away. Had he seen the wall, too? Just to make sure, he began to follow the gaunt, old man. The stalking didn’t last long, however.

He watched as Older Fish turned around and pointed a gun at Big Fish’s temple. The barrel of the weapon was made of silver. My gun. How did Older Fish find my gun? A second later, Older Fish pointed the weapon into his own mouth and fired. Flecks of flesh, shards of bone, shavings of hair, dollops of blood… all that later, Older Fish fell to the ground. In a daze, Big Fish ran up to the still figure and stared out. Older Fish’s eyes were open, the skin around them slowly loosening, the wrinkles fading.

Big Fish saw them gradually droop off. Time had ended. The world was crucified to the splayed form of Older Fish. The commotion around him happened in a universe all of its own. The lights flashed around him, seemed to bend away from his bent form, curving along the walls of their reality, staying carefully away from his arrested one. The sounds came and went, like stupid matadors evading raging bulls, until the tinnitus came, silencing everything else but the sound of his thoughts. Only silence prevailed.

When darkness settled, Big Fish was able to move again. My friend, he lamented. He opened his eyes and found himself sitting in a moving ambulance. Where are we going? There was no answer. Big Fish realized he was thinking his questions. When he tried, though, his tongue refused to loosen, to wrap itself around the vacant bursts of air erupting out of his throat. Am I mute? He tried again.

“Where are… we…”

“To the Marxis HQ.”

Marxis HQ. The cradle of Marcus. The unuttered mention of that name brought him back. What were the chances of walking into a wall-that-shouldn’t-have-been-there and Older Fish killing himself? The van swung this way and that. Big Fish steadied himself by holding on to the railing running underneath the windows. His thoughts, however, were firmly anchored to the wall. Big Fish was sure it had something to do with Older Fish’s suicide.

Had Older Fish seen the wall? If he had, why would he have killed himself? Did it disturb him? When was the last time a wall disturbed anyone to their death? Could Older Fish have seen anything on the other side of the wall? Did Older Fish walk into the space on the other side of the wall? What could have been on the other side of the wall? Had Marcus done something it shouldn’t have? Was that why Big Fish was being ferried to the Marxis?

“I don’t know.”

“Huh?”

“Mr. ——-, the reasons behind your presence being required at Marxis HQ were not divulged to us.”

I’m not mute, then. Big Fish laughed. He didn’t know himself to be thinking out loud. The others all looked at him. Big Fish didn’t bother. He settled back to think of Marcus once more. At first, his thoughts strained to comprehend why Marcus was the focus of their attention. Simultaneously, Older Fish’s death evaded the grasp of his consciousness. In the company of people, he felt he had to maintain composure. Composure be damned. Yet, tears refused to flow. Sorrow remained reluctant.

The van eased to a halt. A nurse stepped up and opened the door, and Big Fish got down. One of the medics held on to his forearm and led him inside a large atrium. After a short walk that began with stepping inside a door and ended with stepping out of another – What was that? Did I just step through a wall? – Big Fish was left alone outside a door: “Armada” it said. He opened the door and looked inside. A long, severely rectangular hall yawned in front of him. At the other end, almost a hundred feet away, sat a man in a yellow chair, most of his body hidden behind a massive table.

“Please come in. My name is Marxis Maccord. I apologise for this inconvenience, but your presence here today is important to us. I know what you’re thinking, Mr. ———, but before you say anything, let me only say this: what happened had both nothing and everything to do with Marcus. It had nothing to do with Marcus because it wasn’t Marcus’ fault you walked into a wall and almost broke your virtual nose. It had nothing to do with Marcus because it wasn’t Marcus that precipitated Mr. ———-’s death. At the same time, it had everything to do with Marcus because, had it not been for Marcus, you wouldn’t have walked into a wall. Had it not been for Marcus, Mr. ———- wouldn’t have killed himself.”

Silence. What is this dolt trying to tell me? That they’re not going to take responsibility for what Marcus did? Why can’t they just get to the point, the idiots?! Bah! “I understand what you’re saying, Mr. Maccord. You’re saying you’re going to let Marxis Corp. be held responsible for Marcus’s actions, and that’s fine by–”

“Oh, Mr. ————, I’m not saying that at all! In fact, I’m not going to assume responsibility either. You see, Mr. ————, I’m going to let you decide. I’m going to let you decide on the basis of what you hear in this room as to who’s culpable. Then… well, then, we’ll take things from there, shall we?”

Ah! There it is! Blah, blah, blah! We didn’t do this, we didn’t do that! Then again, we know this could’ve been done, that could’ve been done. Then, shit happens, let us go. Your call now. Bullshit! “Mr. Maccord, if you will excuse me, I have made my decision and would like for you to listen to it. I don’t care what Marcus did or didn’t do… and even if I want to figure it out, I don’t think I want to start here.”

Big Fish turned to leave. “Mr. ———–, your friend put the wall there because it scared him that someone might find something out.” Big Fish stopped just before the door. “Mr. ————, the wall wasn’t there a second before you walked into it. It was computed into existence by your friend because you were trespassing into his thoughts. If you had crossed over into the other side, you would have witnessed something… something we can only imagine would have been devastating for him in some way.”

Marxis Maccord stood up. With a start, Big Fish noticed that the man wasn’t standing on his legs. Instead, his torso, his neck and his head were floating in the air. From the other end of the hall, they looked like a macabre assemblage of body parts, a jigsaw held upright by simple equilibrium, the subtle cracks visible along the seam of their contours in the light borrowed from the city that towered around Marxis Corp. Him? It? It. “Mr. ————, you are downstairs, standing in booth SP-8742, your thoughts logged out of reality and into this virtual one.”

Big Fish hadn’t said anything for a while. The transition had been so smooth. Big Fish hadn’t noticed a thing when he entered the first door. It was like walking through, past, a veil. It was an effortless endeavour, a flattering gesture that drew the mind out of its body. Maccord continued to talk. “Say hello to Marcus II, or, as we call it, MarQ. When you stepped into that first door, your reality was suspended just as ours took over. Once the switch was complete, your limp body was laid on a bed and transferred down a shaft 3,000 feet deep, under this building. You are now lying sound asleep, dreaming about this conversation… if that.”

“In a world where moving in and out of reality is so easy, picking one over the other simply on the basis of precedence will gradually, but surely, become a meaningless argument. It is antecedence that will make sense, more and more sense. Your friend, Mr. ————, understood that.”

Big Fish finally had something to say. “And why is that important, Mr. Maccord?” He felt stupid about asking a question, after having asked it, the answer to which might have come his way anyway. However, Big Fish was being left with a growing sense of loneliness. He was feeling like a grain of salt in the sea, moving with currents both warm and cold, possessing only a vintage power to evoke memories that lay locked up somewhere in the folds of the past. The sea couldn’t taste him, Big Fish couldn’t comprehend the sea. They had devoured each other. They were devouring each other.

Maccord responded quickly. “Marcus is the supercomputer that computes the virtual reality of your old organization into existence. You log in and out every day doing work that exists only as electromagnetic wisps in the air, shooting to and fro between antennae, materialised only when called upon. Marcus tracks all your virtual initiatives, transactions, and assessments. You know all this. However, what you don’t know is that the reality Marcus computes is not based on extant blueprints or schematics. It is based on your memories.”

At that moment, it hit Big Fish. He had wondered many a time about how Marcus knew everything about the place where he worked. The ability to log in and out of reality – or realities? – gave the machine access to people’s memories. This means the architecture is the least common denominator of all our memories of the place. “You’re right.” Maccord’s observation startled him. “You see, Mr. ———–, MarQ has computed me, and MarQ has computed you. However, I own MarQ, which means it answers to me. Before it transliterates your thoughts into sounds, they are relayed to me.”

He can read my thoughts! “Oh yes, Mr. ————, I well can. And now that I know that you know that the place is the least common denominator of all your knowledge, the wall could’ve been there only if all of you had known about it. However, the wall hadn’t been there in the first place. Which meant Marcus had computed something that had happened fairly recently. Then again, if the LCD hypothesis is anything to go by, then the wall shouldn’t have been there because you continue to be surprised about its presence. Ergo, on the other side of the wall was something you already knew about, but not yet as the source of a problem.”

It was hard for Big Fish to resist thinking anything at all at first, but he did try. When he eventually failed, questions flowed into his head like water seeping through cracks in a bulging dam, simply unable to contain a flooding river. The questions, at first, cascaded through in streamlined sheets, and then as gurgling fountains, and then as jets that frayed into uncertainty, and then as a coalition that flooded his mind.

Big Fish understood this was the end of the “interaction”, that Marxis Maccord had been waiting for this to happen since the beginning. Everyone would have wanted to know why Older Fish killed himself. To get to the bottom of that, and to exculpate Marcus, a reason had to be found. Marcus had known we’d come to this. He let me hit the wall late. He let me know that none else found it odd because they’d been used to it. Marcus had let me be surprised. Marcus knew something was going to happen. And when it did, Marcus knew I’d be brought into its hungry womb to be judged… to be devoured by the sea.

“Mr. Maccord?”

“Yes, Mr. ———-?”

“Take what you need.”

“I already am, Mr. ———-.”