(Optional reading before you begin: ‘The billionaires’ eugenics project: how Epstein infiltrated Harvard, muzzled the humanities and preached master-race science’, The Nerve)
Science writer Philip Ball describes “nerd tunnel vision” as the rationalisation offered by scientists who maintained ties with Jeffrey Epstein after his 2008 conviction for soliciting underage sex, hinting at something more calculated than mere oversight. “Nerd tunnel vision is a defining feature of much of the Edge discourse,” Ball writes, referring to Epstein consort John Brockman’s salon for “third culture” intellectualism: “moral obtuseness; a determination to win the argument rather than to listen and ponder; a tendency to fabulate improbable futures from narrow ‘rational’ logic; ignorance of and contempt for other ways of seeing the world.” Ball is in effect describing not people who were unaware or who failed to notice, but people who deliberately, with eyes wide open, chose what matters to them.
“But they were just socially awkward,” you say, “caught up in their work, unable to see the forest for the trees.” That’s the original description of what it meant to be a ‘nerd’. But when evolutionary biologist Robert Trivers emailed registered sex offender and human trafficker Jeffrey Epstein in 2012 about “a wonderful lunch, a REAL pleasure… quite apart from the bevy of beauties”, when cosmologist Lawrence Krauss persistently begged Epstein for legal advice on the sexual harassment charges he faced, or when MIT Media Lab director Joichi Ito concealed funding from Epstein, are we really witnessing naïveté, or are we watching sophisticated actors make sophisticated calculations about what they can get away with?
As Epstein’s ties to more and more scientists, technologists, and venture capitalists become apparent, there’s also a diagnosis doing the rounds that Silicon Valley’s techno-elites, a.k.a. the “tech-bros”, are simply mistaken in their embrace of transhumanism, longevity research, and what increasingly looks like repackaged eugenics. This diagnosis flatters us — the diagnosticians — by positioning us as the clear-eyed ones who saw the cautionary tales of a bygone era for what they were. But I think the unscrupulous scientists and the tech-bros saw them too, and proceeded to run a different cost-benefit analysis from the one everyone else ran.
To see why, it’s important to see first that the story Silicon Valley tells about itself — of garage startups and disruption — obscures a more deliberate genealogy. Computer scientist Timnit Gebru and philosopher Émile Torres coined the label ‘TESCREAL’ as a critical construct (not a self-identification; many in Silicon Valley reject the bundling) to describe an overlapping cluster of ideologies: transhumanism, extropianism, singularitarianism, cosmism, (Bay Area internet) rationalism, effective altruism, and longtermism — many of which sank roots in the 1990s.
Extropianism and organised transhumanism have been adjacent to the Bay Area for a while, with newsletters, salons, and institutes linking human enhancement and “self-transformation” to an explicitly technologist ethos liberated from worrying about limits, whether material or social. This worldview fit neatly with a Valley culture already comfortable with narratives of radical innovation and libertarian politics. In the 2000s, singularitarian ideas also moved from niche futurist conclaves into mainstream tech discourse via high-profile evangelists and Silicon Valley institutions. The 2010s saw rationalist and effective altruist networks overlap with AI labs, venture capital, and philanthropy, specialising in translating moral philosophy and speculative technical futures into funding priorities and institutional agendas. By the 2020s, once frontier AI became the Valley’s central product, these once-semi-separate strands of thought started to resemble a unified milieu.
This baleful setup also has a prehistory that complicates any argument that it has merely been stumbling in the dark when it makes choices we wouldn’t. One part of that prehistory is the mid-20th-century scientific elitism that shaded into eugenics. William Shockley co-invented the transistor and is a foundational figure in Silicon Valley’s early industrial formation. Later, while at Stanford University, Shockley became publicly associated with racist and eugenicist claims. And while Shockley’s views were on the fringe even then, transhumanism’s closest antecedent is Anglo-American eugenics; the term itself was first used by British eugenicist Julian Huxley. So when Nick Bostrom — a central figure in these movements, whose work at Oxford University’s Future of Humanity Institute exerted an explicit pull on Silicon Valley elites and funders — was revealed in 2023 to have sent an email in 1996 stating his belief in racial differences in intelligence, should we treat this as an aberration or as a data point in a larger pattern?
A second wave of the prehistory, from the late 1980s to the 2000s, is Silicon Valley’s own flavour of transhumanism: cryonics, what it called “morphological freedom”, and brain-computer futures, garnished with the explicit belief that technology should let individuals redesign their bodies and minds. The Extropy Institute was an early node in this movement, and its idioms fit Silicon Valley’s entrepreneurial culture of working without constraints. By the mid- to late 2000s, the ‘singularity’ — the hypothetical future moment when AI surpasses human intelligence and triggers rapid, uncontrollable technological growth — had also become a popular rallying point. The third and final wave kicked off in the 2010s and is still surging: from merely talking about living forever, the tech-bros moved to setting up labs, biotech pipelines, and ecosystems for “consumer biohacking”, emblematised by Alphabet’s Calico Labs. In the 2020s, finally, conversations about reproduction and eugenics moved from fringe rhetoric to ‘gray-zone’ products and venture-backed firms.
Today there are companies marketing expanded embryo screening not just for severe disease risks but for probabilistic traits, including — controversially — cognitive outcomes. An October 2024 investigation by The Guardian described a US startup selling embryo screening framed around gains in IQ, raising concerns about “liberal eugenics”. The product, in effect: making dubious genetic advantages selectable by those who can pay. A November 2025 report in the Wall Street Journal described another San Francisco startup pursuing embryo gene-editing research despite legal prohibitions, backed by prominent tech investors and scouting for permissive jurisdictions.
None of these are mistaken decisions, to be sure. The tech-bros may believe that what appears dystopian through the moral lens of the 2020s could be normalised or even celebrated by the society of the 2040s. Or they may understand these are cautionary tales but believe the aspects warranting caution are either exaggerated or manageable with better execution. Many of these figures are also staunch materialists and techno-determinists who don’t harbour the humanistic assumptions underlying most science fiction writing. When Aldous Huxley warns them about a society engineering away suffering using pleasure, surveillance, control, and punishment, they may genuinely see that as solving a problem rather than creating one. The caution, after all, depends on valuing things like struggle, authenticity, and inefficiency, of which they’re usually dismissive. Which is why Mark Zuckerberg spending $10 billion to attempt to create the ‘Metaverse’ isn’t a failure of imagination but the success of a different imagination.
Some tech-bros also recognise exactly where this leads and view the resulting instability, disruption, and concentration of power as features rather than bugs, because periods of chaos create opportunities for those positioned to profit from them. They might also believe that by being aware of the cautionary tales they’ve inoculated themselves against the specific failure modes the tales warned of. “We’ve read 1984,” you might hear them say, “so obviously we won’t make those mistakes.” This is of course hubris, but importantly it’s not stupidity.
Which brings us back to Jeffrey Epstein and the scientists who orbited him. Ball wrote that Epstein liked to surround himself with “a certain type of male scientific ‘intellectual’: arrogant, entitled, ‘anti-woke’ and often misogynist, typically late middle-aged and Ivy League and on the lookout for young women to impress and sleep with.” Many of Epstein’s pet scientists were supplied by literary agent John Brockman, who in the 1990s turned scientists into literary superstars who commanded large book advances and wrote authoritative-sounding op-eds. Brockman styled this crew as heralds of a ‘Third Culture’ centred on the Edge Foundation, of which Epstein was the major funder.
Some of those in Brockman’s orbit were, and remain, very insightful intellectuals, and by no means all had Epstein connections. Others severed those links after Epstein’s first conviction. But we can’t ignore the overlap, in themes as well as personnel, between Edge and Epstein’s island. As Ball put it:
Celebrity culture always has a coarsening effect on scientific discourse itself. Flashy simplicity trumps thoughtful complexity: these ‘thought leaders’ often make claims that leave real experts with their heads in their hands. Considered views on history and ethics become distractions. And there’s a politicized element: Edge culture intersects with the technofascist futurism of Silicon Valley libertarians, and laments about #MeToo, wokeism, and pushy feminists are a constant refrain in the email exchanges.
Epstein’s connections, the TESCREAL ideologies, the investments in longevity and embryo-selection startups, the pronatalist conferences, the eugenicist discourse — none of these is a separate issue. They’re one and the same problem, or perhaps different manifestations of the same underlying orientation: to treat human ‘limits’ as engineering problems, then fund private bets to overcome them, with little regard for the social harms accrued along the way.
Critics have also argued that these philosophies encourage the tech-bros to shift attention away from solving present humanitarian problems and towards speculative futures. The appeal works on Silicon Valley elites who fund institutes dedicated to this thinking because it lets them frame their anxieties about death, intelligence, biological limits, and control as moral imperatives that transcend democratic deliberation. William MacAskill’s longtermism, of which Elon Musk is so fond, frames our obligations in terms of billions of humans not yet born: how convenient, then, that those billions can’t vote, can’t organise, and can’t contradict the projections made on their behalf.
The pitfall in believing the tech-bros make the choices they do simply because they don’t know what you know is that it excuses us from confronting the possibility that Silicon Valley’s brightest minds have concluded that the engineered, surveilled, controlled, stratified future they envision is in fact the point. Perhaps they’ve decided that when history is written by the posthuman victors, today’s cautionary tales will look like Luddite panic. Perhaps they’ve calculated that by the time the negative externalities become undeniable, they’ll already have captured enough of the gains to insulate themselves from the consequences. Or maybe they genuinely believe they’re doing good, that longer lifespans and enhanced intelligence and space colonies are moral imperatives, and that anyone who can’t see this is simply thinking too small. In which case we’re not dealing with cynicism but with a totalising ideology that has convinced itself it holds the keys to human flourishing, one in which the fact that its benefits concentrate among people who already have the most power is treated as a happy coincidence.
That doesn’t mean we must ban all life extension research or stop developing AI — but we should stop treating these as purely technical pursuits and instead recognise that every choice about what to study is also a choice about what kind of future we want and who gets to decide. We should insist that technological sovereignty isn’t just the capacity to build things but the capacity to deliberate together about whether we should build them at all. And, finally, we should stop giving these actors the benefit of the doubt. They’re not naïve. They’re not mistaken. They understand perfectly well what they’re building and they’re building it anyway.