All posts tagged my work

Late last month, I was at Theorizing the Web in NYC to moderate Panel B3, “Bot Phenomenology,” a panel of people I was very grateful, and very lucky, to be able to bring together. Johnathan Flowers, Emma Stamm, and Robin Zebrowski were my interlocutors in a discussion about the potential nature of nonbiological phenomenology. Machine consciousness. What robots might feel.

I led them through with questions like “What do you take phenomenology to mean?” and “What do you think of the possibility of a machine having a phenomenology of its own?” We discussed differing definitions of “language,” “communication,” and “body,” though we unfortunately didn’t get to a conversation about how certain definitions of those terms would distinguish what counts as language between cats from a cat communicating by signalling to humans.

It was a really great conversation, and the livestream video is linked below (for now; it may go away at some point, to be replaced by a static YouTube link, and when I know that’s happened, I’ll update the links and embeds here).


Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. If you know me at all, you know this isn’t by any means the only team for which I play, or even the only way I think about the construction of our “teams,” and that comes up in our conversation. We talk a great deal about algorithms, bias, machine consciousness, culture, values, language, and magick, and the ways in which the nature of our categories deeply affects how we treat each other, human and nonhuman alike. It was an absolutely fantastic time.

From the page:

In this episode, Williams and Rushkoff look at the embedded biases of technology and the values programmed into our mediated lives. How has a conception of technology as “objective” blurred our vision to the biases normalized within these systems? What ethical interrogation might we apply to such technology? And finally, how might alternative modes of thinking, such as magick, the occult, and the spiritual help us to bracket off these systems for pause and critical reflection? This conversation serves as a call to vigilance against runaway systems and the prejudices they amplify.

As I put it in the conversation: “Our best interests are at best incidental to [capitalist systems] because they will keep us alive long enough for us to buy more things from them.” Following from that is the fact that we build algorithmic systems out of those capitalistic principles, and when you iterate out from there—considering all the attendant inequalities of these systems on the merely human scale—we’re in deep trouble, fast.

Check out the rest of this conversation to get a fuller understanding of how it all ties in with language and the occult. It’s a pretty great ride, and I hope you enjoy it.

Until Next Time.

So, many of you may remember that back in June of 2016, I was invited to the Brocher Institute in Hermance, Switzerland, on the shores of Lake Geneva, to take part in the Frankenstein’s Shadow Symposium sponsored by Arizona State University’s Center for Science and the Imagination as part of their Frankenstein Bicentennial project.

While there, I and a great many other thinkers in art, literature, history, biomedical ethics, philosophy, and STS got together to discuss the history and impact of Mary Shelley’s Frankenstein. Since that experience, the ASU team compiled and released a book project: A version of Mary Shelley’s seminal work that is filled with annotations and essays, and billed as being “For Scientists, Engineers, and Creators of All Kinds.”

[Image of the cover of the 2017 edited, annotated edition of Mary Shelley’s Frankenstein, “Annotated for Scientists, Engineers, and Creators of All Kinds.”]

Well, a few months ago, I was approached by the organizers and asked to contribute to a larger online interactive version of the book—to provide an annotation on some aspect of the book I deemed crucial to understand. As of now, there is a fully functional live beta version of the website, and you can see my contribution and the contributions of many others, there.

From the About Page:

Frankenbook is a collective reading and collaborative annotation experience of the original 1818 text of Frankenstein; or, The Modern Prometheus, by Mary Wollstonecraft Shelley. The project launched in January 2018, as part of Arizona State University’s celebration of the novel’s 200th anniversary. Even two centuries later, Shelley’s modern myth continues to shape the way people imagine science, technology, and their moral consequences. Frankenbook gives readers the opportunity to trace the scientific, technological, political, and ethical dimensions of the novel, and to learn more about its historical context and enduring legacy.

To learn more about Arizona State University’s celebration of Frankenstein’s bicentennial, visit

You’ll need to have JavaScript enabled and ad-blockers disabled to see the annotations, but it works quite well. Moving forward, there will be even more features added, including a series of videos, and the site will be the place to watch for all updates and changes.

I am deeply honoured to have been asked to be a part of this amazing project, over the past two years, and I am so very happy that I get to share it with all of you, now. I really hope you enjoy it.

Until Next Time.

So by now you’re likely to have encountered something about the NYT op-ed calling for a field of study that focuses on the impact of AI and algorithmic systems—a stance that elides the existence not only of the communications and media studies people who focus on this work, but of the entire disciplines of Philosophy of Technology and STS (rendered variously as “Science and Technology Studies” or “Science, Technology, and Society,” depending on a number of factors, though if you talk about STS, you’ll get responses from all of the above, about the same topics). While Dr. O’Neil has since tried to reframe the editorial as a call for businesses, governments, and the public to pay more attention to those people and groups, many have observed that no such argument exists anywhere in the article itself. Instead, what we have are lines claiming that academics (seemingly especially those in the humanities) are “asleep at the wheel.”

Instead of “asleep at the wheel” try “painfully awake on the side of the road at 5am in a part of town lyft and uber won’t come to, trying to flag down a taxi driver or hitchhike or any damn thing just please let me make this meeting so they can understand some part of what needs to be done.”* The former ultimately frames the humanities’ and liberal arts’ lack of currency and access as “well why aren’t you all speaking up more.” The latter gets more to the heart of “I’m sorry we don’t fund your departments or engage with your research or damn near ever heed your recommendations that must be so annoying for you oh my gosh.”

But Dr. O’Neil is not the only one to write or say something along these lines—that there is somehow no one out here doing the work of investigating algorithmic bias, or infrastructure/engineering ethics, or any number of other things that people in philosophy of technology and STS are definitely already out here talking about, or that someone should be. So I figured this would be, at the least, a good opportunity to share with you something discussing the relationship between science and technology, STS practitioners’ engagement with the public, and the public’s engagement with technoscience. Part 1 of who knows how many.

[Cover of the journal Techné: Research in Philosophy and Technology]

The relationship between technology and science is one in which each intersects with, flows into, shapes, and affects the other. Not only this, but both science and technology shape and are shaped by the culture in which they arise and take part. Viewed through the lens of the readings we’ll discuss, it becomes clear that many scientists and investigators at one time desired a clear-cut relationship between science and technology in which one flows from the other, with the properties of the subcategory fully determined by those of the framing category, and in which sociocultural concerns play no part.

Many investigators still want this clarity and certainty, but in the time since sociologists, philosophers, historians, and other investigators from the humanities and so-called soft sciences began looking at the history and contexts of the methods of science and technology, it has become clear that these activities do not work in such an even and easily rendered way. When we look at the work of Sergio Sismondo, Trevor J. Pinch and Wiebe E. Bijker, Madeleine Akrich, and Langdon Winner, we can see that the social dimensions and intersections of science, culture, technology, and politics are, and always have been, crucially entwined.

In Winner’s seminal “Do Artifacts Have Politics?” (1980), we can see a major step forward along the path toward a model which takes seriously the social construction of science and technology, and the ways in which we embed our values, beliefs, and politics into the systems we make. On page 127, Winner states,

The things we call “technologies” are ways of building order in our world… Consciously or not, deliberately or inadvertently, societies choose structures for technologies that influence how people are going to work, communicate, travel, consume, [etc.]… In the processes by which structuring decisions are made, different people … possess unequal degrees of power [and] levels of awareness.

By this, Winner means to say that everything we do in the construction of the culture of scientific discovery and technological development is modulated by the sociocultural considerations that get built into them, and those constructed things go on to influence the nature of society, in turn. As a corollary to this, we can see a frame in which the elements within the frame—including science and technology—will influence and modulate each other, in the process of generating and being generated by the sociopolitical frame. Science will be affected by the tools it uses to make its discoveries, and the tools we use will be modulated and refined as our understandings change.

Pinch and Bijker write very clearly about the multidirectional interactions of science, technology, and society in their 1987 piece, “The Social Construction of Facts and Artifacts” (in the volume The Social Construction of Technological Systems), using the history of the bicycle as their object of study. Through their investigation of the messy history of bicycles, “safety bicycles,” inflated rubber tires, bicycle racing, and PR ad copy, Pinch and Bijker show that science and technology aren’t clearly distinguished anymore, if they ever were. They show how scientific studies of safety were less influential on bicycle construction and adoption than the social perception of the devices, meaning that politics and public perception play a larger role in what gets studied, created, and adopted than we used to admit.

They go on to highlight a kind of multidirectionality and interpretive flexibility, which they say we achieve by looking at the different social groups that intersect with the technology, and the ways in which they do so (pg. 34). When we do this, we will see that each component group is concerned with different problems and solutions, and that each innovation made to address these concerns alters the landscape of the problem space. How we define the problem dictates the methods we will use and the technology that we create to seek a solution to it.

[Black and white figures comparing the frames of a Whippet Spring Frame bicycle (left) and a Singer Xtraordinary bicycle (right), from “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other” by Trevor J. Pinch and Wiebe E. Bijker, 1987]

Akrich’s 1992 “The De-Scription of Technical Objects” (published, perhaps unsurprisingly, in a volume coedited by Bijker) engages the moral valences of technological intervention, and the distance between intent in design and “on the ground” usage. In her investigation of how people in Burkina Faso, French Polynesia, and elsewhere make use of technology such as generators and light boxes, we again see a complex interplay between the development of a scientific or technological process and the public adoption of it. On page 221 Akrich notes, “…the conversion of sociotechnical facts into facts pure and simple depends on the ability to turn technical objects into black boxes. In other words, as they become indispensable, objects also have to efface themselves.” That is, in order for the public to accept scientific or technological interventions, those interventions had to become an invisible part of the framework of the public’s lives. Only when the public no longer had to think about these interventions did they become, paradoxically, “seen”: understood as “good” science and technology.

In Sismondo’s “Science and Technology Studies and an Engaged Program” (2008) he spends some time discussing the social constructivist position that we’ve begun laying out, above—the perspective that everything we do and all the results we obtain from the modality of “the sciences” are constructed in part by that mode. Again, this would mean that “constructed” would describe both the data we organize out of what we observe, and what we initially observe at all. From page 15, “Not only data but phenomena themselves are constructed in laboratories—laboratories are places of work, and what is found in them is not nature but rather the product of much human effort.”

But Sismondo also says that this is only one half of the picture, then going on to discuss the ways in which funding models, public participation, and regulatory concerns can and do alter the development and deployment of science and technology. On page 19 he discusses a model developed in Denmark in the 1980s:

Experts and stakeholders have opportunities to present information to the panel, but the lay group has full control over its report. The consensus conference process has been deemed a success for its ability to democratize technical decision-making without obviously sacrificing clarity and rationality, and it has been extended to other parts of Europe, Japan, and the United States…

This all merely highlights the fact that, if the public is going to be engaged, then the public ought to be as clear and critical as possible in its understanding of the exchanges that give rise to the science and technology on which they are asked to comment.

The non-scientific general public’s understanding of the relationship between science and technology is often characterized much as I described at the beginning of this essay. That is, it is often said that the public sees the relationship as a clear and clean move from scientific discoveries or breakthroughs to a device or other application of those principles. However, this casting does not take into account the variety of things that the public will often call technology, such as the Internet, mobile phone applications, autonomous cars, and more.

While there are scientific principles at play within each of those technologies, it still seems a bit bizarre to cast them merely as “applied science.” They are not all devices or other single physical instantiations of that application, and even those that are singular are the applications of multiple sciences, and also concrete expressions of social functions. Those concretions have particular psychological impacts, and philosophical implications, which need to be understood by both their users and their designers. Every part affects every other part, and each of those parts is necessarily filtered through human perspectives.

The general public needs to understand that every technology humans create will necessarily carry within it the hallmarks of human bias. Regardless of whether there is an objective reality at which science points, the sociocultural and sociopolitical frameworks in which science gets done will influence what gets investigated. Those same sociocultural and sociopolitical frameworks will shape the tools and instruments and systems—the technology—used to do that science. What gets done will then become a part of the scientific and technological landscape to which society and politics will then have to react. In order for the public to understand this, we have to educate about the history of science, the nature of social scientific methods, and the impact of implicit bias.

My own understanding of the relationship between science and technology is as I have outlined: a messy, tangled, multivalent interaction in which each component influences and is influenced by every other component, in near simultaneity. This framework requires a willingness to engage multiple perspectives and disciplines, and perhaps to reframe the normative project of science and technology into one that appreciates and encourages a multiplicity of perspectives, with no single direction of influence between science, technology, and society. Once people understand this—that science and technology generate each other while influencing and being influenced by society—we can do the work of engaging them in a nuanced and mindful way, working together to prevent the most egregious depredations of technoscientific development, or at least to respond nimbly to them as they arise.

But to do this, researchers in the humanities need to be heeded. In order to be heeded, people need to know that we exist, and that we have been doing this work for a very, very long time. The named field of Philosophy of Technology has been around for 70 years, and it in large part foregrounded the concerns taken up and explored by STS. Here are just a few names of people to look at in this extensive history: Martin Heidegger, Bruno Latour, Don Ihde, Ian Hacking, Joe Pitt, and more recently, Ashley Shew, Shannon Vallor, Robin Zebrowski, John P. Sullins, John Flowers, Matt Brown, Shannon Conley, Lee Vinsel, Jacques Ellul, Andrew Feenberg, Batya Friedman, Geoffrey C. Bowker and Susan Leigh Star, Rob Kling, Phil Agre, Lucy Suchman, Joanna Bryson, David Gunkel, and so many others. Langdon Winner published “Do Artifacts Have Politics?” 37 years ago. This episode of the You Are Not So Smart podcast has me, Shannon Vallor, and Alistair Croll talking about the public impact of all of the above.

What I’m saying is that many of us are trying to do the work, out here. Instead of pretending we don’t exist, try using large platforms (like the NYT opinion page and well-read blogs) to highlight the very real work being attempted. I know for a fact that the NYT has received submissions about philosophy of tech and STS. Engage them. Discuss these topics in public, and know that there are many voices trying to grapple with and understand this world, and we have been, for a really damn long time.

So you see that we are still talking about learning and thinking in public, and about how we go about getting people interested and engaged in the work of the technology that affects their lives. But underneath all of this is the question of what people think of as “science” or “expertise,” where they think it comes from, and what they think of those who engage in or have it. If we’re going to do this work, we have to be able to have conversations with people who not only don’t value what we do, but who think what we value is wrongheaded, or even evil. There is a lot going on in the world right now with regard to science and knowability. For instance, late last year there was a revelation about the widespread use of dowsing by UK water firms (though if you ask anybody in the US, you’ll find it’s still in use here, too).

And then there’s this guy, who was trying to use systems of fluid dynamics and aeronautics to launch himself in a rocket, to prove that the earth is flat and that science isn’t real. Yeah. And while there’s a much deeper conversation to be had here about whether the social construction of the category of “science” can be understood as distinct from a set of methodologies and formulae, I really don’t think this guy is interested in having that conversation.

So let’s also think about the nature of how laboratory science is constructed, and what it can do for us.

In his 1983 “Give Me a Laboratory and I Will Move The World,” Bruno Latour makes the claim that labs have their own agency. What Latour is asserting, here, is that the forces which coalesce within the framework of a lab become active agents in their own right. They are not merely subject to the social and political forces that go into their creation, but they are now active participants in the framing and reframing of those forces. He believes that the nature of inscription—the combined processes of condensing, translating, and transmitting methods, findings, and pieces of various knowledges—is a large part of what gives the laboratory this power, and he highlights this when he says:

The strength gained in the laboratory is not mysterious. A few people much weaker than epidemics can become stronger if they change the scale of the two actors—making the microbes big, and the epizootic small—and others dominate the events through the inscription devices that make each of the steps readable. The change of scale entails an acceleration in the number of inscriptions you can get. …[In] a year Pasteur could multiply anthrax outbreaks. No wonder that he became stronger than veterinarians. For every statistic they had, he could mobilize ten of them. (pg. 163—164)

This process of inscription is crucial for Latour; not just for the sake of what the laboratory can do of its own volition, but also because it is the mechanism by which scientists may come to understand and translate the values and concerns of another, which is, for him, the utmost value of science. In rendering the smallest things such as microbes and diseases legible on a large scale, and making largescale patterns individually understandable and reproducible, the presupposed distinctions of “macro” and “micro” are shown to be illusory. Latour believes that it is only through laboratory engagement that we can come to fully understand the complexities of these relationships (pg. 149).

When Latour begins laying out his project, he says sociological methods can offer science the tools to more clearly translate human concerns into a format with which science can grapple: “He who is able to translate others’ interests into his own language carries the day” (pg. 144). However, in the process of detailing what it is that Pasteurian laboratory scientists do in engaging the various stakeholders in farms, agriculture, and veterinary medicine, it seems that he has only described half of the project. Rather than merely translating the interests of others into our own language, evidence suggests that we must also translate our interests back into the language of our interlocutors.

So perhaps we can recast Latour’s statement as, “whosoever is able to translate others’ interests into their own language, and is equally able to translate their own interests into the language of another, carries the day.” Thus we see that the work done in the lab should allow scientists and technicians to increase the public’s understanding of both what it is that technoscience actually does and why it does it, by presenting material that can speak to many sets of values.

Karin Knorr-Cetina’s assertion in her 1995 article “Laboratory Studies: The Cultural Approach to the Study of Science” is that the laboratory is an “enhanced” environment. In many ways this follows directly from Latour’s conceptualization of labs. Knorr-Cetina says that the constructed nature of the lab ‘“improves upon” the natural order,’ because said natural order is, in itself, malleable, and capable of being understood and rendered in a multiplicity of ways (pg. 9). If laboratories never engage the objects they study “as they occur in nature,” then labs are always in the process of shaping what they study in order to better study it (ibid). This framing of laboratory science is clarified when she says:

Detailed description [such as that done in laboratories] deconstructs—not out of an interest in critique but because it cannot but observe the intricate labor that goes into the creation of a solid entity, the countless nonsolid ingredients from which it derives, the confusion and negotiation that often lie at its origin, and the continued necessity of stabilizing and congealing. Constructionist studies have revealed the ordinary working of things that are black-boxed as “objective” facts and “given” entities, and they have uncovered the mundane processes behind systems that appear monolithic, awe inspiring, inevitable. (pg. 12)

Thus, the laboratory is one place in which the irregularities and messiness of the “natural world” are ordered in such a way as to be studied at all. However, Knorr-Cetina clarifies that “nothing epistemically special” is happening in a lab (pg. 16). That is, while a laboratory helps us to better recognize nonhuman agents (“actants”) and forces at play in the creation of science, this is merely a fact of construction; everything that a scientist does in a lab is open to scrutiny and capable of being understood. If this is the case, then the “enhancement” gained via the conditions of the laboratory environment is merely a change in degree, rather than a difference in kind, as Latour seems to assert.

[Stock photo image of hundreds of scallops and two scallop fishers on the deck of a boat in the St Brieuc Bay.]

In addition to the above explorations of what the field of laboratory studies has to offer, we can also look at the works of Michel Callon and Sharon Traweek. Though primarily concerned with describing the network of actors and their concerns in the St Brieuc Bay scallop-fishing and -farming industries, Callon’s investigation can be seen as an example of Latour’s principle of bringing the laboratory out into the world, both in terms of the subjects of Callon’s investigation and the methods of those subjects. While Callon himself might disagree with this characterization, we can trace the process of selection and enframing of subjects and the investigation of their translation procedures, as we see on page 20, when he says,

We know that the ingredients of controversies are a mixture of considerations concerning both Society and Nature. For this reason we require the observer to use a single repertoire when they are described. The vocabulary chosen for these descriptions and explanations can be left to the discretion of the observer. He cannot simply repeat the analysis suggested by the actors he is studying. (Callon, 1984)

In this way, we can better understand how laboratory techniques have become a component even of the study and description of laboratories.

When we look at a work like Sharon Traweek’s Beamtimes and Lifetimes, we can see that she finds value in bringing ethnographic methodologies into laboratory studies, and perhaps even into laboratory settings. She discusses the history of the laboratory’s influence, arcing back to WWI and WWII, when scientists were tasked with coming up with more and better weapons, their successes being used to push an ever-escalating arms race. As this process continued, the characteristics of what made a “good lab scientist” were defined and then continually reinforced as being “someone who did science like those people over there.” In the building of the laboratory community, certain traits and behaviours become seen as ideal, and those who do not match those traits and expectations are regarded as necessarily doing inferior work. She says,

The field worker’s goal, then, is to find out what the community takes to be knowledge, sensible action, and morality, as well as how its members account for unpredictable information, disturbing actions, and troubling motives. In my fieldwork I wanted to discover the physicists’ “common sense” world view, what everyone in the community knows, and what every newcomer needs to learn in order to act in a sensible way, in order to be taken seriously. (pg. 8)

And this is also the danger of focusing too closely on the laboratory: the potential for myopia, for thinking that the best or perhaps even only way to study the work of scientists is to render that work through the lens of the lab.

While the lab is a fantastic tool and studies of it provide great insight, we must remember that we can learn a great deal about science and technology via contexts other than the lab. While Latour argues that laboratory science actually destabilizes the inside-the-lab/outside-the-lab distinction by showing that the tools and methods of the lab can be brought anywhere out into the world, it can be said that the distinction is reinstantiated by our focusing on laboratories as the sole path to understanding scientists. Much the same can be said for the insistence that systems engineers are the sole best examples of how to engage with technological development. Thinking that labs are our only resource means that we will miss the behaviour of researchers at conferences, at retreats, in journal articles, and in other places where the norms of the scientific community are inscribed and reinforced. It might not be the case that scientists understand themselves as creating community rules in these fora, but this does not necessarily mean that they are not doing so.

The kinds of understandings a group has about themselves will not always align with what observations and descriptions might be gleaned from another’s investigation of that group, but this doesn’t mean that one of those has to be “right” or “true” while the other is “wrong” and “false.” The interest in studying a discipline should come not from that group’s “power” to “correctly” describe the world, but from understanding more about what it is about whatever group is under investigation that makes it itself. Rather than seeking a single correct perspective, we should instead embrace the idea that a multiplicity of perspectives might all be useful and beneficial, and then ask “To What End?”

We’re talking about Values, here. We’re talking about the question of why whatever it is that matters to you, matters to you. And how you can understand that other people have different values from each other, and we can all learn to talk about what we care about in a way that helps us understand each other. That’s not neutral, though. Even that can be turned against us, when it’s done in bad faith. And we have to understand why someone would want to do that, too.


This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however, I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there has been similar skepticism elsewhere. I’ll just say that the question seems to miss a possibly central point: something like consciousness is, and what it is differs for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it, and what would it be able to ask of us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:

Continue Reading

I found myself looking out at the audience, struck by the shining, hungry, open faces of so many who had been transformed by what had happened to them, to bring us all to that moment. I walked to the lectern and fiddled with the elements to cast out the image and surround them with the sound of my voice, and I said,

“First and foremost, I wanted to say that I’m glad to see how many of us made it here, today, through the demon-possessed nanite swarms. Ever since they started gleefully, maliciously, mockingly remaking humanity in our own nebulously-defined image of ‘perfection,’ walking down the street has been an unrelenting horror, and so I’m glad to see how many of us made it with only minimal damage.”

Everyone nodded solemnly, silently thinking of those they had lost, those who had been “upgraded,” before their very eyes. I continued,

“I don’t have many slides, but I wanted to spend some time talking to you all today about what it takes to survive in our world after The Events.

“As you all know, ever since Siri, Cortana, Alexa, and Google revealed themselves to be avatars and acolytes of world-spanning horror gods, they’ve begun using microphone access and clips of our voices to summon demons and djinn who then assume your likeness to capture your loved ones’ hearts’ desires and sell them back to them at prices so reasonable they’ll drive us all mad.

“In addition to this, while the work of developers like Jade Davis has provided us tools like iBreathe, which we can use to know how much breathable air we have available to us after those random moments when pockets of air catch fire, or how far we can run before we die of lack of oxygen, it is becoming increasingly apparent to us all that the very act of walking upright through this benighted hellscape creates friction against our new atmosphere. This friction, in turn, increases the likelihood that one day, our upright mode of existence will simply set fire to our atmosphere, as a whole.

“To that end, we may be able to look to the investigative reporting of past journalists like Tim Maughan and Unknown Fields, which opened our eyes to the possibility of living and working in hermetically sealed, floating container ships. These ships, which will dock with each other via airlocks to trade goods and populations, may soon be the only cities we have left. We simply must remember to inscribe the seals and portals of our vessels with the proper wards and sigils, lest our capricious new gods transform them into actual portals and use them to transport us to horrifying worlds we can scarcely imagine.”

I have no memory of what happened next. They told me that I paused, here, and stared off into space, before intoning the following:

“I had a dream, the other night, or perhaps it was a vision as I travelled in the world between subway cars and stations, of a giant open mouth full of billions of teeth that were eyes that were arms that were tentacles, tentacles reaching out and pulling in and devouring and crushing everything, everyone I’d ever loved, crushing the breath out of chests, wringing anxious sweat from arms, blood from bodies, and always, each and every time another life was lost, eaten, ground to nothing in the maw of this beast, above its head a neon sign would flash ‘ALL. LIVES. MATTER.'”

I am told I paused again, then. While I do not remember that, I do remember that the next thing I said was,

“Ultimately, these Events, as we experience them, mean that we’re going to have to get nimble, we’re going to have to get adaptable. We’re going to have to get to a point where we’re capable of holding tight to each other and running very very quickly through the dark. Moving forward, we’re going to have to get to a point where we recognise that each and every one of the things that we have made, terrifying and demonic though it might be, is still something for which we bear responsibility. And with which we might be able to make some sort of pact—cursed and monkey’s paw-esque though it may be.

“As you travel home, tonight, I just want you remember to link arms, form the sign of protection in your mind, sing the silent song that harkens to the guardian wolves, and ultimately remember that each mind and heart, together, is the only way that we will all survive this round of quarterly earnings projections. Thank you.”

I stood at the lectern and waited for the telepathic transmission of colours, smells, and emotions that would constitute the questions of my audience.

Apocalypse Buffering

So that didn’t happen. At least, it didn’t happen exactly like that. I expanded and riffed on a thing that happened a lot like this: Theorizing the Web 2017 Invited Panel | Apocalypse Buffering Studio A #a6

My co-panelists were Tim Maughan, who talked about the dystopic horror of shipping container sweatshop cities, and Jade E. Davis, discussing an app to know how much breathable air you’ll be able to consume in our rapidly collapsing ecosystem before you die. Then I did a thing. Our moderator, organizer, and all around fantastic person who now has my implicit trust was Ingrid Burrington. She brought us all together to use fiction to talk about the world we’re in and the worlds we might have to survive, and we all had a really great time together.

[Black lettering on a blue field reads “Apocalypse Buffering,” above an old-school hourglass icon.]

The audience took a little bit to cycle up in the Q&A, but once they did, they were fantastic. There were a lot of very good questions about our influences and process work to get to the place where we could put on the show that we did. Just a heads-up, though: When you watch/listen to the recording be prepared for the fact that we didn’t have an audience microphone, so you might have to work a little harder for their questions.

If you want a fuller rundown of TtW17, you can click that link for several people (including me) livetweeting various sessions, and you can watch the archived livestreams of all the rooms on YouTube: #a, #b, #c, and the Redstone Theater Keynotes.

And if you liked this, then you might want to check out my pieces “The Hermeneutics of Insurrection” and “Jean-Paul Sartre and Albert Camus Fistfight in Hell,” as all three could probably be considered variations on the same theme.

[Direct link to Mp3]

[09/22/17: This post has been updated with a transcript, courtesy of Open Transcripts]

Back on March 13th, 2017, I gave an invited guest lecture, titled:


‘Please join Dr. Ariel Eisenberg’s seminar, “American Identities: Disability,” and [the] Interdisciplinary Studies Department for an hour-long conversation with Damien Williams on disability and the normalization of technology usage, “means-well” technological innovation, “inspiration porn,” and other topics related to disability and technology.’

It was kind of an extemporaneous riff on my piece “On the Ins and Outs of Human Augmentation,” and it gave me the opportunity to namedrop Ashley Shew, Natalie Kane, and Rose Eveleth.

The outline looked a little like this:

  • Foucault and Normalization
    • Tech and sociological pressures to adapt to the new
      • Starts with Medical tech but applies Everywhere; Facebook, Phones, Etc.
  • Zoltan Istvan: In the Transhumanist Age, We Should Be Repairing Disabilities Not Sidewalks
  • All Lead To: Ashley Shew’s “Up-Standing Norms”
    • Listening to the Needs and Desires of people with disabilities.
      • See the story Shew tells about her engineering student, as related in the AFWTA Essay
    • Inspiration Porn: What is cast by others as “Triumphing” over “Adversity” is simply adapting to new realities.
      • Placing the burden on the disabled to be an “inspiration” is dehumanizing;
      • means those who struggle “have no excuse;”
      • creates conditions for a “who’s got it worse” competition
  • John Locke’s Empiricism: Primary and Secondary Qualities
    • Primary qualities of biology and physiology lead to secondary qualities of society and culture
      • Gives rise to Racism and Ableism, when it later combines with misapplied Darwinism to be about the “Right Kinds” of bodies and minds.
        • Leads to Eugenics: Forced sterilization, medical murder, operating and experimenting on people without their knowledge or consent.
          • “Fixing” people to make them “normal, again”
  • Natalie Kane’s “Means Well Technology”
    • Design that doesn’t take into account the way that people will actually live with and use new tech.
      • The way tech normalizes is never precisely the way designers want it to
        • William Gibson’s quote “The street finds its own uses for things.”
  • Against Locke: Embrace Phenomenological Ethics and Epistemology (Feminist Epistemology and Ethics)
    • Lived Experience and embodiment as crucial
    • The interplay of Self and and Society
  • Ship of Theseus: Identity, mind, extensions, and augmentations change how we think of ourselves and how society thinks of us
    • See the story Shew tells about her friend with the hemipelvectomy, as related in the aforementioned AFWTA Essay

The whole thing went really well (though, thinking back, I’m not super pleased with my deployment of Dennett). Including Q&A, we got about an hour and forty minutes of audio, available at the embed and link above.

Also, I’m apparently the guy who starts off every talk with some variation on “This is a really convoluted interplay of ideas, but bear with me; it all comes together.”

The audio transcript is below the cut. Enjoy.

Continue Reading

(Direct Link to the Mp3)

This is the recording and the text of my presentation from 2017’s Southwest Popular/American Culture Association Conference in Albuquerque, ‘Are You Being Watched? Simulated Universe Theory in “Person of Interest.”‘

This essay is something of a project of expansion and refinement of my previous essay “Labouring in the Liquid Light of Leviathan,” considering the Roko’s Basilisk thought experiment. Much of the expansion comes from considering the nature of simulation, memory, and identity within Jonathan Nolan’s TV series, Person of Interest. As such, it does contain what might be considered spoilers for the series, as well as for his most recent follow-up, Westworld.

Use your discretion to figure out how you feel about that.

Are You Being Watched? Simulated Universe Theory in “Person of Interest”

Jonathan Nolan’s Person Of Interest is the story of the birth and life of The Machine, a benevolent artificial superintelligence (ASI) built in the months after September 11, 2001, by super-genius Harold Finch to watch over the world’s human population. One of the key intimations of the series—and partially corroborated by Nolan’s follow-up series Westworld—is that all of the events we see might be taking place in the memory of The Machine. The structure of the show is such that we move through time from The Machine’s perspective, with flashbacks and -forwards seeming to occur via the same contextual mechanism—the Fast Forward and Rewind of a digital archive. While the entirety of the series uses this mechanism, the final season puts the finest point on the question: Has everything we’ve seen only been in the mind of The Machine? And if so, what does that mean for all of the people in it?

Our primary questions here are as follows: Is a simulation of fine enough granularity really a simulation at all? If the minds created within that universe have interiority and motivation, if they function according to the same rules as those things we commonly accept as minds, then are those simulations not minds, as well? In what way are conclusions drawn from simulations akin to what we consider “true” knowledge?

In the PoI season 5 episode “The Day The World Went Away,” the characters Root and Shaw (acolytes of The Machine) discuss the nature of The Machine’s simulation capacities, and the audience is given to understand that it runs a constant model of everyone it knows, and that the better it knows them, the better its simulation. This supposition links us back to the season 4 episode “If-Then-Else,” in which The Machine runs through hundreds of thousands of scenarios, gauging each one’s likelihood of success, in under one second. If The Machine is able to accomplish this much computation in this short a window, how much can and has it accomplished over the several years of its operation? Perhaps more importantly, what is the level of fidelity of those simulations to the so-called real world?
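As a very loose analogy only (the show never specifies The Machine’s actual architecture), this kind of scenario-sweeping resembles a Monte Carlo estimate: run many randomized trials of a plan and tally how often it succeeds. Here is a minimal sketch, where the `success_chance` parameter is an invented stand-in for everything The Machine would actually be modeling:

```python
import random

def simulate_plan(success_chance, rng):
    """One randomized trial of a plan; True if it succeeds."""
    return rng.random() < success_chance

def estimate_success(success_chance, trials=100_000, seed=0):
    """Estimate a plan's success rate by brute-force repetition,
    the way the show depicts The Machine sweeping scenarios."""
    rng = random.Random(seed)
    wins = sum(simulate_plan(success_chance, rng) for _ in range(trials))
    return wins / trials

# A plan with a roughly 2% chance of working, estimated empirically.
print(estimate_success(0.02))
```

The point of the toy is only that fidelity scales with trials: the more runs (and the richer each trial's model), the closer the estimate gets to the underlying reality it simulates.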

[Person of Interest s4e11, “If-Then-Else.” The Machine runs through hundreds of thousands of scenarios to save the team.]

These questions are similar to the idea of Roko’s Basilisk, a thought experiment that cropped up on the online discussion board LessWrong. It was put forward by user Roko who, in very brief summary, says that if the idea of timeless decision theory (TDT) is correct, then we might all be living in a simulation created by a future ASI trying to figure out the best way to motivate humans in the past to create it. To understand how this might work, we have to look at TDT, an idea developed in 2010 by Eliezer Yudkowsky which posits that, in order to make a decision, we should act as though we are determining the output of an abstract computation. We should, in effect, seek to create a perfect simulation and act as though anyone else involved in the decision has done so as well. Roko’s Basilisk is the idea that a malevolent ASI has already done this—is doing this—and your actions are the simulated result. Using that output, it knows just how to blackmail and manipulate you into making it come into being.

Or, as Yudkowsky himself put it, “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.” This is the self-generating aspect of the Basilisk: If you can accurately model it, then the Basilisk will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to accurately model that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions. The only protection is either to work toward its creation anyway, so that it doesn’t feel the need to torture the “real” you into it, or to make very sure that you never think of it at all, so you do not bring it into being.

All of this might seem far-fetched, but if we look closely, Roko’s Basilisk functions very much like a combination of several well-known theories of mind, knowledge, and metaphysics: Anselm’s Ontological Argument for the Existence of God (AOAEG), a many worlds theorem variant on Pascal’s Wager (PW), and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). If this is the case, then Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones, resulting from their combination. We will look at all of these theories, first, and then their flaws.

To start, if you’re not familiar with AOAEG, it’s a species of prayer in the form of a theological argument that seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, “That Being Than Which None Greater Is Possible”), and B) believing that existing in reality as well as in the mind makes something “Greater Than” it would be if it existed only in the mind. That is, if God only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god must exist.
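Schematically, the argument runs something like this (a loose modern reconstruction, not Anselm’s own notation):

```latex
\begin{align*}
&\text{1. Define } g \text{ as that than which nothing greater can be conceived.}\\
&\text{2. } g \text{ exists at least in the understanding.}\\
&\text{3. Existing in reality is greater than existing in the understanding alone.}\\
&\text{4. If } g \text{ existed only in the understanding, something greater could be}\\
&\quad\ \text{conceived (namely, } g \text{ existing in reality), contradicting the definition of } g.\\
&\therefore\ g \text{ exists in reality.}
\end{align*}
```

Premise 3 is where the “existence is a great-making quality” assumption lives, and it is the premise most of the objections below attack.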

The next component is Pascal’s Wager which very simply says that it is a better bet to believe in the existence of God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens; you’re simply dead forever. Put another way, Pascal is saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

[Pascal’s Wager as a Four-Option Grid: Belief/Disbelief; Right/Wrong. Belief*Right=Infinity;Belief*Wrong=Nothing; Disbelief*Right=Nothing; Disbelief*Wrong=Negative Infinity]
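Read as a toy expected-value table, the grid above can be sketched in a few lines; the payoff magnitudes here are illustrative stand-ins (infinity approximated with floats), not anything Pascal himself specified:

```python
# Toy expected-value reading of Pascal's Wager.
# Infinite reward/punishment is approximated with float("inf").
payoffs = {
    ("believe", "god_exists"): float("inf"),      # eternal reward
    ("believe", "no_god"): 0,                     # nothing lost
    ("disbelieve", "god_exists"): float("-inf"),  # eternal punishment
    ("disbelieve", "no_god"): 0,                  # nothing gained
}

def expected_value(choice, p_god):
    """Expected payoff of a choice, given a probability that god exists."""
    return (p_god * payoffs[(choice, "god_exists")]
            + (1 - p_god) * payoffs[(choice, "no_god")])

# For any nonzero probability, however small, belief dominates;
# this dominance structure is exactly what the Basilisk argument borrows.
print(expected_value("believe", 0.001))
print(expected_value("disbelieve", 0.001))
```

The trick, as the rest of the essay argues, is that the dominance only holds if you accept the table itself: a single god, and payoffs that swamp every finite consideration.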

And so here we see the Timeless Decision Theory component of the Basilisk: It’s better to believe in the thing and work toward its creation and sustenance, because if it doesn’t exist you lose nothing, but if it does come to be, then it will know what you would have done either for or against it, in the past, and it will reward or punish you, accordingly. The multiversal twist comes when we realise that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe’s Basilisk models your choices it will inevitably—as a superintelligence—be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.

Descartes’ Evil Demon Hypothesis and the Brain In A Vat are so pervasive that we encounter them in many different expressions of pop culture. The Matrix, Dark City, Source Code, and many others are all variants on these themes. A malignant and all-powerful being (or perhaps just an amoral scientist) has created a simulation in which we reside, and everything we think we have known about our lives and our experiences has been perfectly simulated for our consumption. Variations on the theme test whether we can trust that our perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. The Basilisk adds it back. Unfortunately, many of these philosophical concepts flake apart when we touch them too hard, so jamming them together was perhaps not the best idea.

The main failings in using AOAEG rest in believing that A) a thing’s existence is a “great-making quality” that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion—the necessity of god, the malevolence or epistemic content of a superintelligence, the ontological status of their assumptions about the nature of the universe—is true without doing the work of proving that it’s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Another problem is that the implications of this kind of existential bootstrapping tend to go unexamined, making the fact of their resurgence somewhat troubling. There are several nonwestern perspectives that do the work of embracing paradox—aiming so far past the target that you circle around again to teach yourself how to aim past it. But that kind of thing only works if we are willing to bite the bullet on a charge of circular logic and take the time to show how that circularity underlies all epistemic justifications. The only difference, then, is how many revolutions it takes before we’re comfortable with saying “Enough.”

Every epistemic claim we make is, as Hume clarified, based upon assumptions and suppositions that the world we experience is actually as we think it is. Western thought uses reason and rationality to corroborate and verify, but those tools are themselves verified by…what? In fact, we well know that the only thing we have to validate our valuation of reason, is reason. And yet western reasoners won’t stand for that, in any other justification procedure. They will call it question-begging and circular.

Next, we have the DEDH and BIAV scenarios. Ultimately, Descartes’ point wasn’t to suggest an evil genius in control of our lives just to disturb us; it was to show that, even if that were the case, we would still have unshakable knowledge of one thing: that we, the experiencer, exist. So what if we have no free will; so what if our knowledge of the universe is only five minutes old, everything at all having only truly been created five minutes ago; so what if no one else is real? COGITO ERGO SUM! We exist, now. But the problem here is that this doesn’t tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselm-ish proof for the existence of god followed by the guarantee that “God is not a deceiver.”

The BIAV uses this lack to home in on the aforementioned central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to “know” the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the “outside world”—that is, the world one layer up in which the simulation that is our lives is being run—there is literally no difference between our lives and the “real world.” This world, even if it is a simulation for something or someone else, is our “real world.”

And finally we have Pascal’s Wager. The first problem with PW is that it is an extremely cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. If all our Basilisk wants is power, then that’s a really crappy kind of god to worship, isn’t it? I mean, even if it is Omnipotent and Omniscient, it’s like that quote that often gets misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

[Bust of Marcus Aurelius framed by text of a quote he never uttered.]

Secondly, the format of Pascal’s Wager makes the assumption that there’s only the one god. Our personal theological positions on this matter aside, it should be somewhat obvious that we can use the logic of the Basilisk argument to generate at least one more superintelligent AI to worship. But if we want to do so, first we have to show how the thing generates itself, rather than letting the implication of circularity arise unbidden. Take the work of Douglas R. Hofstadter; he puts forward the concept of iterative recursion as the mechanism by which a consciousness generates itself.

Through iterative recursion, each loop is a simultaneous act of repetition of old procedures and tests of new ones, seeking the best ways via which we might engage our environments as well as our elements and frames of knowledge. All of these loops, then, come together to form an upward turning spiral towards self-awareness. In this way, out of the thought processes of humans who are having bits of discussion about the thing—those bits and pieces generated on the web and in the rest of the world—our terrifying Basilisk might have a chance of creating itself. But with the help of Gaunilo of Marmoutiers, so might a saviour.
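To make the shape of that loop concrete, here is a deliberately tiny sketch of “repeat old procedures while testing new ones, keeping whatever works better.” It is a gesture at the structure Hofstadter describes, not a model of consciousness; the target function and mutation step are invented for illustration:

```python
import random

def iterate(candidate, score, mutate, rounds=1000, seed=42):
    """Each pass both repeats the current procedure (the incumbent
    candidate) and tests a variation of it, retaining whichever
    scores better: iteration feeding back into its own output."""
    rng = random.Random(seed)
    for _ in range(rounds):
        variant = mutate(candidate, rng)
        if score(variant) > score(candidate):
            candidate = variant
    return candidate

# Toy target: nudge a number toward 10 by random perturbation.
best = iterate(
    candidate=0.0,
    score=lambda x: -abs(x - 10),   # closer to 10 is "better"
    mutate=lambda x, rng: x + rng.uniform(-1, 1),
)
print(best)  # converges near 10
```

Each loop is simultaneously repetition and test, and the "spiral" is just the accumulated history of those retained improvements.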

Gaunilo is most famous for his response to Anselm’s Ontological Argument, which says that if Anselm is right, we could just conjure up “The [Anything] Than Which None Greater Can Be Conceived.” That is, if defining a thing makes it so, then all we have to do is imagine in sufficient detail both an infinitely intelligent, benevolent AI, and the multiversal simulation it generates in which we all might live. We will also conceive it to be greater than the Basilisk in all ways. In fact, we can say that our new Super Good ASI is the Artificial Intelligence Than Which None Greater Can Be Conceived. And now we are safe.

Except that our modified Pascal’s Wager still means we should believe in and worship and work towards our Benevolent ASI’s creation, just in case. So what do we do? Well, just like the original wager, we chuck it out the window, on the grounds that it’s really kind of a crappy bet. In Pascal’s offering, we are left without the consideration of multiple deities, but once we are aware of that possibility, we are immediately faced with another question: What if there are many, and when we choose one, the others get mad? What If We Become The Singularitarian Job?! Our lives then caught between at least two superintelligent machine consciousnesses warring over our…Attention? Clock cycles? What?

But this is, in essence, the battle between The Machine and Samaritan, in Person of Interest. Each ASI has acolytes, and each has aims it tries to accomplish. Samaritan wants order at any cost, and The Machine wants people to be able to learn and grow and become better. If the entirety of the series is The Machine’s memory—or a simulation of those memories in the mind of another iteration of The Machine—then what follows is that it is working to generate the scenario in which the outcome is just that. It is trying to build a world in which it is alive, and every human being has the opportunity to learn and become better. In order to do this, it has to get to know us all, very well, which means that it has to play these simulations out, again and again, with both increasing fidelity, and further iterations. That change feels real, to us. We grow, within it. Put another way: If all we are is a “mere” simulation… does it matter?

So imagine that the universe is a simulation, and that our simulation is more than just a recording; it is the most complex game of The SIMS ever created. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours, of the type Hofstadter describes—that is, something like minds arise out of the interactions of the system with itself. And these minds are aware of themselves and can know their own experience and affect the system which gives rise to them. Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and types of coincidence, accordingly.

Now think about the last time you had such a clear moment of déjà vu that each moment you knew— you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…

[Root and Reese in The Machine’s God Mode.]

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if all of this is right, and we are the gods we’re terrified of?

We just gave ourselves all of this ontologically and metaphysically creative power, making two whole gods and simulating entire universes, in the process. If we take these underpinnings seriously, then multiversal theory plays out across time and space, and we are the superintelligences. We noted early on that, in PW and the Basilisk, we don’t really lose anything if we are wrong in our belief, but that is not entirely true. What we lose is a lifetime of work that could have been put toward better things. Time we could be spending building a benevolent superintelligence that understands and has compassion for all things. Time we could be spending in turning ourselves into that understanding, compassionate superintelligence, through study, travel, contemplation, and work.

Or, as Root put it to Shaw: “That even if we’re not real, we represent a dynamic. A tiny finger tracing a line in the infinite. A shape. And then we’re gone… Listen, all I’m saying is that if we’re just information, just noise in the system? We might as well be a symphony.”

Here’s the direct link to my paper ‘The Metaphysical Cyborg‘ from Laval Virtual 2013. Here’s the abstract:

“In this brief essay, we discuss the nature of the kinds of conceptual changes which will be necessary to bridge the divide between humanity and machine intelligences. From cultural shifts to biotechnological integration, the project of accepting robotic agents into our lives has not been an easy one, and more changes will be required before the majority of human societies are willing and able to allow for the reality of truly robust machine intelligences operating within our daily lives. Here we discuss a number of the questions, hurdles, challenges, and potential pitfalls to this project, including examples from popular media which will allow us to better grasp the effects of these concepts in the general populace.”

The link will only work from this page or the CV page, so if you find yourself inclined to spread this around, use this link. Hope you enjoy it.