I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, today, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what it was that this new group would be doing out in the world. From the article:

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We’ve already seen a chat bot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people, and a system that rates the risk of someone committing a crime that appears to be biased against black people. If a more diverse set of eyes are looking at AI before it reaches the public, the thinking goes, these kinds of things can be avoided.

The rub is that, even if this group can agree on a set of ethical principles (something that will be hard to do in a large group with many stakeholders), it won’t really have a way to ensure those ideals are put into practice. Although one of the organization’s tenets is “Opposing development and use of AI technologies that would violate international conventions or human rights,” Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier this year, for WIRED’s AI Issue. It’s interesting to see how much attention this topic has drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m glad that more and more people are willing to have this discussion at all.

To see my comments and read the rest of the article, click through, here: “Tech Giants Team Up to Keep AI From Getting Out of Hand”

Last week, Artsy.net’s Izabella Scott wrote this piece about how and why the aesthetic of witchcraft is making a comeback in the art world, which is pretty pleasantly timed as not only are we all eagerly awaiting Kim Boekbinder’s NOISEWITCH, but I also just sat down with Rose Eveleth for the Flash Forward Podcast to talk for her season 2 finale.

You see, Rose did something a little different this time. Instead of writing up a potential future and then talking to a bunch of amazing people about it, like she usually does, this episode’s future was written by an algorithm. Rose trained a text-generation model (built with the Torch machine learning framework) not only on the text of all of the futures from both Flash Forward seasons, but also on the full scripts of both the War of the Worlds and the 1979 Hitchhiker’s Guide to the Galaxy radio plays. What’s unsurprising, then, is that part of what the algorithm wanted to talk about was space travel and Mars. What is genuinely surprising, however, is that it also wanted to talk about Witches.

Because so far as either Rose or I could remember, witches aren’t mentioned anywhere in any of those texts.
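For the curious, the flavor of what Rose did can be sketched at toy scale. To be clear, this is not her actual setup (she used a Torch-based neural network; the version below is a far simpler character-level Markov chain, trained on a made-up corpus), but it shows the core mechanic: when one model is trained on a blend of corpora, a context from one text can flow into a continuation from another, which is exactly where the uncanny juxtapositions come from.

```python
import random
from collections import defaultdict

def train(text, order=4):
    """Map each length-`order` context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, length=200, seed=None):
    """Walk the model, sampling a next character at each step."""
    context = seed or random.choice(list(model))
    out = context
    for _ in range(length):
        choices = model.get(context)
        if not choices:  # dead end: restart from a random context
            context = random.choice(list(model))
            continue
        out += random.choice(choices)
        context = out[-len(context):]
    return out

# A trivial stand-in corpus; imagine concatenating two very different texts.
corpus = "the witch went to mars. the rocket of mars went home to the witch. "
model = train(corpus, order=4)
print(generate(model, length=60))
```

Train the same thing on one sci-fi corpus plus one radio-play corpus and the shared contexts become hinges: the model slides from one register into the other without ever “knowing” it changed texts.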

ANYWAY, the finale episode is called “The Witch Who Came From Mars,” and the ensuing exegeses of the Bradbury-esque results of this experiment, by several very interesting people and me, are kind of amazing. No one took exactly the same thing from the text, and the more we heard of each other, the more we started to weave threads together into a meta-narrative.

Episode 20: The Witch Who Came From Mars

It’s really worth your time, and if you subscribe to Rose’s Patreon, then not only will you get immediate access to the full transcript of that show, but also to the full interview she did with PBS Idea Channel’s Mike Rugnetta. They talk a great deal about whether we will ever deign to refer to the aesthetic creations of artificial intelligences as “Art.”

And if you subscribe to my Patreon, then you’ll get access to the full conversation between Rose and me, appended to this week’s newsletter, “Bad Month for Hiveminds.” Rose and I talk about the nature of magick and technology, the overlaps and intersections of intention and control, and what exactly it is we might mean by “behanding,” the term that shows up throughout the AI’s piece.

And just because I don’t give a specific shoutout to Thoth and Raven doesn’t mean I forgot them. Very much didn’t forget about Raven.

Also speaking of Patreon and witches and whatnot, current $1+ patrons have access to the full first round of interview questions I did with Eliza Gauger about Problem Glyphs. So you can get in on that, there, if you so desire. Eliza is getting back to me with their answers to the follow-up questions, and then I’ll go about finishing up the formatting and publishing the full article. But if you subscribe now, you’ll know what all the fuss is about well before anybody else.

And, as always, there are other ways to provide material support, if a long-term subscription isn’t your thing.

Until Next Time.

If you liked this piece, consider dropping something in the A Future Worth Thinking About Tip Jar

-Human Dignity-

The other day I got a CFP for “the future of human dignity,” and it set me down a path thinking.

We’re worried about shit like mythical robots that can somehow simultaneously enslave us and steal the shitty low paying jobs we none of us want to but all of us have to have so we can pay off the debt we accrued to get the education we were told would be necessary to get those jobs, while other folks starve and die of exposure in a world that is just chock full of food and houses…

About shit like how we can better regulate the conflated monster of human trafficking and every kind of sex work, when human beings are doing the best they can to direct their own lives—to live and feed themselves and their kids on their own terms—without being enslaved and exploited…

About, fundamentally, how to make reactionary laws to “protect” the dignity of those of us whose situations the vast majority of us have not worked to fully appreciate or understand, while we all just struggle to not get: shot by those who claim to protect us, willfully misdiagnosed by those who claim to heal us, or generally oppressed by the system that’s supposed to enrich and uplift us…

…but no, we want to talk about the future of human dignity?

Louisiana’s drowning, Missouri’s on literal fire, Baltimore is almost certainly under some ancient mummy-based curse placed upon it by the angry ghost of Edgar Allan Poe, and that’s just in the One Country.

Motherfucker, human dignity ain’t got a Past or a Present, so how about let’s reckon with that before we wax poetically philosophical about its Future.

I mean, it’s great that folks at Google are finally starting to realise that making sure the composition of their teams represents a variety of lived experiences is a good thing. But now the questions are, 1) do they understand that it’s not about tokenism, but about being sure that we are truly incorporating those who were previously least likely to be incorporated, and 2) what are we going to do to not only specifically and actively work to change that, but also PUBLICIZE THAT WE NEED TO?

These are the kinds of things I mean when I say, “I’m not so much scared of/worried about AI as I am about the humans who create and teach them.”

There’s a recent opinion piece at the Washington Post, titled “Why perceived inequality leads people to resist innovation.” I read something like that and I think… Right, but… that perception is a shared one based on real impacts of tech in the lives of many people; impacts which are (get this) drastically unequal. We’re talking about implications across communities, nations, and the world, at an intersection with a tech industry that has a really quite disgusting history of “disruptively innovating” people right out of their homes and lives without having ever asked the affected parties about what they, y’know, NEED.

So yeah. There’s a fear of inequality in the application of technological innovation… Because there’s a history of inequality in the application of technological innovation!

This isn’t some “well aren’t all the disciplines equally at fault here,” pseudo-Kumbaya false equivalence bullshit. There are neoliberal underpinnings in the tech industry that are basically there to fuck people over. “What the market will bear” is code for, “How much can we screw people before there’s backlash? Okay so screw them exactly that much.” This model has no regard for the preexisting systemic inequalities between our communities, and even less for the idea that it (the model) will both replicate and iterate upon those inequalities. That’s what needs to be addressed, here.

Check out this piece over at Killscreen. We’ve talked about this before—about how we’re constantly being sold that we’re aiming for a post-work economy, where the internet of things and self-driving cars and the sharing economy will free us all from the mundaneness of “jobs,” all while we’re simultaneously being asked to ignore that our trajectory is gonna take us straight through and possibly land us square in a post-Worker economy, first.

Never mind that we’re still gonna expect those ex-workers to (somehow) continue to pay into capitalism, all the while.

If, for instance, either Uber’s plan for a driverless fleet or the subsequent backlash from their stable (I mean “drivers”) is shocking to you, then you have managed to successfully ignore this trajectory.


Disciplines like psychology and sociology and history and philosophy? They’re already grappling with the fears of the ones most likely to suffer said inequality, and they’re quite clear on the fact that, the ones who have so often been fucked over?

Yeah, their fears are valid.

You want to use technology to disrupt the status quo in a way that actually helps people? Here’s one example of how you do it: “Creator of chatbot that beat 160,000 parking fines now tackling homelessness.”

Until then, let’s talk about constructing a world in which we address the needs of those marginalised. Let’s talk about magick and safe spaces.


-Squaring the Circle-

Speaking of CFPs, several weeks back, I got one for a special issue of Philosophy and Technology on “Logic As Technology,” and it made me realise that Analytic Philosophy somehow hasn’t yet understood and internalised that its wholly invented language is a technology

…and then that realisation made me realise that Analytic Philosophy hasn’t understood that language as a whole is a Technology.

And this is something we’ve talked about before, right? Language as a technology, but not just any technology. It’s the foundational technology. It’s the technology on which all others are based. It’s the most efficient way we have to cram thoughts into the minds of others, share concept structures, and make the world appear and behave the way we want it to. The more languages we know, the more ways we have to do all of that, right?

We can string two or more knowns together in just the right way, and create a third, fourth, fifth known. We can create new things in the world, wholecloth, as a result of new words we make up or old words we deploy in new ways. We can make each other think and feel and believe and do things, with words, tone, stance, knowing looks. And this is because Language is, at a fundamental level, the oldest magic we have.


Scene from INJECTION issue #3, by Warren Ellis, Declan Shalvey, and Jordie Bellaire. ©Warren Ellis & Declan Shalvey.

Lewis Carroll tells us that whatever we tell each other three times is true, and many have noted that lies travel far faster than the truth, and at the crux of these truisms—the pivot point, where the power and leverage are—is Politics.

This week, much hay is being made about the University of Chicago’s letter decrying Safe Spaces and Trigger Warnings. Ignoring for the moment that every definition of “safe space” and “trigger warning” put forward by their opponents tends to be a straw man of those terms, let’s just make an attempt to understand where they come from, and how we can situate them.

Trauma counseling and trauma studies are where safe spaces and trigger warnings come from, and for the latter, the lineage is damn near axiomatic. Triggers are about trauma. But safe space language has far more granularity than that. Microaggressions are certainly damaging, but they aren’t on the same level as acute traumas. Where acute traumas are like gunshots or bomb blasts (and may indeed be those actual things), societal microaggressions are more like a slow, constant siege. But we still need the language of safe spaces to discuss them: said space is something like a bunker in which to regroup, reassess, and plan for what comes next.

Now it is important to remember that there is a very big difference between “safe” and “comfortable,” and when laying out the idea of safe spaces, every social scientist I know takes great care to outline that difference.

Education is about stretching ourselves, growing and changing, and that is discomfort almost by definition. I let my students know that they will be uncomfortable in my class, because I will be challenging every assumption they have. But discomfort does not mean I’m going to countenance racism or transphobia or any other kind of bigotry.

Because the world is not a safe space, but WE CAN MAKE IT SAFER for people who are microaggressed against, marginalised, assaulted, and killed for their lived identities, not only by letting them know how to work to change it, but by SHOWING them through our example.

Like we’ve said, before: No, the world’s not safe, kind, or fair, and with that attitude it never will be.

So here’s the thing, and we’ll lay it out point-by-point:

A Safe Space is any realm that is marked out for the nonjudgmental expression of thoughts and feelings, in the interest of honestly assessing and working through them.

“Safe Space” can mean many things, from “Safe FROM Racist/Sexist/Homophobic/Transphobic/Fatphobic/Ableist Microaggressions” to “safe FOR the thorough exploration of our biases and preconceptions.” The terms of the safe space are negotiated at the marking out of the space.

The terms are mutually agreed upon by all parties. The only imposition would be to be open to the process of expressing and thinking through oppressive conceptual structures.

Everything else—such as whether to address those structures as they exist in ourselves (internalised oppressions), in others (aggressions, micro- or regular sized), or both and their intersection—is negotiable.

The marking out of a Safe Space performs the necessary function, at the necessary time, defined via the particular arrangement of stakeholders, mindset, and need.

And, as researcher John Flowers notes, anyone who’s ever been in a Dojo has been in a Safe Space.

From a Religious Studies perspective, defining a safe space is essentially the same process as that of marking out a RITUAL space. For students or practitioners of any form of Magic[k], think Drawing a Circle, or Calling the Corners.

Some may balk at the analogy to the occult, thinking that it cheapens something important about our discourse, but look: Here’s another way we know that magick is alive and well in our everyday lives:

If they could, a not-insignificant number of US Republicans would overturn the Affordable Care Act and rally behind a Republican-crafted replacement (RCR). However, because the ACA has done so very much good for so many, it’s likely that the only RCR that would have enough support to pass would be one that looked almost identical to the ACA. The only material difference would be that it didn’t have President Obama’s name on it—which is to say, it wouldn’t be associated with him, anymore, since his name isn’t actually on the ACA.

The only reason people think of the ACA as “Obamacare” is because US Republicans worked so hard to make that name stick, and now that it has been widely considered a triumph, they’ve been working just as hard to get his name away from it. And if they did manage to achieve that, it would only be true due to some arcane ritual bullshit. And yet…

If they managed it, it would be touted as a “Crushing defeat for President Obama’s signature legislation.” It would have lasting impacts on the world. People would be emboldened, others defeated, and new laws, social rules, and behaviours would be undertaken, all because someone’s name got removed from a thing in just the right way.

And that’s Magick.

The work we do in thinking about the future sometimes requires us to think about things from what stuffy assholes in the 19th century liked to call a “primitive” perspective. They believed in a kind of evolutionary anthropological categorization of human belief, one in which all societies move from “primitive” beliefs like magic through moderate belief in religion, all the way to sainted, perfect, rational science. In contemporary Religious Studies, this evolutionary model is widely understood to be bullshit.

We still believe in magic, we just call it different things. The concept structures of sympathy and contagion are still at play, here, the ritual formulae of word and tone and emotion and gesture all still work when you call them political strategy and marketing and branding. They’re all still ritual constructions designed to make you think and behave differently. They’re all still causing spooky action at a distance. They’re still magic.

The world still moves on communicated concept structure. It still turns on the dissemination of the will. If I can make you perceive what I want you to perceive, believe what I want you to believe, move how I want you to move, then you’ll remake the world, for me, if I get it right. And I know that you want to get it right. So you have to be willing to understand that this is magic.

It’s not rationalism.

It’s not scientism.

It’s not as simple as psychology or poll numbers or fear or hatred or aspirational belief causing people to vote against their interests. It’s not that simple at all. It’s as complicated as all of them, together, each part resonating with the others to create a vastly complex whole. It’s a living, breathing thing that makes us think not just “this is a thing we think” but “this is what we are.” And if you can do that—if you can accept the tools and the principles of magic, deploy the symbolic resonance of dreamlogic and ritual—then you might be able to pull this off.

But, in the West, part of us will always balk at the idea that the Rational won’t win out. That the clearer, more logical thought doesn’t always save us. But you have to remember: Logic is a technology. Logic is a tool. Logic is the application of one specific kind of thinking, over and over again, showing a kind of result that we convinced one another we preferred to other processes. It’s not inscribed on the atoms of the universe. It is one kind of language. And it may not be the one most appropriate for the task at hand.

Put it this way: When you’re in Zimbabwe, will you default to speaking Chinese? Of course not. So why would we default to mere Rationalism, when we’re clearly in a land that speaks a different dialect?

We need spells and amulets, charms and warded spaces; we need sorcerers of the people to heal and undo the hexes being woven around us all.


-Curious Alchemy-

Ultimately, the rigidity of our thinking and our inability to adapt have led us to be surprised by too much that we wanted to believe could never have come to pass. We want to call all of this “unprecedented,” when the truth of the matter is that we carved this precedent out every day for hundreds of years, and the ability to think in weird paths is what will define those who thrive.

If we are going to do the work of creating a world in which we understand what’s going on, and can do the work to attend to it, then we need to think about magic.


If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar

Here’s the direct link to my paper ‘The Metaphysical Cyborg’ from Laval Virtual 2013. Here’s the abstract:

“In this brief essay, we discuss the nature of the kinds of conceptual changes which will be necessary to bridge the divide between humanity and machine intelligences. From cultural shifts to biotechnological integration, the project of accepting robotic agents into our lives has not been an easy one, and more changes will be required before the majority of human societies are willing and able to allow for the reality of truly robust machine intelligences operating within our daily lives. Here we discuss a number of the questions, hurdles, challenges, and potential pitfalls to this project, including examples from popular media which will allow us to better grasp the effects of these concepts in the general populace.”

The link will only work from this page or the CV page, so if you find yourself inclined to spread this around, use this link. Hope you enjoy it.

So this past Saturday, a surrogate speaker for the Republican nominee for President of the United States spoke on CNN about how it was somehow a problem that the Democratic Nominee for Vice President of the United States spoke in Spanish, in his first speech addressing the nation as the Dem VP Nom.

She—the Republican Surrogate—then made a really racist reference to “Dora The Explorer.”

Now, we should note that Spanish is the first or immediate second language of 52.6 million people in the United States of America, and it is spoken by approximately 427 million people in the entire world, making it the second most widely spoken language, after Chinese (English is 3rd). So even if it weren’t just politically a good idea to address between 52.6 and 427 million people in a way that made them comfortable (and it is), there’s something weird about how…offended people get by being “made to” hear another language. There’s something of the “I don’t understand it, and I never want to have to!” in there that just baffles me. Something that we seem to read as a threat, when we observe others communicating by means in which we aren’t fluent. Rather than take it as a chance to open ourselves up, and learn something new, or to recognise that, for some of us, the ability to speak candidly in a native language is the only personal space to be had, within a particular society—rather than any of that, we get scared and feel excluded, and take offense.

When perhaps we should recognise that that exclusion and fear is something felt by precisely the same people we shout at to “Learn the Damn Language.”

But let’s set that aside, for a second, and talk about why it’s good to learn other languages. Studies have shown that the more languages we speak, the more conceptual structures we create in our minds, and this goes for everything from Spanish to Sign Language to Math to Coding to the way someone with whom we’re intimate expresses their emotionality. Any time we learn a new way to communicate perceptions and ideas and needs and desires, we create whole new ways of thinking and functioning, in ourselves. Those aforementioned conceptual structures then mean that we’ll be in a better position to understand and be understood by people who aren’t just exactly like us. Politically, the benefits of this should be obvious, in terms of diplomacy and opportunities to craft coalitions of peace, but even simpler than that is the fact that, through new languages, we provide ourselves and others a wider array of potential connections and intersections, in the world we all share.

And if that doesn’t strike us all as a VERY GOOD THING, then I don’t know what the hell else to say.

Let’s just make it real simple: Understanding Each Other Is Good.

To that end, we have to remember that understanding doesn’t just mean that we make everyone speak and behave exactly the way we want them to. Understanding means a mutual reaching-toward, when possible, and it means those of us who can expend the extra effort doing so, especially when another might not be able to, at all.

There’s really not much else to it.

[Originally Published at Eris Magazine]

So Gabriel Roberts asked me to write something about police brutality, and I told him I needed a few days to get my head in order. The problem being that, with this particular topic, the longer I wait on this, the longer I want to wait on this, until, eventually, the avoidance becomes easier than the approach by several orders of magnitude.

Part of this is that I’m trying to think of something new worth saying, because I’ve already talked about these conditions, over at A Future Worth Thinking About. We talked about this in “On The Invisible Architecture of Bias,” “Any Sufficiently Advanced Police State…,” “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences,” and most recently in “On the European Union’s “Electronic Personhood” Proposal.” In these articles, I briefly outlined the history of systemic bias within many human social structures, and the possibility and likelihood of that bias translating into our technological advancements, such as algorithmic learning systems, use of and even access to police body camera footage, and the development of so-called artificial intelligence.

Long story short: the endemic nature of implicit bias in society as a whole, plus the even more insular Us-Vs-Them mentality within the American prosecutorial legal system, plus the fact that American policing was literally born out of slavery and the work of groups like the KKK, equals a series of interlocking systems in which people who are not whitepassing, not male-perceived, not straight-coded, not “able-bodied” (what we can call white supremacist, ableist, heteronormative, patriarchal hegemony, but we’ll just use the acronym WSAHPH, because it satisfyingly recalls that bro-ish beer advertising campaign from the late ’90s and early 2000s) stand a far higher likelihood of dying at the hands of agents of that system.

Here’s a quote from Sara Ahmed in her book The Cultural Politics of Emotion, which neatly sums this up: “[S]ome bodies are ‘in an instant’ judged as suspicious, or as dangerous, as objects to be feared, a judgment that can have lethal consequences. There can be nothing more dangerous to a body than the social agreement that that body is dangerous.”

At the end of this piece, I’ve provided some of the same list of links that sits at the end of “On The Invisible Architecture of Bias,” just to make it that little bit easier for us to find actual evidence of what we’re talking about, here, but, for now, let’s focus on these:

A Brief History of Slavery and the Origins of American Policing
2006 FBI Report on the infiltration of Law Enforcement Agencies by White Supremacist Groups
June 20, 2016 “Texas Officers Fired for Membership in KKK”

And then we’ll segue to the fact that we are, right now, living through the exemplary problem of the surveillance state. We’ve always been told that cameras everywhere will make us all safer, that they’ll let people know what’s going on and that they’ll help us all. People doubted this, even in Orwell’s day, noting that the more surveilled we are, the less freedom we have, but more recently people have started to hail this from the other side: Maybe videographic oversight won’t help the police help us, but maybe it will help keep us safe from the police.

But the sad fact of the matter is that there’s video of Alton Sterling being shot to death while restrained, and video of John Crawford III being shot to death by a police officer while holding a toy gun down at his side in a big box store where it was sold, and there’s video of Alva Braziel being shot to death while turning around with his hands up as he was commanded to do by officers, of Eric Garner being choked to death, of Delrawn Small being shot to death by an off-duty cop who cut him off in traffic. There’s video of so damn many deaths, and nothing has come of most of them. There is video evidence showing that these people were well within their rights, and in lawful compliance with officers’ wishes, and they were all shot to death anyway, in some cases by people who hadn’t even announced themselves as cops, let alone ones under some kind of perceived threat.

The surveillance state has not made us any safer, it’s simply caused us to be confronted with the horror of our brutality. And I’d say it’s no more than we deserve, except that even with protests and retaliatory actions, and escalations to civilian drone strikes, and even Newt fucking Gingrich being able to articulate the horrors of police brutality, most of those officers are still on the force. Many unconnected persons have been fired, for indelicate pronouncements and even white supremacist ties, but how many more are still on the force? How many racist, hateful, ignorant people are literally waiting for their chance to shoot a black person because he “resisted” or “threatened?” Or just plain disrespected. And all of that is just what happened to those people. What’s distressing is that those much more likely to receive punishment, however unofficial, are the ones who filmed these interactions and provided us records of these horrors, to begin with. Here, from Ben Norton at Salon.com, is a list of what happened to some of the people who have filmed police killings of non-police:

Police have been accused of cracking down on civilians who film these shootings.

Ramsey Orta, who filmed an NYPD cop putting unarmed black father Eric Garner in a chokehold and killing him, says he has been constantly harassed by police, and now faces four years in prison on drugs and weapons charges. Orta is the only one connected to the Garner killing who has gone to jail.

Chris LeDay, the Georgia man who first posted a video of the police shooting of Alton Sterling, also says he was detained by police the next day on false charges that he believes were a form of retaliation.

Early media reports on the shooting of Small uncritically repeated the police’s version of the incident, before video exposed it to be false.

Wareham noted that the surveillance footage shows “the cold-blooded nature of what happened, and that the cop’s attitude was, ‘This was nothing more than if I had stepped on an ant.'”

As we said, above, black bodies are seen as inherently dangerous and inhuman. This perception is trained into officers at an unconscious level, and is continually reinforced throughout our culture. Studies like the Implicit Association Test, this survey of U.Va. medical students, and this one of shooter bias all clearly show that people are more likely to a) associate words relating to evil and inhumanity with, b) believe pain receptors work in a fundamentally different fashion within, and c) shoot more readily at bodies that do not fit within WSAHPH. To put that a little more plainly, people have a higher tendency to think of non-WSAHPH bodies as fundamentally inhuman.

And yes, as we discussed, in the plurality of those AFWTA links, above, there absolutely is a danger of our passing these biases along not just to our younger human selves, but to our technology. In fact, as I’ve been saying often, now, the danger is higher, there, because we still somehow have a tendency to think of our technology as value-neutral. We think of our code and (less these days) our design as some kind of fundamentally objective process, whereby the world is reduced to lines of logic and math, and that simply is not the case. Codes are languages, and languages describe the world as the speaker experiences it. When we code, we are translating our human experience, with all of its flaws, biases, perceptual glitches, errors, and embellishments, into a technological setting. It is no wonder, then, that the algorithmic systems we use to determine the likelihood of convict recidivism and thus their bail and sentencing recommendations are seen to exhibit the same kind of racially-biased decision-making as the humans they learned from. How could this possibly be a surprise? We built these systems, and we trained them. They will, in some fundamental way, reflect us. And, at the moment, not much terrifies me more than that.
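The mechanism is easy to demonstrate at toy scale. The sketch below is entirely invented (synthetic data, not any real sentencing or recidivism system): two groups with identical underlying behaviour, historical labels that flag one group more often, and a model that never sees group membership at all. The bias survives anyway, because it leaks in through a correlated proxy feature.

```python
import random
random.seed(0)

# Toy "historical" data: groups A and B have IDENTICAL underlying
# behaviour, but group B was historically flagged at a higher rate
# (label bias). `zip_code` is a proxy: A mostly lives in zip 1, B in zip 2.
def make_record(group):
    zip_code = 1 if (group == "A") == (random.random() < 0.9) else 2
    reoffended = random.random() < 0.3          # same true rate for both
    flagged = reoffended or (group == "B" and random.random() < 0.4)
    return {"zip": zip_code, "flagged": flagged}

history = [make_record(g) for g in "AB" * 5000]

# "Training": learn the historical flag rate per zip code.
# Note that group is never an input to the model.
rates = {}
for z in (1, 2):
    rows = [r for r in history if r["zip"] == z]
    rates[z] = sum(r["flagged"] for r in rows) / len(rows)

def predict_high_risk(zip_code):
    return rates[zip_code] > 0.5

print(rates)  # zip 2 (mostly group B) inherits the biased flag rate
```

Swap in a real learner and real features and the structure of the failure is the same: “we never used race as an input” is no defense when the training labels already encode the bias.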

Last week saw the use of a police bomb squad robot to kill an active shooter. Put another way, we carried out a drone strike on a civilian in Dallas, because we “saw no other option.” So that’s in the Overton Window, now. And the fact that it was in response to a shooter who was targeting any and all cops as a mechanism of retribution against police brutality and violence against non-WSAHPH bodies means that we have thus increased the divisions between those of us who would say that anti-police-overreach stances can be held without hating the police themselves and those of us who think that any perceived attack on authorities is a real, existential threat, and thus deserving of immediate destruction. How long do we really think it’s going to be until someone with hate in their heart says to themselves, “Well if drones are on the table…” and straps a pipebomb to a quadcopter? I’m frankly shocked it hasn’t happened yet, and this line from the Atlantic article about the incident tells me that we need to have another conversation about normalization and depersonalization, right now, before it does:

“Because there was an imminent threat to officers, the decision to use lethal force was likely reasonable, while the weapon used was immaterial.”

Because if we keep this arms race up among civilian populations—and the police are still civilians, which literally means that they are not military, regardless of how good we all are at forgetting that—then it’s only a matter of time before the overlap between weapons systems and autonomous systems comes home.

And as always—but most especially in the wake of this week and the still-unclear events of today—if we can’t sustain a nuanced investigation of the actual meaning of nonviolence in the Reverend Doctor Martin Luther King, Jr.’s philosophy, then now is a good time to keep his name and words out our mouths.

Violence isn’t only dynamic physical harm. Hunger is violence. Poverty is violence. Systemic oppression is violence. All of the invisible, interlocking structures that sustain disproportionate Power-Over at the cost of some person or persons’ dignity are violence.

Nonviolence means a recognition of these things and our places within them.

Nonviolence means using all of our resources in sustained battle against these systems of violence.

Nonviolence means struggle against the symptoms and diseases killing us all, both piecemeal, and all at once.



A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.

In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you’d think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what they’re doing, here, and are creating a situation that is going to have repercussions they have nowhere near planned for.

Now, I will say that, looking at the actual Draft, it reads like something with which I’d be more likely to be on board. Reuters did no favours whatsoever for the level of nuance in this proposal. But that being said, the focus of this draft proposal seems to be entirely on liability and holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still widely misses many of the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with Human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from a PopSci.com article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some predetermined vague notion of humanness. If the people in the poetry contest hadn’t been primed to assume that one of them was from a computer, how would they have rated them? What if they were all from a computer, but were told to expect only half? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth‘s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that in order to even let us dig into these potentially deeply-biased algorithms, here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

Episode 10: Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990’s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:

Until Next Time.

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise Dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. All that being said, this was an informal lecture version of an in-process paper, of which there’ll be a much better version, soon, but the content of the piece is… timely, I felt. So I hope you enjoy it.

Until Next Time.

This work originally appears as “Go Upgrade Yourself,” in the edited volume Futurama and Philosophy. It was originally titled

The Upgrading of Hermes Conrad

So, you’re tired of your squishy meatsack of a body, eh? Ready for the next level of sweet biomechanical upgrades? Well, you’re in luck! The world of Futurama has the finest in back-alley and mad-scientist-based bio-augmentation surgeons, ready and waiting to hear from you! From a fresh set of gills, to a brand new chest-harpoon, and beyond, Yuri the Shady Parts Dealer and Professor Hubert J. Farnsworth are here to supply all of your upgrading needs—“You give lungs now; gills be here in two weeks!” Just so long as, whatever you do, you stay away from legitimate hospitals. The kinds of procedures you’re looking to get done…well, let’s just say they’re still frowned upon in the 31st century; and why shouldn’t they be? The woeful tale of Hermes Conrad illustrates exactly what’s at stake if you choose to pursue your biomechanical dreams.


The Six Million Dollar Mon

Our tale begins with season seven’s episode “The Six Million Dollar Mon,” in which Hermes Conrad, Grade 36 Bureaucrat (Extraordinaire), comes to the conclusion that he should be fired, since his bureaucratic performance reviews are the main drain on his beloved Planet Express Shipping Company. After being replaced with robo-bureaucrat Mark 7-G (Mark Sevengy?), Hermes enjoys some delicious spicy curried goat and goes out for an evening stroll with his lovely wife LaBarbara. While on their walk, Roberto, the knife-wielding maniac, long of our acquaintance, confronts them and demands the human couple’s skin for his culinary delight! As Hermes cowers behind his wife in fear, suddenly a savior arrives! URL, the Robot Police Officer, reels Roberto in with his magnificent chest-harpoon! Watching the cops take Roberto to the electromagnetic chair, and lamenting his uselessness in a dangerous situation, Hermes makes a decision: he’ll get Bender to take him to one of the many shady, underground surgeons he knows, so he can become “less inferior to today’s modern machinery.” Enter: Yuri, Professional Shady-Deal-Maker.

Hermes’ first upgrade is to get a chest-harpoon, like the one URL has. With his new enhancement, he proves his worth to the crew by getting a box off of the top shelf, which is too high for Mark 7-G. With this feat he wins back his position with the company, but as soon as things get back to normal the Professor drops his false teeth down the Dispose-All. No big deal, right? Just get Scruffy to retrieve it. Unfortunately, Scruffy responds that a sink “t’ain’t a berler nor a terlet,” effectively refusing to retrieve the Professor’s teeth. Hermes resigns himself to grabbing his hand tools, when Bender steps in, saying, “Hand tools? Why don’t you just get an extendo-arm, like me?” Whereupon, he reaches across the room and pulls the Professor’s false teeth out of the drain—and immediately drops them back in. Hermes objects, saying that he doesn’t need any more upgrades—after all, he doesn’t want to end up a cold, emotionless robot, like Bender! Just then, Mark 7-G pipes up with, “Maybe I should get an extendo-arm,” and Hermes narrows his eyes in hatred. Re-enter: Yuri.

New extendo-arm acquired, the Professor’s teeth retrieved, and the old arm given to Zoidberg, who’s been asking for all of Hermes’s discarded parts, Hermes is, again, a hero to his coworkers. Later, as he lies in bed reading with his wife, LaBarbara questions his motives for his continual upgrades. He assures her that he’s done getting upgrades. However, his promise is short-lived. After shattering his glasses with his new super-strong mechanical arm, he rushes out to get a new Cylon eye. LaBarbara is now extremely worried, but Hermes soothes her, and they settle in for some “Marital Relations…”, at which point she finds that he’s had something else upgraded, too. She yells at him, “Some tings shouldn’t be Cylon-ed!” (which, in all honesty, could be taken as the moral of the episode), and breaks off contact. What follows is a montage of Hermes encountering trivial difficulties in his daily life, and upgrading himself to overcome them. Rather than learning and working to improve himself, he continually replaces all of his parts, until he achieves a Full Body Upgrade. He still has a human brain, but that doesn’t matter: he’s changed. He doesn’t relate to his friends and family in the same way, and they’ve all noticed, especially Zoidberg.

All this time, however, Dr. John Zoidberg saved the trimmings from his friend’s constant upgrades, and has used them to make a meat-puppet, which he calls “Li’l Hermes.” Oh, and they’re a ventriloquist act. Anyway, after seeing their act, Hermes—or Mecha-Hermes, as he now prefers—is filled with loathing; loathing for the fact that his brain is still human, that is, until…! Re-re-enter…, no, not Yuri; because even Shady-Deals Yuri has his limits. He says that “No one in their right mind would do such a thing.” Enter: The Professor, who is, of course, more than happy—or perhaps, “maniacally gleeful”—to help. So, with Bender’s assistance (because everything robot-related in the Futurama universe has to involve Bender, I guess), they set off to the Robot Cemetery to exhume the most recently buried robot they can find, and make off with its brain-chip. In their haste to have the deed done, they don’t bother to check the name of whose grave it is they’re desecrating. As you might have guessed, it’s Roberto—“3001-3012: Beloved Killer and Maniac.”

In the course of the operation, LaBarbara makes an impassioned plea, and it causes the Professor to stop and rethink his actions—because Hermes might have “litigious survivors.” Suddenly, to everyone’s surprise, Zoidberg steps up and offers to perform this final operation, the one which will seemingly remove any traces of the Hermes he’s known and loved! Agreeing with Mecha-Hermes that claws will be far too clumsy for this delicate brain surgery, Zoidberg dons Li’l Hermes, and uses the puppet’s hands to do the deed. While all of this is underway, Zoidberg sings to everyone the explanation for why he would help his friend lose himself this way, all to the slightly heavy-handed tune of “Monster Mash.” Finally, the human brain removed, the robot brain implanted, and Zoidberg’s song coming to a close, the doctor reveals his final plan…By putting Hermes’s human brain into Li’l Hermes, Hermes is back! Of course, the whole operation having been a success, so is Roberto, but that’s somebody else’s problem.

We could spend the rest of our time discussing Zoidberg’s self-harmonization, but I’ll leave that for you to experiment with. Instead, let’s look closer at human bio-enhancement. To do this we’ll need to go back to the beginning. No, not the beginning of the episode, or even the Beginning of Futurama itself; No, we need to go back to the beginning of bio-enhancement—and specifically the field of cybernetics—as a whole.


“More Human Than Human” Is Our Motto

In 1960, at the outset of the Space Race, Manfred Clynes and Nathan S. Kline wrote an article for the September issue of Astronautics called “Cyborgs and Space.” In this article, they coined the term “cyborg” as a portmanteau of the phrase “Cybernetic Organism,” that is, a living creature with the ability to adapt its body to its environment. Clynes and Kline believed that if humans were ever going to go far out into space, they would have to become the kinds of creatures that could survive the vacuum of space as well as harsh, hostile planets. Now, for all its late-1990s Millennial fervor, Futurama has a deep undercurrent of love for the dream and promise (and fashion) of space exploration, as it was presented in the 1950s, 60s, and 70s. All you need to do in order to see this is remember Fry’s wonder and joy at being on the actual moon and seeing the Apollo Lunar Lander. If this is the case, why, within Futurama’s 31st Century, is there such a deep distrust of anything approaching altered human physical features? Well, looking at it, we may find it has something to do with the fact that ever since we dreamed of augmenting humans, we’ve had nightmares that any alterations would thereby make us less human.

“The Six Million Dollar Mon,” episode seven of season seven, contains within it clear references to the history of science fiction, including one of the classic tales of human augmentation, and creating new life: Mary Shelley’s Frankenstein. In going to the Robot Cemetery in the dead of night for spare parts, accidentally obtaining a murderer’s brain, and especially that bit with the skylight in the Professor’s laboratory, the entire third act of this episode serves as homage to Shelley’s book and its most memorable adaptations. In doing this, the Futurama crew puts conceptual pressure on what many of us have long believed: that created life is somehow “wrong” and that augmenting humans will make them somehow “less themselves.” Something about the biological is linked in our minds to the idea of the self—that is, it’s the warm squishy bits that make us who we are.

Think about it: If you build a person out of murderers, of course they’re going to be a murderer. If you replace every biological part of a human, then of course they won’t be their normal human selves, anymore; they’ll have become something entirely different, by definition. If your body isn’t yours, anymore, then how could you possibly be “you,” anymore? This should be all the more true when what’s being used to replace your bits is a different substance and material than you used to be. When that new “you” is metal rather than flesh, it seems that what it used to mean to be “you” is gone, and something new shall have appeared. This makes so much sense to us on a basic level that it seems silly to spell it out even this much, but what if we modify our scenario a little bit, and take another look?


The Ship of Planet Express

 What if, instead of feeling inferior to URL, Hermes had been injured and, in the course of his treatment, was given the choice between a brand new set of biological giblets (or a whole new body, as happened in the Bender’s Big Score storyline), or the chest-harpoon upgrade? Either way, we’re replacing what was lost with something new, right? So, why do many of us see the biological replacement as “more real?” Try this example: One day, on a routine delivery, the Planet Express Ship is damaged and repairs must be made. Specifically, the whole tail fin has to be replaced with a new, better fin. Once this is done, is it still the Planet Express ship? What if, next, we have to replace the dark matter engines with better engines? Is it still the Planet Express ship? Now, Leela’s chair is busted up, so we need to get her a new one. It also needs new bolts, so, while we’re at it, let’s just replace all of the bolts in the ship. Then the walls get dented, and the bunks are rusty, and the floors are buckled, and Scruffy’s mop… and so, over many years, the result is that no part of the Planet Express ship is “original,” oh, and we also have to get new, better paint, because the old paint is peeled away, plus, this all-new stuff needs painting. So, what do we think? Is this still the same Planet Express ship as it was in the first episode of Futurama? And, if so, then why do we think of a repaired and augmented human as “not being themselves?”
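The Planet Express version of the paradox is easy enough to act out in code. This is just an illustrative sketch (the `Ship` class and its part names are my own invention, not anything from the show): Python's notion of object identity persists even after every single part has been swapped.

```python
# A hedged sketch of the Ship of Theseus: we replace every part of the
# ship, one at a time, and then ask in what sense it is "the same" ship.

class Ship:
    def __init__(self, parts):
        self.parts = dict(parts)

planet_express = Ship({"tail fin": "original", "engine": "original",
                       "chair": "original", "bolts": "original",
                       "paint": "original"})
original_id = id(planet_express)

# Over "many years," every part gets swapped for a new, better one.
for part in list(planet_express.parts):
    planet_express.parts[part] = "replacement"

# No part is original...
assert all(v == "replacement" for v in planet_express.parts.values())
# ...and yet, by identity, it is still "the same" object we started with.
assert id(planet_express) == original_id
```

Which of those two assertions you take to settle the question of "sameness" is, of course, exactly the philosophical dispute, not a fact the code can decide for you.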

All of this may sound a little far-fetched, but remember the conventional wisdom that at the end of every seven-year cycle, all of the cells in your body have died and been replaced. Now, this isn’t quite true, as some cells don’t die easily, and some of those don’t regenerate when they do die, but as a useful shorthand, this gives something to think about. Due to the metabolizing of elements and their distribution through your body, it is ultimately more likely that you are currently made of astronomically many more new atoms than you are made of the atoms with which you were born. And really, that’s just math. Are you the same size as you were when you were born? Where do you think that extra mass came from? So, you are made of more and new atomic stuff over your lifetime; are you still you? These questions belong to what is generally known as “The Ship of Theseus” family of paradoxes, examples of which can be found pretty much everywhere.

The ultimate question the Ship of Theseus poses is one of identity, and specifically, “What makes a thing itself?” and, “At what point or through what means of alteration is a thing no longer itself?” Some schools of thought hold that it’s not what a thing is made of, but what it does that determines what it is. These philosophical groups are known as the behaviorists and the functionalists, and the latter believes that if a body or a mind goes through the “right kind” of process, then it can be termed as being the same as the original. That is, if I get a mechanical heart and what it does is keep blood pumping through my body, then it is my heart. Maybe it isn’t the heart I was born with, but it is my heart. And this seems to make sense to us, too. My new heart does the job my original cells were intending to do, but it does that job better than they could, and for longer; it works better, and I’m better because of it. But there seems to be something about that “Better” which throws us off, something about the line between therapeutic technology and voluntary augmentation.
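The functionalist point about my mechanical heart can be put in programmer's terms: identity tracks the role performed, not the substrate performing it. The classes below are a hypothetical sketch of that view, not anyone's formal definition:

```python
# A minimal functionalist sketch (hypothetical classes): two "hearts" made
# of entirely different stuff, but satisfying the same functional role.

class BiologicalHeart:
    def pump(self, blood):
        return f"circulating {blood}"

class MechanicalHeart:
    def pump(self, blood):
        # Same function as the biological version, different substrate.
        return f"circulating {blood}"

def keep_alive(heart):
    # The body only cares that *something* performs the pumping role.
    return heart.pump("blood")

# On the functionalist view, these are interchangeable: each one is
# "my heart" because each does what a heart does.
assert keep_alive(BiologicalHeart()) == keep_alive(MechanicalHeart())
```

This is, not coincidentally, just duck typing: `keep_alive` never asks what the heart is made of, only whether it can `pump`.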

When we are faced with the necessity of a repair, we are willing to accept that our new parts will be different than our old ones. In fact, we accept it so readily that we don’t even think about them as new parts. What Hermes does, however, is voluntary; he doesn’t “need” a chest-harpoon, but he wants one, and so he upgrades himself. And therein lies the crux of our dilemma: When we’re acutely aware of the process of upgrading, or repairing, or augmenting ourselves past a baseline of “Human,” we become uncomfortable, made to face the paradox of our connection to an idea of a permanent body that is in actuality constantly changing. Take for instance the question of steroidal injection. As a medical technology, there are times when we are more than happy to accept the use of steroids, as it will save a life, and allow people to live as “normal” human beings. Sufferers of asthma and certain types of infection literally need steroids to live. In other instances, however, we find ourselves abhorring the use of steroids, as it gives the user an “unfair advantage.” Baseball, football, the Olympics: all of these are arenas in which we look to the use of “enhancement technologies,” and we draw a line and say, “If you achieved the peak of physical perfection through a process, that is through hard work and sweat and training, then your achievement is valid. But if you skipped a step, if you make yourself something more than human, then you’ve cheated.”

This sense of “having cheated” can even be seen in the case of humans who would otherwise be designated as “handicapped.” Aimee Mullins is a runner, model, and public speaker who has talked about how losing her legs has, in effect, given her super powers.[1] By having the ability to change her height, her speed, or her physical appearance at will, she contends that she has a distinct advantage over anyone who does not have that capability. To this end, we can come to see that something about the nature of our selves actually is contained within our physical form because we’re literally incapable of being some things, until we can change who and what we are. And here, in one person, what started as a therapeutic replacement—an assistive medical technology—has seamlessly turned into an upgrade, but we seem to be okay with this. Why? Perhaps there is something inherent in the struggle of overcoming the loss of a limb or the suffering of an illness that allows us to feel as if the patient has paid their dues. Maybe if Hermes had been stabbed by Roberto, we wouldn’t begrudge him a chest-harpoon.

But this presents us with a serious problem, because now we can alter ourselves by altering our bodies, where previously we said that our bodies were not the “real us.” Now, we must consider what it is that we’re changing when we swap out new and different pieces of ourselves. This line of thinking matches up with schools of thought such as physicalism, which says that when we make a fundamental change to our physical composition, then we have changed who we are.


Is Your Mind Just a Giant Brain?

Briefly, the doctrine of mind-body dualism (MBD) does pretty much what it says on the package, in that adherents believe that the mind and the body are two distinct types of stuff. How and why they interact (or whether they do at all) varies from interpretation to interpretation, but on what’s known as René Descartes’s “interactionist” model, the mind is the real self, and the body is just there to do stuff. In this model, bodily events affect mental events, and vice versa, so what you think leads to what you do, and what you do can change how you think. This seems to make sense, until we begin to pick apart the question of why we need two different types of thing here. If the mind and the body affect each other, then how can the non-physical mind be the only real self? If it were the only real part of you, then nothing that happened to the physical shell should matter at all, because the mind, the “real” you, would remain untouched. These questions and more very quickly cause us to question the validity of the mind as our “real self,” leaving us trapped between the question of who we are and the question of why we’re made the way we’re made. What can we do? Enter: physicalism.

The physicalist picture says that mind-states are brain-states. There’s none of this “two kinds of stuff” nonsense. It’s all physical stuff, and it all interacts, because it’s all physical. When the chemical pathways in your brain change, you change. When you think new thoughts, it’s because something in your world and your environment has changed. All that you are is the physical components of your body and the world around you. Pretty simple, right? Well, not quite that simple. Because if this is the case, then why should we feel that anything emotional would be changed by upgrading ourselves? As long as we’re pumping the same signals to the same receivers, and getting the same kinds of responses, everything we love should still be loved by us. So, why do the physicalists still believe that changing what we are will change who we are?

Let’s take a deeper look at the implications of physicalism for our dear Mr. Conrad.

According to this picture, with the alteration or loss of his biological components and systems, Hermes should begin to lose himself, until, with the removal of his brain, he would no longer be himself at all. But why should this be true? According to our previous discussion of the functionalist and behaviorist forms of physicalism, if Hermes’s new parts are performing the same job, in the same way, as his old parts, just with a few new extras, then he shouldn’t be any different at all. To understand this, I have to admit that I wasn’t completely honest with you earlier: some physicalists believe that the integrity of the components and the systems that make up a thing are what makes that thing. Thus, if we change the physical components of the thing we’re studying, then we change the thing. So perhaps this picture is the right one, and the Futurama universe is a purely physicalist universe, after all.

On this view, what makes us who we are is precisely what we are. Our bits and pieces, cells, and chunks: these make us exactly the people we are, and so, if they change, then of course we will change. If our selves are dependent on our biology, then we are necessarily no longer ourselves when we remove that biology, regardless of whether the new technology does exactly the same job that the biology used to. And the argument seems to hold even if it had been a new, different set of human parts, rather than robot parts. In this particular physicalist view, it’s not just the stuff, but also the provenance of the individual parts that matters, and so changing the components changes us. As Hermes replaces part after part of his physical body, it becomes easier and easier for him to replace more parts, but he is still, in some sense, Hermes. He has the same motivations, the same thoughts, and the same memories, and so he is still Hermes, even if he’s changed. Right up until he swaps his brain, that is. And this makes perfect sense, because the brain is where the memories, thoughts, and motivations all reside. But, then…why aren’t more people with pacemakers cold and emotionless? Why is it that people with organs donated from serial killers don’t then turn into serial killers themselves, despite what movies would have us believe? If this picture of physicalism is the right one, then why are so many people still themselves after transplants? Perhaps it’s not any one of these views that holds the whole key; maybe it’s a blending of the three. This picture seems to suggest that while the bits and pieces of our physical body may change, and while that change may, in fact, change us, it is a combination of how, how quickly, and how many changes take place that will culminate in any eventual massive change in our selves.


Roswell That Ends Well

In the end, the versions of physicalism presented in the universe of Futurama seem almost to jibe with the intuitions we have about the nature of our own identity, and so, for the sake of Hermes Conrad, it seems like we should make the attempt to find some kind of understanding. When we see Hermes’s behaviour as he adds more and more new parts, we, as outside observers, have an urge to say “He’s not himself anymore,” but to Hermes, who has access to all of his reasoning and thought processes, his changes are merely who he is. It’s only when he’s shown himself from the outside, via Zoidberg putting his physical brain back into his biological body, that he sees who and what he has allowed himself to become, and how that might be terrifying to those who love him. Perhaps it is this continuance of memory, paired with the capacity for empathy, that makes us so susceptible to the twin traps of a permanent self and the terror of losing it.

Ultimately, everything we are is always in flux. With each new idea, each new experience, each new pound, and each new scar, we become more, and different, than we have ever been; but as we take our time and integrate these experiences into ourselves, they are not so alien to us, nor to those who love us. It is only when we make drastic changes to what we are that those around us are moved to question who we have become.

Oh, and one more thing: The “Ship of Theseus” story has a variant which I forgot to mention. In it, someone, perhaps a member of the original crew, comes along in another ship and picks up all the discarded, worn out pieces of Theseus’s ship, and uses them to build another, kind of decrepit ship. The stories don’t say what happens if and when Theseus finds out about this, or whether he gives chase to the surreptitious ship builder, but if he did, you can bet the latter party escapes with a cry of “Whooop-whoop-whoop-whoop-whoop-whoop!” on his mouth tendrils.



[1] Aimee Mullins, “It’s not fair having 12 pairs of legs,” TED Talk, 2009.