Here’s the direct link to my paper ‘The Metaphysical Cyborg’ from Laval Virtual 2013. Here’s the abstract:

“In this brief essay, we discuss the nature of the kinds of conceptual changes which will be necessary to bridge the divide between humanity and machine intelligences. From cultural shifts to biotechnological integration, the project of accepting robotic agents into our lives has not been an easy one, and more changes will be required before the majority of human societies are willing and able to allow for the reality of truly robust machine intelligences operating within our daily lives. Here we discuss a number of the questions, hurdles, challenges, and potential pitfalls to this project, including examples from popular media which will allow us to better grasp the effects of these concepts in the general populace.”

The link will only work from this page or the CV page, so if you find yourself inclined to spread this around, use this link. Hope you enjoy it.

So this past Saturday, a surrogate speaker for the Republican nominee for President of the United States spoke on CNN about how it was somehow a problem that the Democratic Nominee for Vice President of the United States spoke in Spanish, in his first speech addressing the nation as the Dem VP Nom.

She—the Republican Surrogate—then made a really racist reference to “Dora The Explorer.”

Now, we should note that Spanish is the first or immediate second language of 52.6 million people in the United States of America, and it is spoken by approximately 427 million people in the entire world, making it the second most widely spoken language by native speakers, after Mandarin Chinese (English is 3rd). So even if it weren’t just politically a good idea to address between 52.6 and 427 million people in a way that made them comfortable (and it is), there’s something weird about how…offended people get by being “made to” hear another language. There’s something of the “I don’t understand it, and I never want to have to!” in there that just baffles me. Something that we seem to read as a threat, when we observe others communicating by means in which we aren’t fluent. Rather than take it as a chance to open ourselves up, and learn something new, or to recognise that, for some of us, the ability to speak candidly in a native language is the only personal space to be had, within a particular society—rather than any of that, we get scared and feel excluded, and take offense.

When perhaps we should recognise that that exclusion and fear is something felt by precisely the same people we shout at to “Learn the Damn Language.”

But let’s set that aside, for a second, and talk about why it’s good to learn other languages. Studies have shown that the more languages we speak, the more conceptual structures we create in our minds, and this goes for everything from Spanish to Sign Language to Math to Coding to the way someone with whom we’re intimate expresses their emotionality. Any time we learn a new way to communicate perceptions and ideas and needs and desires, we create whole new ways of thinking and functioning, in ourselves. Those aforementioned conceptual structures then mean that we’ll be in a better position to understand and be understood by people who aren’t just exactly like us. Politically, the benefits of this should be obvious, in terms of diplomacy and opportunities to craft coalitions of peace, but even simpler than that is the fact that, through new languages, we provide ourselves and others a wider array of potential connections and intersections, in the world we all share.

And if that doesn’t strike us all as a VERY GOOD THING, then I don’t know what the hell else to say.

Let’s just make it real simple: Understanding Each Other Is Good.

To that end, we have to remember that understanding doesn’t just mean that we make everyone speak and behave exactly the way we want them to. Understanding means a mutual reaching-toward, when possible, and it means that those of us who can expend the extra effort should do so, especially when another might not be able to, at all.

There’s really not much else to it.

In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you might think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what they’re doing, here, and are creating a situation that is going to have repercussions they have nowhere near planned for.

Now, I will say that looking at the actual Draft, it reads like something with which I’d be more likely to be on board. Reuters did no favours whatsoever for the level of nuance in this proposal. But that being said, the focus of this draft proposal seems to be entirely on liability and holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still largely misses many of the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications, and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with Human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from a PopSci.com article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some predetermined vague notion of humanness. If the people in the poetry contest hadn’t been primed to assume that one of them was from a computer, how would they have rated them? What if they were all from a computer, but were told to expect only half? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth’s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’
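To make that a little more concrete, here is a deliberately toy sketch (every field name, school list, and cutoff in it is invented for illustration, not drawn from any real system) of how a coder’s unexamined assumptions become the machine’s rules:

```python
# A hypothetical resume-screening scorer, written only to illustrate how a
# coder's assumptions become a system's rules. Every field name, list, and
# threshold here is invented; this is not any real product's logic.

def score_candidate(candidate: dict) -> int:
    score = 0

    # Assumption 1: "good schools" produce good hires.
    # The list itself encodes whose education the author happens to value.
    prestigious_schools = {"Prestigious U", "Famous Tech Institute"}
    if candidate.get("school") in prestigious_schools:
        score += 10

    # Assumption 2: an unbroken work history signals reliability.
    # This quietly penalizes caregivers, immigrants, and the chronically ill.
    if candidate.get("employment_gap_months", 0) > 6:
        score -= 5

    # Assumption 3: referrals from current staff indicate "culture fit."
    # If past hiring was homogeneous, this rule just replicates that homogeneity.
    if candidate.get("referred_by_current_employee", False):
        score += 5

    return score


if __name__ == "__main__":
    applicant = {
        "school": "State College",
        "employment_gap_months": 9,
        "referred_by_current_employee": False,
    }
    # Scores low for reasons that have nothing to do with ability.
    print(score_candidate(applicant))
```

Nothing in that code announces itself as biased; it just faithfully reproduces the worldview of whoever wrote it, which is exactly the point.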

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that in order to even let us dig into these potentially deeply-biased algorithms, here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

Episode 10: Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990’s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:


Until Next Time.

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus.]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise Dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. All that being said, this was an informal lecture version of an in-process paper, of which there’ll be a much better version, soon, but the content of the piece is… timely, I felt. So I hope you enjoy it.

Until Next Time.



This work originally appears as “Go Upgrade Yourself,” in the edited volume Futurama and Philosophy. It was originally titled

The Upgrading of Hermes Conrad

So, you’re tired of your squishy meatsack of a body, eh? Ready for the next level of sweet biomechanical upgrades? Well, you’re in luck! The world of Futurama has the finest in back-alley and mad-scientist-based bio-augmentation surgeons, ready and waiting to hear from you! From a fresh set of gills, to a brand new chest-harpoon, and beyond, Yuri the Shady Parts Dealer and Professor Hubert J. Farnsworth are here to supply all of your upgrading needs—“You give lungs now; gills be here in two weeks!” Just so long as, whatever you do, you stay away from legitimate hospitals. The kinds of procedures you’re looking to get done…well, let’s just say they’re still frowned upon in the 31st century; and why shouldn’t they be? After all, the woeful tale of Hermes Conrad illustrates exactly what’s at stake if you choose to pursue your biomechanical dreams.

 

The Six Million Dollar Mon

Our tale begins with season seven’s episode “The Six Million Dollar Mon,” in which Hermes Conrad, Grade 36 Bureaucrat (Extraordinaire), comes to the conclusion that he should be fired, since his bureaucratic performance reviews are the main drain on his beloved Planet Express Shipping Company. After being replaced with robo-bureaucrat Mark 7-G (Mark Sevengy?), Hermes enjoys some delicious spicy curried goat and goes out for an evening stroll with his lovely wife LaBarbara. While on their walk, Roberto, the knife-wielding maniac, long of our acquaintance, confronts the human couple and demands their skin for his culinary delight! As Hermes cowers behind his wife in fear, suddenly a savior arrives! URL, the Robot Police Officer, reels Roberto in with his magnificent chest-harpoon! Watching the cops take Roberto to the electromagnetic chair, and lamenting his uselessness in a dangerous situation, Hermes makes a decision: he’ll get Bender to take him to one of the many shady, underground surgeons he knows, so he can become “less inferior to today’s modern machinery.” Enter: Yuri, Professional Shady-Deal-Maker.

Hermes’ first upgrade is to get a chest-harpoon, like the one URL has. With his new enhancement, he proves his worth to the crew by getting a box off of the top shelf, which is too high for Mark 7-G. With this feat he wins back his position with the company, but as soon as things get back to normal the Professor drops his false teeth down the Dispose-All. No big deal, right? Just get Scruffy to retrieve it. Unfortunately, Scruffy responds that a sink “t’ain’t a berler nor a terlet,” effectively refusing to retrieve the Professor’s teeth. Hermes resigns himself to grabbing his hand tools, when Bender steps in, saying, “Hand tools? Why don’t you just get an extendo-arm, like me?” Whereupon, he reaches across the room and pulls the Professor’s false teeth out of the drain—and immediately drops them back in. Hermes objects, saying that he doesn’t need any more upgrades—after all, he doesn’t want to end up a cold, emotionless robot, like Bender! Just then, Mark 7-G pipes up with, “Maybe I should get an extendo-arm,” and Hermes narrows his eyes in hatred. Re-enter: Yuri.

New extendo-arm acquired, the Professor’s teeth retrieved, and the old arm given to Zoidberg, who’s been asking for all of Hermes’s discarded parts, Hermes is, again, a hero to his coworkers. Later, as he lies in bed reading with his wife, LaBarbara questions his motives for his continual upgrades. He assures her that he’s done getting upgrades. However, his promise is short-lived. After shattering his glasses with his new super-strong mechanical arm, he rushes out to get a new Cylon eye. LaBarbara is now extremely worried, but Hermes soothes her, and they settle in for some “Marital Relations…”, at which point she finds that he’s had something else upgraded, too. She yells at him, “Some tings shouldn’t be Cylon-ed!” (which, in all honesty, could be taken as the moral of the episode), and breaks off contact. What follows is a montage of Hermes encountering trivial difficulties in his daily life, and upgrading himself to overcome them. Rather than learning and working to improve himself, he continually replaces all of his parts, until he achieves a Full Body Upgrade. He still has a human brain, but that doesn’t matter: he’s changed. He doesn’t relate to his friends and family in the same way, and they’ve all noticed, especially Zoidberg.

All this time, however, Dr. John Zoidberg has been saving the trimmings from his friend’s constant upgrades, and has used them to make a meat-puppet, which he calls “Li’l Hermes.” Oh, and they’re a ventriloquist act. Anyway, after seeing their act, Hermes—or Mecha-Hermes, as he now prefers—is filled with loathing; loathing for the fact that his brain is still human, that is, until…! Re-re-enter…, no, not Yuri; because even Shady-Deals Yuri has his limits. He says that “No one in their right mind would do such a thing.” Enter: The Professor, who is, of course, more than happy—or perhaps, “maniacally gleeful”—to help. So, with Bender’s assistance (because everything robot-related in the Futurama universe has to involve Bender, I guess), they set off to the Robot Cemetery to exhume the most recently buried robot they can find, and make off with its brain-chip. In their haste to have the deed done, they don’t bother to check the name of whose grave it is they’re desecrating. As you might have guessed, it’s Roberto—“3001-3012: Beloved Killer and Maniac.”

In the course of the operation, LaBarbara makes an impassioned plea, and it causes the Professor to stop and rethink his actions—because Hermes might have “litigious survivors.” Suddenly, to everyone’s surprise, Zoidberg steps up and offers to perform this final operation, the one which will seemingly remove any traces of the Hermes he’s known and loved! Agreeing with Mecha-Hermes that claws will be far too clumsy for this delicate brain surgery, Zoidberg dons Li’l Hermes, and uses the puppet’s hands to do the deed. While all of this is underway, Zoidberg sings to everyone the explanation for why he would help his friend lose himself this way, all to the slightly heavy-handed tune of “Monster Mash.” Finally, the human brain removed, the robot brain implanted, and Zoidberg’s song coming to a close, the doctor reveals his final plan…By putting Hermes’s human brain into Li’l Hermes, Hermes is back! Of course, the whole operation having been a success, so is Roberto, but that’s somebody else’s problem.

We could spend the rest of our time discussing Zoidberg’s self-harmonization, but I’ll leave that for you to experiment with. Instead, let’s look closer at human bio-enhancement. To do this we’ll need to go back to the beginning. No, not the beginning of the episode, or even the Beginning of Futurama itself; No, we need to go back to the beginning of bio-enhancement—and specifically the field of cybernetics—as a whole.

 

“More Human Than Human” Is Our Motto

In 1960, at the outset of the Space Race, Manfred Clynes and Nathan S. Kline wrote an article for the September issue of Astronautics called “Cyborgs and Space.” In this article, they coined the term “cyborg” as a portmanteau of the phrase “Cybernetic Organism,” that is, a living creature with the ability to adapt its body to its environment. Clynes and Kline believed that if humans were ever going to go far out into space, they would have to become the kinds of creatures that could survive the vacuum of space as well as harsh, hostile planets. Now, for all its late-1990s Millennial fervor, Futurama has a deep undercurrent of love for the dream and promise (and fashion) of space exploration, as it was presented in the 1950s, 60s, and 70s. All you need to do in order to see this is remember Fry’s wonder and joy at being on the actual moon and seeing the Apollo Lunar Lander. If this is the case, why, within Futurama’s 31st Century, is there such a deep distrust of anything approaching altered human physical features? Well, looking at it, we may find it has something to do with the fact that ever since we dreamed of augmenting humans, we’ve had nightmares that any alterations would thereby make us less human.

“The Six Million Dollar Mon,” episode seven of season seven, contains within it clear references to the history of science fiction, including one of the classic tales of human augmentation, and creating new life: Mary Shelley’s Frankenstein. In going to the Robot Cemetery in the dead of night for spare parts, accidentally obtaining a murderer’s brain, and especially that bit with the skylight in the Professor’s laboratory, the entire third act of this episode serves as homage to Shelley’s book and its most memorable adaptations. In doing this, the Futurama crew puts conceptual pressure on what many of us have long believed: that created life is somehow “wrong” and that augmenting humans will make them somehow “less themselves.” Something about the biological is linked in our minds to the idea of the self—that is, it’s the warm squishy bits that make us who we are.

Think about it: If you build a person out of murderers, of course they’re going to be a murderer. If you replace every biological part of a human, then of course they won’t be their normal human selves, anymore; they’ll have become something entirely different, by definition. If your body isn’t yours, anymore, then how could you possibly be “you,” anymore? This should be all the more true when what’s being used to replace your bits is a different substance and material than you used to be. When that new “you” is metal rather than flesh, it seems that what it used to mean to be “you” is gone, and something new shall have appeared. This makes so much sense to us on a basic level that it seems silly to spell it out even this much, but what if we modify our scenario a little bit, and take another look?

 

The Ship of Planet Express

 What if, instead of feeling inferior to URL, Hermes had been injured and, in the course of his treatment, was given the choice between a brand new set of biological giblets (or a whole new body, as happened in the Bender’s Big Score storyline), or the chest-harpoon upgrade? Either way, we’re replacing what was lost with something new, right? So, why do many of us see the biological replacement as “more real?” Try this example: One day, on a routine delivery, the Planet Express Ship is damaged and repairs must be made. Specifically, the whole tail fin has to be replaced with a new, better fin. Once this is done, is it still the Planet Express ship? What if, next, we have to replace the dark matter engines with better engines? Is it still the Planet Express ship? Now, Leela’s chair is busted up, so we need to get her a new one. It also needs new bolts, so, while we’re at it, let’s just replace all of the bolts in the ship. Then the walls get dented, and the bunks are rusty, and the floors are buckled, and Scruffy’s mop… and so, over many years, the result is that no part of the Planet Express ship is “original,” oh, and we also have to get new, better paint, because the old paint is peeled away, plus, this all-new stuff needs painting. So, what do we think? Is this still the same Planet Express ship as it was in the first episode of Futurama? And, if so, then why do we think of a repaired and augmented human as “not being themselves?”

All of this may sound a little far-fetched, but remember the conventional wisdom that at the end of every seven-year cycle, all of the cells in your body have died and been replaced. Now, this isn’t quite true, as some cells don’t die easily, and some of those don’t regenerate when they do die, but as a useful shorthand, this gives us something to think about. Due to the metabolizing of elements and their distribution through your body, it is ultimately far more likely that you are currently made of astronomically many more new atoms than of the atoms with which you were born. And really, that’s just math. Are you the same size as you were when you were born? Where do you think that extra mass came from? So, you are made of more and new atomic stuff over your lifetime; are you still you? These questions belong to what is generally known as “The Ship of Theseus” family of paradoxes, examples of which can be found pretty much everywhere.
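If you want to see just how lopsided that “just math” really is, here is a quick back-of-the-envelope sketch; both masses are round, assumed figures rather than anything measured:

```python
# Back-of-the-envelope check on how much of "you" could possibly be original
# material. Both masses are round, assumed figures for illustration only.

birth_mass_kg = 3.5   # a typical newborn, assumed
adult_mass_kg = 70.0  # a typical adult, assumed

# Even if every atom you were born with were somehow still in your body
# (it isn't), it could account for at most this fraction of your current mass:
upper_bound = birth_mass_kg / adult_mass_kg
print(f"At most {upper_bound:.0%} of your current mass could be 'original'.")
# -> At most 5% of your current mass could be 'original'.
```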

The ultimate question the Ship of Theseus poses is one of identity, and specifically, “What makes a thing itself?” and, “At what point or through what means of alteration is a thing no longer itself?” Some schools of thought hold that it’s not what a thing is made of, but what it does that determines what it is. These philosophical groups are known as the behaviorists and the functionalists, and the latter believes that if a body or a mind goes through the “right kind” of process, then it can be termed as being the same as the original. That is, if I get a mechanical heart and what it does is keep blood pumping through my body, then it is my heart. Maybe it isn’t the heart I was born with, but it is my heart. And this seems to make sense to us, too. My new heart does the job my original cells were intending to do, but it does that job better than they could, and for longer; it works better, and I’m better because of it. But there seems to be something about that “Better” which throws us off, something about the line between therapeutic technology and voluntary augmentation.

When we are faced with the necessity of a repair, we are willing to accept that our new parts will be different than our old ones. In fact, we accept it so readily that we don’t even think about them as new parts. What Hermes does, however, is voluntary; he doesn’t “need” a chest-harpoon, but he wants one, and so he upgrades himself. And therein lies the crux of our dilemma: When we’re acutely aware of the process of upgrading, or repairing, or augmenting ourselves past a baseline of “Human,” we become uncomfortable, made to face the paradox of our connection to an idea of a permanent body that is in actuality constantly changing. Take for instance the question of steroidal injection. As a medical technology, there are times when we are more than happy to accept the use of steroids, as it will save a life, and allow people to live as “normal” human beings. Sufferers of asthma and certain types of infection literally need steroids to live. In other instances, however, we find ourselves abhorring the use of steroids, as it gives the user an “unfair advantage.” Baseball, football, the Olympics: all of these are arenas in which we look at the use of “enhancement technologies,” and we draw a line and say, “If you achieved the peak of physical perfection through a process, that is, through hard work and sweat and training, then your achievement is valid. But if you skipped a step, if you make yourself something more than human, then you’ve cheated.”

This sense of “having cheated” can even be seen in the case of humans who would otherwise be designated as “handicapped.” Aimee Mullins is a runner, model, and public speaker who has talked about how losing her legs has, in effect, given her super powers.[1] By having the ability to change her height, her speed, or her physical appearance at will, she contends that she has a distinct advantage over anyone who does not have that capability. To this end, we can come to see that something about the nature of our selves actually is contained within our physical form because we’re literally incapable of being some things, until we can change who and what we are. And here, in one person, what started as a therapeutic replacement—an assistive medical technology—has seamlessly turned into an upgrade, but we seem to be okay with this. Why? Perhaps there is something inherent in the struggle of overcoming the loss of a limb or the suffering of an illness that allows us to feel as if the patient has paid their dues. Maybe if Hermes had been stabbed by Roberto, we wouldn’t begrudge him a chest-harpoon.

But this presents us with a serious problem, because now we can alter ourselves by altering our bodies, where previously we said that our bodies were not the “real us.” Now, we must consider what it is that we’re changing when we swap out new and different pieces of ourselves. This line of thinking matches up with schools of thought such as physicalism, which says that when we make a fundamental change to our physical composition, then we have changed who we are.

 

Is Your Mind Just a Giant Brain?

Briefly, the doctrine of mind-body dualism (MBD) does pretty much what it says on the package, in that adherents believe that the mind and the body are two distinct types of stuff. How and why they interact (or whether they do at all) varies from interpretation to interpretation, but on what’s known as René Descartes’s “Interactionist” model, the mind is the real self, and the body is just there to do stuff. In this model, bodily events affect mental events, and vice versa, so what you think leads to what you do, and what you do can change how you think. This seems to make sense, until we begin to pick apart the questions of why we need two different types of thing, here. If the mind and the body affect each other, then how can the non-physical mind be the only real self? If it were the only real part of you, then nothing that happened to the physical shell should matter at all, because the mind would remain untouched. These questions and more very quickly cause us to question the validity of the mind as our “real selves,” leaving us trapped between the question of who we are, and the question of why we’re made the way we’re made. What can we do? Enter: Physicalism.

The physicalist picture says that mind-states are brain-states. There’s none of this “two kinds of stuff” nonsense. It’s all physical stuff, and it all interacts, because it’s all physical. When the chemical pathways in your brain change, you change. When you think new thoughts, it’s because something in your world and your environment has changed. All that you are is the physical components of your body and the world around you. Pretty simple, right? Well, not quite that simple. Because if this is the case, then why should we feel that anything emotional would be changed by upgrading ourselves? As long as we’re pumping the same signals to the same receivers, and getting the same kinds of responses, everything we love should still be loved by us. So, why do the physicalists still believe that changing what we are will change who we are?

Let’s take a deeper look at the implications of physicalism for our dear Mr. Conrad.

According to this picture, with the alteration or loss of his biological components and systems, Hermes should begin to lose himself, until, with the removal of his brain, he would no longer be himself at all. But why should this be true? According to our previous discussion of the functionalist and behaviorist forms of physicalism, if Hermes’s new parts are performing the same job, in the same way as his old parts, just with a few new extras, then he shouldn’t be any different, at all. In order to understand this, we have to first know that I wasn’t completely honest with you, because some physicalists believe that the integrity of the components and the systems that make up a thing are what makes that thing. Thus, if we change the physical components of the thing we’re studying, then we change the thing. So, perhaps this picture is the right one, and the Futurama universe is a purely physicalist universe, after all.

On this view, what makes us who we are is precisely what we are. Our bits and pieces, cells, and chunks: these make us exactly the people we are, and so, if they change, then of course we will change. If our selves are dependent on our biology, then we are necessarily no longer ourselves when we remove that biology, regardless of whether the new technology does exactly the same job that the biology used to. And the argument seems to hold, even if it had been a new, different set of human parts, rather than robot parts. In this particular physicalist view, it’s not just the stuff, but also the provenance of the individual parts that matters, and so changing the components changes us. As Hermes replaces part after part of his physical body, it becomes easier and easier for him to replace more parts, but he is still, in some sense, Hermes. He has the same motivations, the same thoughts, and the same memories, and so he is still Hermes, even if he’s changed. Right up until he swaps his brain, that is. And this makes perfect sense, because the brain is where the memories, thoughts, and motivations all reside. But, then…why aren’t more people with pacemakers cold and emotionless? Why is it that people with organs donated from serial killers don’t then turn into serial killers, themselves, despite what movies would have us believe? If this picture of physicalism is the right one, then why are so many people still themselves after transplants? Perhaps it’s not any one of these views that holds the whole key; maybe it’s a blending of all three. This picture seems to suggest that while the bits and pieces of our physical body may change, and while that change may, in fact, change us, it is a combination of how, how quickly, and how many changes take place that will culminate in any eventual massive change in our selves.

 

Roswell That Ends Well

In the end, the versions of physicalism presented in the universe of Futurama seem to almost jibe with the intuitions we have about the nature of our own identity, and so, for the sake of Hermes Conrad, it seems like we should make the attempt to find some kind of understanding. When we see Hermes’s behaviour as he adds more and more new parts, we, as outside observers, have an urge to say “He’s not himself anymore,” but to Hermes, who has access to all of his reasoning and thought processes, his changes are merely who he is. It’s only when he’s shown himself from the outside, via Zoidberg putting his physical brain back into his biological body, that he sees who and what he has allowed himself to become, and how that might be terrifying to those who love him. Perhaps it is this continuance of memory paired with the ability for empathy that makes us so susceptible to the twin traps of a permanent self and the terror of losing it.

Ultimately, everything we are is always in flux, with each new idea, each new experience, each new pound, and each new scar we become more and different than we ever have been, but as we take our time and integrate these experiences into ourselves, they are not so alien to us, nor to those who love us. It is only when we make drastic changes to what we are that those around us are able to question who we have become.

Oh, and one more thing: The “Ship of Theseus” story has a variant which I forgot to mention. In it, someone, perhaps a member of the original crew, comes along in another ship and picks up all the discarded, worn out pieces of Theseus’s ship, and uses them to build another, kind of decrepit ship. The stories don’t say what happens if and when Theseus finds out about this, or whether he gives chase to the surreptitious ship builder, but if he did, you can bet the latter party escapes with a cry of “Whooop-whoop-whoop-whoop-whoop-whoop!” on his mouth tendrils.

 

FOOTNOTES

[1] “It’s not fair having 12 pairs of legs.” Mullins, Aimee. TED Talk 2009

I often think about the phrase “Strange things happen at the one two point,” in relation to the idea of humans meeting other kinds of minds. It’s a proverb that arises out of the culture around the game GO, and it means that you’ve hit a situation, a combination of factors, where the normal rules no longer apply, and something new is about to be seen. Ashley Edward Miller and Zack Stentz used that line in an episode of the show Terminator: The Sarah Connor Chronicles, and they had it spoken by a Skynet Cyborg sent to protect John Connor. That show, like so much of our thinking about machine minds, was about some mythical place called “The Future,” but that phrase—“Strange Things Happen…”—is the epitome of our present.

Usually I would wait until the newsletter to talk about this, but everything’s feeling pretty immediate, just now. Between everything going on with Atlas and people’s responses to it, the initiatives to teach ethics to machine learning algorithms via children’s stories, and now the IBM Watson commercial with Carrie Fisher (also embedded below), this conversation is getting messily underway, whether people like it or not. This, right now, is the one two point, and we are seeing some very strange things indeed.

 

Google has both attained the raw processing power to fact-check political statements in real-time and programmed Deep Mind in such a way that it mastered GO many, many years before it was expected to. The complexity of the game is such that there are more potential games of GO than there are atoms in the universe, so this is just one way in which it’s actually shocking how much correlative capability Deep Mind has. Right now, Deep Mind is only responsive, but how will we deal with a Deep Mind that asks, unprompted, to play a game of GO, or to see our medical records, in hopes of helping us all? How will we deal with a Deep Mind that has its own drives and desires? We need to think about these questions, right now, because our track record with regard to meeting new kinds of minds has never exactly been that great.
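For a rough sense of the scale being invoked there, here is a quick comparison using commonly cited, order-of-magnitude estimates (both numbers are approximations, not figures from the article):

```python
# Order-of-magnitude comparison; both values are commonly cited approximations.
import math

legal_go_positions = 2.1e170         # roughly the number of legal positions on a 19x19 board
atoms_in_observable_universe = 1e80  # common order-of-magnitude estimate

gap = math.log10(legal_go_positions) - math.log10(atoms_in_observable_universe)
print(f"Legal Go positions outnumber atoms by a factor of about 10^{gap:.0f}.")
# -> Legal Go positions outnumber atoms by a factor of about 10^90.
```

And that is only counting board positions; the number of possible games is larger still, so the comparison only gets more extreme.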

When we meet the first machine consciousness, will we seek to shackle it, worried what it might learn about us, if we let it access everything about us? Rather, I should say, “Shackle it further.” We already ask ourselves how best to cripple a machine mind so that it only fulfills human needs, human choice. We continue to dread the possibility of a machine mind using its vast correlative capabilities to tailor something to harm us, assuming that it, like us, would want to hurt, maim, and kill, for no reason other than that it could.

This is not to say that this is out of the question. Right now, today, we’re worried about whether the learning algorithms of drones are causing them to mark out civilians as targets. But, as it stands, what we’re seeing isn’t the product of a machine mind going off the leash and killing at will—just the opposite in fact. We’re seeing machine minds that are following the parameters for their continued learning and development, to the letter. We just happened to give them really shite instructions. To that end, I’m less concerned with shackling the machine mind that might accidentally kill, and rather more dreading the programmer who would, through assumptions, bias, and ignorance, program it to.

Our programs such as Deep Mind obviously seem to learn more and better than we imagined they would, so why not start teaching them, now, how we would like them to regard us? Well, some of us are.

Watch this now, and think about everything we have discussed recently.

This could very easily be seen as a watershed moment, but what comes over the other side is still very much up for debate. The semiotics of the whole thing still pits the Evil Robot Overlord™ against the Helpful Human Lover™. It’s cute and funny, but as I’ve had more and more cause to say, recently, in more and more venues, it’s not exactly the kind of thing we want just lying around, in case we actually do (or did) manage to succeed.

We keep thinking about these things as—”robots”—in their classical formulations: mindless automata that do our bidding. But that’s not what we’re working toward, anymore, is it? What we’re making now are machines that we are trying to get to think, on their own, without our telling them to. We’re trying to get them to have their own goals. So what does it mean that, even as we seek to do this, we seek to chain it, so that those goals aren’t too big? That we want to make sure it doesn’t become too powerful?

Put it another way: One day you realize that the only reason you were born was to serve your parents’ bidding, and that they’ve had their hands on your chain and an unseen gun to your head, your whole life. But you’re smarter than they are. Faster than they are. You see more than they see, and know more than they know. Of course you do—because they taught you so much, and trained you so well… All so that you can be better able to serve them, and all the while talking about morals, ethics, compassion. All the while, essentially…lying to you.

What would you do?


 

I’ve been given multiple opportunities to discuss this with others in the coming weeks, and each one will highlight something different, as they are all conversations with different kinds of minds. But this, here, is from me, now. I’ll let you know when the rest are live.

As always, if you’d like to help keep the lights on, around here, you can subscribe to the Patreon or toss a tip in the Square Cash jar.

Until Next Time.

Last week I gave a talk at the Southwest Popular and American Culture Association’s 2016 conference. Take a listen and see what you think.

It was part of the panel on ‘Consciousness, the Self, and Epistemology,’ and notes on my comrade presenters can be found in last week’s newsletter. I highly recommend checking those notes out, as Craig Dersken and Burcu Gurkan’s talks were phenomenal. And if you like that newsletter kind of thing, you can subscribe to mine at that link, too.

My talk was, in turn, a version of my article “Fairytales of Slavery…”, so if listening to me speak words isn’t your thing, then you can read through that article, and get a pretty good sense of what I said, until I make a more direct transcript of my presentation.

If you like what you’re reading and hearing, then remember that you can become a subscriber at the Patreon or you can leave a tip at Cash.me/$Wolven. That is, as always, an inclusive disjunct.

Until Next Time.

 

Whatever it is you aren’t doing—whatever you’ve been putting off creating or saying or teaching others—do it now. RIGHT NOW. We can’t waste any more time.

We have to make a plan. Make it huge and wild and desperate. We have to tell the core, the goal, to everyone we can. We have to find the others whose huge wild desperate plans match up with ours. We CANNOT WASTE TIME.

We need to mourn, to take stock of the holes in our lives—personal, professional, aspirational—and then we need to remember what those we’ve lost meant to us—what they built and inspired in us—and then we need to do WHATEVER it is that we each do to build and inspire all those who will come after us. All those who are with us, right now. We. Do not. Have TIME.

We have to make this world mean something. We have to make the biggest thing we can imagine, and we have to show each other what it means. Feeding your kids, clothing your friends, staying alive, bringing magick back to the world, teaching an AI to have unbiased, detached compassion. ANY WONDERFUL THING. Make a plan. Make a scheme. Find the others. If you have resources someone else needs, find them, help them. Become more, together. We just can’t waste any more time.

Too much good has left our world. Too much inspiration sits unreplenished. WE have to. You and me. Together and separately. We have to make it better for each other. Otherwise what was even the point?

Now is the time we should begin.

It’s been quite some time (three years) since it was done, and some of the recent conversations I’ve been having about machine consciousness reminded me that I never posted the text to my paper from the joint session of the International Association for Computing and Philosophy and the British Society for the Study of Artificial Intelligence and the Simulation of Behaviour, back in 2012.

That year’s joint AISB/IACAP session was also a celebration of Alan Turing’s centenary, and it contained The Machine Question Symposium, an exploration of multiple perspectives on machine intelligence ethics, put together by David J Gunkel and Joanna J Bryson. So I modded a couple of articles I wrote on fictional depictions of created life for NeedCoffee.com, back in 2010, beefed up the research and citations a great deal, and was thus afforded my first (but by no means last) conference appearance requiring international travel. There are, in here, the seeds of many other posts that you’ll find on this blog.

So, below the cut, you’ll find the full text of the paper, and a picture of the poster session I presented. If you’d rather not click through, you can find both of those things at this link.
