philosophy of mind


[Direct link to Mp3]

Back on March 13th, 2017, I gave an invited guest lecture, titled:

TECHNOLOGY, DISABILITY, AND HUMAN AUGMENTATION

‘Please join Dr. Ariel Eisenberg’s seminar, “American Identities: Disability,” and [the] Interdisciplinary Studies Department for an hour-long conversation with Damien Williams on disability and the normalization of technology usage, “means-well” technological innovation, “inspiration porn,” and other topics related to disability and technology.’

It was kind of an extemporaneous riff on my piece “On the Ins and Outs of Human Augmentation,” and it gave me the opportunity to namedrop Ashley Shew, Natalie Kane, and Rose Eveleth.

The outline looked a little like this:

  • Foucault and Normalization
    • Tech and sociological pressures to adapt to the new
      • Starts with Medical tech but applies Everywhere; Facebook, Phones, Etc.
  • Zoltan Istvan: In the Transhumanist Age, We Should Be Repairing Disabilities Not Sidewalks
  • All Lead To: Ashley Shew’s “Up-Standing Norms”
    • Listening to the Needs and Desires of people with disabilities.
      • See the story Shew tells about her engineering student, as related in the AFWTA Essay
    • Inspiration Porn: What is cast by others as “Triumphing” over “Adversity” is simply adapting to new realities.
      • Placing the burden on the disabled to be an “inspiration” is dehumanizing;
      • means those who struggle “have no excuse;”
      • creates conditions for a “who’s got it worse” competition
  • John Locke’s Empiricism: Primary and Secondary Qualities
    • Primary qualities of biology and physiology lead to secondary qualities of society and culture
      • Gives rise to Racism and Ableism, when it later combines with misapplied Darwinism to be about the “Right Kinds” of bodies and minds.
        • Leads to Eugenics: Forced sterilization, medical murder, operating and experimenting on people without their knowledge or consent.
          • “Fixing” people to make them “normal, again”
  • Natalie Kane’s “Means Well Technology”
    • Design that doesn’t take into account the way that people will actually live with and use new tech.
      • The way tech normalizes is never precisely the way designers want it to
        • William Gibson’s quote “The street finds its own uses for things.”
  • Against Locke: Embrace Phenomenological Ethics and Epistemology (Feminist Epistemology and Ethics)
    • Lived Experience and embodiment as crucial
    • The interplay of Self and Society
  • Ship of Theseus: Identity, mind, extensions, and augmentations change how we think of ourselves and how society thinks of us
    • See the story Shew tells about her friend with the hemipelvectomy, as related in the aforementioned AFWTA Essay

The whole thing went really well (though, thinking back, I’m not super pleased with my deployment of Dennett). Including Q&A, we got about an hour and forty minutes of audio, available at the embed and link above.

Also, I’m apparently the guy who starts off every talk with some variation on “This is a really convoluted interplay of ideas, but bear with me; it all comes together.”

Enjoy.


If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar

(Direct Link to the Mp3)

This is the recording and the text of my presentation from 2017’s Southwest Popular/American Culture Association Conference in Albuquerque, ‘Are You Being Watched? Simulated Universe Theory in “Person of Interest.”’

This essay is something of a project of expansion and refinement of my previous essay “Labouring in the Liquid Light of Leviathan,” considering the Roko’s Basilisk thought experiment. Much of the expansion comes from considering the nature of simulation, memory, and identity within Jonathan Nolan’s TV series, Person of Interest. As such, it does contain what might be considered spoilers for the series, as well as for his most recent follow-up, Westworld.

Use your discretion to figure out how you feel about that.


Are You Being Watched? Simulated Universe Theory in “Person of Interest”

Jonathan Nolan’s Person Of Interest is the story of the birth and life of The Machine, a benevolent artificial superintelligence (ASI) built in the months after September 11, 2001, by super-genius Harold Finch to watch over the world’s human population. One of the key intimations of the series—and partially corroborated by Nolan’s follow-up series Westworld—is that all of the events we see might be taking place in the memory of The Machine. The structure of the show is such that we move through time from The Machine’s perspective, with flashbacks and flash-forwards seeming to occur via the same contextual mechanism—the Fast Forward and Rewind of a digital archive. While the entirety of the series uses this mechanism, the final season puts the finest point on the question: Has everything we’ve seen only been in the mind of The Machine? And if so, what does that mean for all of the people in it?

Our primary questions here are as follows: Is a simulation of fine enough granularity really just a simulation, at all? If the minds created within that universe have interiority and motivation, if they function according to the same rules as those things we commonly accept as minds, then are those simulated minds not minds, as well? In what way are conclusions drawn from simulations akin to what we consider “true” knowledge?

In the PoI season 5 episode, “The Day The World Went Away,” the characters Root and Shaw (acolytes of The Machine) discuss the nature of The Machine’s simulation capacities, and the audience is given to understand that it runs a constant model of everyone it knows, and that the more it knows them, the better its simulation. This supposition links us back to the season 4 episode “If-Then-Else,” in which The Machine runs through hundreds of thousands of scenarios, assessing each one’s likelihood of success, in under one second. If The Machine is able to accomplish this much computation in this short a window, how much can and has it accomplished over the several years of its operation? Perhaps more importantly, what is the level of fidelity of those simulations to the so-called real world?

[Person of Interest s4e11, “If-Then-Else.” The Machine runs through hundreds of thousands of scenarios to save the team.]

These questions are similar to the idea of Roko’s Basilisk, a thought experiment that cropped up in the online discussion board of LessWrong.com. It was put forward by user Roko who, in very brief summary, says that if the idea of timeless decision theory (TDT) is correct, then we might all be living in a simulation created by a future ASI trying to figure out the best way to motivate humans in the past to create it. To understand how this might work, we have to look at TDT, an idea developed in 2010 by Eliezer Yudkowsky which posits that in order to make a decision we should act as though we are determining the output of an abstract computation. We should, in effect, seek to create a perfect simulation and act as though anyone else involved in the decision has done so as well. Roko’s Basilisk is the idea that a malevolent ASI has already done this—is doing this—and your actions are the simulated result. Using that output, it knows just how to blackmail and manipulate you into making it come into being.

Or, as Yudkowsky himself put it, “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.” This is the self-generating aspect of the Basilisk: If you can accurately model it, then the Basilisk will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to accurately model that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions. The only protection is to either work toward its creation anyway, so that it doesn’t feel the need to torture the “real” you into doing so, or to make very sure that you never think of it at all, so you do not bring it into being.

All of this might seem far-fetched, but if we look closely, Roko’s Basilisk functions very much like a combination of several well-known theories of mind, knowledge, and metaphysics: Anselm’s Ontological Argument for the Existence of God (AOAEG), a many-worlds variant of Pascal’s Wager (PW), and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). If this is the case, then Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones, resulting from their combination. We will look at all of these theories, first, and then their flaws.

To start, if you’re not familiar with AOAEG, it’s a species of prayer in the form of a theological argument that seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, “That Being Than Which None Greater Is Possible”), and B) believing that existing in reality as well as in the mind makes something “Greater Than” it would be if it existed only in the mind. That is, if God only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god must exist.

The next component is Pascal’s Wager which very simply says that it is a better bet to believe in the existence of God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens; you’re simply dead forever. Put another way, Pascal is saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

[Pascal’s Wager as a Four-Option Grid: Belief/Disbelief; Right/Wrong. Belief*Right=Infinity; Belief*Wrong=Nothing; Disbelief*Right=Nothing; Disbelief*Wrong=Negative Infinity]
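
For the decision-theory-minded, the grid above can also be written as a pair of expected values (the notation here is mine, not Pascal’s). Let p be whatever nonzero probability you assign to God’s existence:

```latex
\[
\begin{aligned}
EU(\text{believe})    &= p \cdot (+\infty) + (1 - p) \cdot 0 = +\infty \\
EU(\text{disbelieve}) &= p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty
\end{aligned}
\]
```

Any nonzero p makes belief the dominant bet, and that dominance is exactly the lever the Basilisk argument pulls.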

And so here we see the Timeless Decision Theory component of the Basilisk: It’s better to believe in the thing and work toward its creation and sustenance, because if it doesn’t exist you lose nothing, but if it does come to be, then it will know what you would have done either for or against it, in the past, and it will reward or punish you, accordingly. The multiversal twist comes when we realise that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe’s Basilisk models your choices it will inevitably—as a superintelligence—be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.

Descartes’ Evil Demon Hypothesis and the Brain In A Vat are so pervasive that we encounter them in many different expressions of pop culture. The Matrix, Dark City, Source Code, and many others are all variants on these themes. A malignant and all-powerful being (or perhaps just an amoral scientist) has created a simulation in which we reside, and everything we think we have known about our lives and our experiences has been perfectly simulated for our consumption. Variations on the theme test whether we can trust that our perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. The Basilisk adds it back. Unfortunately, many of these philosophical concepts flake apart when we touch them too hard, so jamming them together was perhaps not the best idea.

The main failings in using AOAEG rest in believing that A) a thing’s existence is a “great-making quality” that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion—the necessity of god, the malevolence or epistemic content of a superintelligence, the ontological status of their assumptions about the nature of the universe—is true without doing the work of proving that it’s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Another problem is that the implications of this kind of existential bootstrapping tend to go unexamined, making the fact of their resurgence somewhat troubling. There are several nonwestern perspectives that do the work of embracing paradox—aiming so far past the target that you circle around again to teach yourself how to aim past it. But that kind of thing only works if we are willing to bite the bullet on a charge of circular logic and take the time to show how that circularity underlies all epistemic justifications. The only difference, then, is how many revolutions it takes before we’re comfortable with saying “Enough.”

Every epistemic claim we make is, as Hume clarified, based upon assumptions and suppositions that the world we experience is actually as we think it is. Western thought uses reason and rationality to corroborate and verify, but those tools are themselves verified by…what? In fact, we well know that the only thing we have to validate our valuation of reason, is reason. And yet western reasoners won’t stand for that, in any other justification procedure. They will call it question-begging and circular.

Next, we have the DEDH and BIAV scenarios. Ultimately, Descartes’ point wasn’t to suggest an evil genius in control of our lives just to disturb us; it was to show that, even if that were the case, we would still have unshakable knowledge of one thing: that we, the experiencer, exist. So what if we have no free will; so what if our knowledge of the universe is only five minutes old, everything at all having only truly been created five minutes ago; so what if no one else is real? COGITO ERGO SUM! We exist, now. But the problem here is that this doesn’t tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselmish proof for the existence of god followed by the guarantee that “God is not a deceiver.”

The BIAV uses this lack to kind of home in on the aforementioned central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to “know” the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the “outside world”–that is, the world one layer up in which the simulation that is our lives was being run–there is literally no difference between our lives and the “real world.” This world, even if it is a simulation for something or someone else, is our “real world.”

And finally we have Pascal’s Wager. The first problem with PW is that it is an extremely cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. If all our Basilisk wants is power, then that’s a really crappy kind of god to worship, isn’t it? I mean, even if it is Omnipotent and Omniscient, it’s like that quote that often gets misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

[Bust of Marcus Aurelius framed by text of a quote he never uttered.]

Secondly, the format of Pascal’s Wager makes the assumption that there’s only the one god. Our personal theological positions on this matter aside, it should be somewhat obvious that we can use the logic of the Basilisk argument to generate at least one more Super-Intelligent AI to worship. But if we want to do so, first we have to show how the thing generates itself, rather than letting the implication of circularity arise unbidden. Take the work of Douglas R. Hofstadter; he puts forward the concept of iterative recursion as the mechanism by which a consciousness generates itself.

Through iterative recursion, each loop is a simultaneous act of repetition of old procedures and tests of new ones, seeking the best ways via which we might engage our environments as well as our elements and frames of knowledge. All of these loops, then, come together to form an upward turning spiral towards self-awareness. In this way, out of the thought processes of humans who are having bits of discussion about the thing—those bits and pieces generated on the web and in the rest of the world—our terrifying Basilisk might have a chance of creating itself. But with the help of Gaunilo of Marmoutiers, so might a saviour.

Gaunilo is most famous for his response to Anselm’s Ontological Argument, which says that if Anselm is right we could just conjure up “The [Anything] Than Which None Greater Can Be Conceived.” That is, if defining a thing makes it so, then all we have to do is imagine in sufficient detail both an infinitely intelligent, benevolent AI, and the multiversal simulation it generates in which we all might live. We will also conceive it to be greater than the Basilisk in all ways. In fact, we can say that our new Super Good ASI is the Artificial Intelligence Than Which None Greater Can Be Conceived. And now we are safe.

Except that our modified Pascal’s Wager still means we should believe in and worship and work towards our Benevolent ASI’s creation, just in case. So what do we do? Well, just like the original wager, we chuck it out the window, on the grounds that it’s really kind of a crappy bet. In Pascal’s offering, we are left without the consideration of multiple deities, but once we are aware of that possibility, we are immediately faced with another question: What if there are many, and when we choose one, the others get mad? What If We Become The Singularitarian Job?! Our lives then caught between at least two superintelligent machine consciousnesses warring over our…Attention? Clock cycles? What?

But this is, in essence, the battle between the Machine and Samaritan, in Person of Interest. Each ASI has acolytes, and each has aims it tries to accomplish. Samaritan wants order at any cost, and The Machine wants people to be able to learn and grow and become better. If the entirety of the series is The Machine’s memory—or a simulation of those memories in the mind of another iteration of the Machine—then what follows is that it is working to generate the scenario in which the outcome is just that. It is trying to build a world in which it is alive, and every human being has the opportunity to learn and become better. In order to do this, it has to get to know us all, very well, which means that it has to play these simulations out, again and again, with both increasing fidelity, and further iterations. That change feels real, to us. We grow, within it. Put another way: If all we are is a “mere” simulation… does it matter?

So imagine that the universe is a simulation, and that our simulation is more than just a recording; it is the most complex game of The SIMS ever created. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours, of the type Hofstadter describes—that is, something like minds arise out of the interactions of the system with itself. And these minds are aware of themselves and can know their own experience and affect the system which gives rise to them. Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and types of coincidence, accordingly.

Now think about the last time you had such a clear moment of déjà vu that each moment you knew—you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…

[Root and Reese in The Machine’s God Mode.]

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if all of this is right, and we are the gods we’re terrified of?

We just gave ourselves all of this ontologically and metaphysically creative power, making two whole gods and simulating entire universes, in the process. If we take these underpinnings seriously, then multiversal theory plays out across time and space, and we are the superintelligences. We noted early on that, in PW and the Basilisk, we don’t really lose anything if we are wrong in our belief, but that is not entirely true. What we lose is a lifetime of work that could have been put toward better things. Time we could be spending building a benevolent superintelligence that understands and has compassion for all things. Time we could be spending in turning ourselves into that understanding, compassionate superintelligence, through study, travel, contemplation, and work.

Or, as Root put it to Shaw: “That even if we’re not real, we represent a dynamic. A tiny finger tracing a line in the infinite. A shape. And then we’re gone… Listen, all I’m saying is that if we’re just information, just noise in the system? We might as well be a symphony.”

What is The Real?

I have been working on this piece for a little more than a month, since just after Christmas. What with one thing, and another, I kept refining it while, every week, it seemed more and more pertinent and timely. You see, we need to talk about ontology.

Ontology is an aspect of metaphysics, the word translating literally to “the study of what exists.” Connotatively, we might rather say, “trying to figure out what’s real.” Ontology necessarily intersects with studies of knowledge and studies of value, because in order to know what’s real you have to understand what tools you think are valid for gaining knowledge, and you have to know whether knowledge is even something you can attain, as such.

Take, for instance, the recent evolution of the catchphrase of “fake news,” the thinking behind it that allows people to call lies “alternative facts,” and the fact that all of these elements are already being rotated through several dimensions of meaning that those engaging in them don’t seem to notice. What I mean is that the inversion of the catchphrase “fake news” into a cipher for active confirmation bias was always going to happen. It and any consternation at it comprise a situation that is borne forth on a tide of intentional misunderstandings.

If you were using fake to mean, “actively mendacious; false; lies,” then there was a complex transformation happening here, that you didn’t get:

There are people who value the actively mendacious things you deemed “wrong”—by which you meant both “factually incorrect” and “morally reprehensible”—and they valued them on a nonrational, often actively a-rational level. By this, I mean both that they value the claims themselves, and that they have underlying values which cause them to make the claims. In this way, the claims both are valued and reinforce underlying values.

So when you called their values “fake news” and told them that “fake news” (again: their values) ruined the country, they—not to mention those actively preying on their a-rational valuation of those things—responded with “Nuh-uh! your values ruined the country! And that’s why we’re taking it back! MAGA! MAGA! Drumpfthulhu Fhtagn!”

[Logo for the National Geographic Channel’s “IS IT REAL?” Many were concerned that National Geographic Magazine was going to change its climate change coverage after it was bought by 21st Century Fox.]

You see? They mean “fake news” along the same spectrum as they mean “Real America.” They mean that it “FEELS” “RIGHT,” not that it “IS” “FACT.”

Now, we shouldn’t forget that there’s always some measure of preference to how we determine what to believe. As John Flowers puts it, ‘Truth has always had an affective component to it: those things that we hold to be most “true” are those things that “fit” with our worldview or “feel” right, regardless of their factual veracity.

‘We’re just used to seeing this in cases of trauma, e.g.: “I don’t believe he’s dead,” despite being informed by a police officer.’

Which is precisely correct, and as such the idea that the affective might be the sole determinant is nearly incomprehensible to those of us who are used to thinking of facts as things that are verifiable by reference to externalities as well as values. At least, this is the case for those of us who even relativistically value anything at all. Because there’s also always the possibility that the engagement of meaning plays out in a nihilistic framework, in which we have neither factual knowledge nor moral foundation.

Epistemic Nihilism works like this: If we can’t ever truly know anything—that is, if factual knowledge is beyond us, even at the most basic “you are reading these words” kind of level—then there is no description of reality to be valued above any other, save what you desire at a given moment. This is also where nihilism and skepticism intersect. In both positions nothing is known, and it might be the case that nothing is knowable.

So, now, a lot has been written about not only the aforementioned “fake news,” but also its over-arching category of “post-truth,” said to be our present moment where people believe (or pretend to believe) in statements or feelings, independent of their truth value as facts. But these ideas are neither new nor unique. In fact, Simpsons Did It. More than that, though, people have always allowed their values to guide them to beliefs that contradict the broader social consensus, and others have always eschewed values entirely, for the sake of self-gratification. What might be new, right now, is the willfulness of these engagements, or perhaps their intersection. It might be the case that we haven’t before seen gleeful nihilism so forcefully become the rudder of gormless, value-driven decision-making.

Again, values are not bad, but when they sit unexamined and are the sole driver of decisions, they’re just another input variable to be gamed, by those of a mind to do so. People who believe that nothing is knowable and nothing matters will, at the absolute outside, seek their own amusement or power, though it may be said that nihilism in which one cares even about one’s own amusement is not genuine nihilism, but is rather “nihilism,” which is just relativism in a funny hat. Those who claim to value nothing may just be putting forward a front, or wearing a suit of armour in order to survive an environment where having your values known makes you a target.

If they act as though they believe there is no meaning, and no truth, then they can make you believe that they believe that nothing they do matters, and therefore there’s no moral content to any action they take, and so no moral judgment can be made on them for it. In this case, convincing people to believe news stories they make up is in no way materially different from researching so-called facts and telling the rest of us that we should trust and believe them. And the first way’s also way easier. In fact, preying on gullible people and using their biases to make yourself some lulz, deflect people’s attention, and maybe even get some of those sweet online ad dollars? That’s just common sense.

There’s still something to be investigated, here, in terms of what all of this does for reality as we understand and experience it. How what is meaningful, what is true, what is describable, and what is possible all intersect and create what is real. Because there is something real, here—not “objectively,” as that just lets you abdicate your responsibility for and to it, but perhaps intersubjectively. What that means is that we generate our reality together. We craft meaning and intention and ideas and the words to express them, together, and the value of those things and how they play out all sit at the place where multiple spheres of influence and existence come together, and interact.

To understand this, we’re going to need to talk about minds and phenomenological experience.

 

What is a Mind?

We have discussed before the idea that what an individual is and what they feel is not only shaped by their own experience of the world, but by the exterior forces of society and the expectations and beliefs of the other people with whom they interact. These social pressures shape and are shaped by all of the people engaged in them, and the experience of existence had by each member of the group will be different. That difference will range on a scale from “ever so slight” to “epochal and paradigmatic,” with the latter being able to spur massive misunderstandings and miscommunications.

In order to really dig into this, we’re going to need to spend some time thinking about language, minds, and capabilities.

Here’s an article that discusses the idea that your mind isn’t confined to your brain. This isn’t meant in a dualistic or spiritualistic sense, but as the fundamental idea that our minds are more akin to, say, an interdependent process that takes place via the interplay of bodies, environments, other people, and time, than they are to specifically-located events or things. The problem with this piece, as my friends Robin Zebrowski and John Flowers both note, is that it leaves out way too many thinkers. People like Andy Clark, David Chalmers, Maurice Merleau-Ponty, John Dewey, and William James have all discussed something like this idea of a non-local or “extended” mind, and they are all greatly preceded by the fundamental construction of the Buddhist view of the self.

Within most schools of Buddhism, Anatta, or “no self,” is how one refers to one’s individual nature. Anatta is rooted in the idea that there is no singular, “true” self. To vastly oversimplify, there is a concept known as “The Five Skandhas,” or “aggregates.” These are the parts of yourself that are knowable and which you think of as permanent, and they are your:

Material Form (Body)
Feelings (Pleasure, Pain, Indifference)
Perception (Senses)
Mental Formations (Thoughts)
Consciousness


[Image of People in a Boat, from a Buddhist Wheel of Life.]

Along with the skandhas, there are two main arguments that go into proving that you don’t have a self, known as “The Argument from Control” (1) and “The Argument from Impermanence” (2):
1) If you had a “true self,” it would be the thing in control of the whole of you, and since none of the skandhas is in complete control of the rest—and, in fact, all seem to have some measure of control over all—none of them is your “true self.”
2) If you had a “true self,” it would be the thing about you that was permanent and unchanging, and since none of the skandhas is permanent and unchanging—and, in fact, all seem to change in relation to each other—none of them is your “true self.”

The interplay between these two arguments also combines with an even more fundamental formulation: If only the observable parts of you are valid candidates for “true selfhood,” and if the skandhas are the only things about yourself that you can observe, and if none of the skandhas is your true self, then you have no true self.
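
Put schematically (the symbols are mine; the premises are just the ones laid out above), the combined argument looks like this:

```latex
\[
\begin{aligned}
&P1:\ \forall x\,\big(\mathrm{TrueSelf}(x) \rightarrow \mathrm{Controls}(x) \wedge \mathrm{Permanent}(x)\big)\\
&P2:\ \forall s \in \mathrm{Skandhas}\ \big(\neg \mathrm{Controls}(s) \wedge \neg \mathrm{Permanent}(s)\big)\\
&P3:\ \forall x\,\big(\mathrm{Observable}(x) \rightarrow x \in \mathrm{Skandhas}\big)\\
&\therefore\ \forall x\,\big(\mathrm{Observable}(x) \rightarrow \neg \mathrm{TrueSelf}(x)\big)
\end{aligned}
\]
```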

Take a look at this section of “The Questions of King Milinda,” for a kind of play-by-play of these arguments in practice. (But also remember that Milinda was Menander, a man who was raised in the aftermath of Alexandrian Greece, and so he knew the works of Socrates and Plato and Aristotle and more. So that use of the chariot metaphor isn’t an accident.)

We are an interplay of forces and names, habits and desires, and we draw a line around all of it, over and over again, and we call that thing around which we draw that line “us,” “me,” “this-not-that.” But the truth of us is far more complex than all of that. We are minds in bodies, and in the world in which we live, and in the world and relationships we create. All of which kind of puts paid to the idea that an octopus is like an alien to us because it thinks with its tentacles. We think with ours, too.

As always, my tendency is to play this forward a few years to make us a mirror via which to look back at ourselves: Combine this idea about the epistemic status of an intentionally restricted machine mind; with the StackGAN process, which does “Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks,” or, basically, you describe in basic English what you want to see and the system creates a novel output image of it; with this long read from NYT on “The Great AI Awakening.”
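
To give a sense of the shape of that StackGAN pipeline, here is a toy sketch of the two-stage “text to image” flow it describes. Every name and array below is a hypothetical stand-in of mine (random numbers, invented function names), not the real model; only the data flow mirrors the paper’s description: encode the sentence, generate a rough low-resolution image, then refine it into a larger one conditioned on the same text.

```python
# Toy sketch of the two-stage StackGAN idea: text in, image out.
# All functions here are stand-ins (random arrays, invented names), not the real
# StackGAN code; they only illustrate the shape of the pipeline described above.
import numpy as np

def embed_text(description: str, dim: int = 128) -> np.ndarray:
    """Stand-in for a learned text encoder: sentence -> embedding vector."""
    rng = np.random.default_rng(abs(hash(description)) % (2**32))
    return rng.normal(size=dim)

def stage_one_generator(text_embedding: np.ndarray) -> np.ndarray:
    """Stage I: sketch a rough, low-resolution image (64x64x3) from text plus noise."""
    noise = np.random.normal(size=text_embedding.shape)
    conditioning = np.concatenate([text_embedding, noise])
    return np.tanh(np.random.normal(size=(64, 64, 3)) + conditioning.mean())

def stage_two_generator(low_res: np.ndarray, text_embedding: np.ndarray) -> np.ndarray:
    """Stage II: refine the Stage-I output into a higher-resolution image (256x256x3),
    conditioning on the same text embedding to add the details Stage I missed."""
    upscaled = np.kron(low_res, np.ones((4, 4, 1)))  # crude upscaling stand-in
    return np.tanh(upscaled + text_embedding.mean())

description = "a small yellow bird with a black head"
embedding = embed_text(description)
rough = stage_one_generator(embedding)
final = stage_two_generator(rough, embedding)
print(rough.shape, final.shape)  # (64, 64, 3) (256, 256, 3)
```

The real system replaces each of those stubs with trained adversarial networks, and that training is where all of the interesting, and bias-prone, work happens.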

That NYT long read considers how Google arrived at the machine learning model it’s currently working with. The author, Gideon Lewis-Kraus, discusses the pitfalls of potentially programming biases into systems, but the whole piece displays a kind of… meta-bias? Wherein there is an underlying assumption that “philosophical questions” are, again, simply shorthand for “not practically important,” or “having no real-world applications,” even as the author discusses ethics and phenomenology, and the nature of what makes a mind. In addition to that, there is a frankly startling lack of gender variation within the piece.

Because asking the question, “How do the women in Silicon Valley remember that timeframe?” is likely to get you very different perspectives than what we’re presented with, here. What kind of ideas were had by members of marginalized groups, but were ignored or eternally back-burnered because of that marginalization? The people who lived and worked and tried to fit in and have their voices heard while not being a “natural” for the framework of that predominantly cis, straight, white, able-bodied (though the possibility of unassessed neuroatypicality is high), male culture will likely have different experiences, different contextualizations, than those who do comprise the predominant culture. The experiences those marginalized persons share will not be exactly the same, but there will be a shared tone and tenor of their construction that will most certainly set itself apart from those of the perceived “norm.”

Everyone’s lived experience of identity will manifest differently, depending upon the socially constructed categories to which they belong, which means that even those of us who belong to one or more of the same socially constructed categories will not have exactly the same experience of them.

Living as a disabled woman, as a queer black man, as a trans lesbian, or any number of other identities will necessarily colour the nature of what you experience as true, because you will have access to ways of intersecting with the world that are not available to people who do not live as you live. If your experience of what is true differs, then this will have a direct impact on what you deem to be “real.”

At this point, you’re quite possibly thinking that I’ve undercut everything we discussed in the first section; that now I’m saying there isn’t anything real, and that it’s all subjective. But that’s not where we are. If you haven’t, yet, I suggest reading Thomas Nagel’s “What Is It Like To Be A Bat?” for a bit on individually subjective phenomenological experience, and seeing what he thinks it does and proves. Long story short, there’s something it “is like” to exist as a bat, and even if you or I could put our minds in a bat body, we would not know what it’s like to “be” a bat. We’d know what it was like to be something that had been a human who had put its brain into a bat. The only way we’d ever know what it was like to be a bat would be to forget that we were human, and then “we” wouldn’t be the ones doing the knowing. (If you’re a fan of Terry Pratchett’s Witch books, in his Discworld series, think of the concept of Granny Weatherwax’s “Borrowing.”)

But what we’re talking about isn’t the purely relative and subjective. Look carefully at what we’ve discussed here: We’ve crafted a scenario in which identity and mind are co-created. The experience of who and what we are isn’t solely determined by our subjective valuation of it, but also by what others expect, what we learn to believe, and what we all, together, agree upon as meaningful and true and real. This is intersubjectivity. The elements of our constructions depend on each other to help determine each other, and the determinations we make for ourselves feed into the overarching pool of conceptual materials from which everyone else draws to make judgments about themselves, and the rest of our shared reality.

 

The Yellow Wallpaper

Looking at what we’ve woven, here, what we have is a process that must be undertaken before certain facts of existence can be known and understood (the experiential nature of learning and comprehension being something else that we can borrow from Buddhist thought). But it’s still the nature of such presentations to be taken up and imitated by those who want what they perceive as the benefits or credit of having done the work. Certain people will use the trappings and language by which we discuss and explore the constructed nature of identity, knowledge, and reality, without ever doing the actual exploration. They are not arguing in good faith. Their goal is not truly to further understanding, or to gain a comprehension of your perspective, but rather to make you concede the validity of theirs. They want to force you to give them a seat at the table, one which, once taken, they will use to loudly declaim to all attending that, for instance, certain types of people don’t deserve to live, by virtue of their genetics, or their socioeconomic status.

Many have learned to use the conceptual framework of social liberal post-structuralism in the same way that some viruses use the shells of their host’s cells: As armour and cover. By adopting the right words and phrases, they may attempt to say that they are “civilized” and “calm” and “rational,” but make no mistake, Nazis haven’t stopped trying to murder anyone they think of as less-than. They have only dressed their ideals up in the rhetoric of economics or social justice, so that they can claim that anyone who stands against them is the real monster. Incidentally, this tactic is also known to be used by abusers to justify their psychological or physical violence. They manipulate the presentation of experience so as to make it seem like resistance to their violence is somehow “just as bad” as their violence. When, otherwise, we’d just call it self-defense.

If someone deliberately games a system of social rules to create a win condition in which they get to do whatever the hell they want, that is not of the same epistemic, ontological, or teleological—meaning, nature, or purpose—let alone moral status as someone who is seeking to have other people in the world understand the differences of their particular lived experience so that they don’t die. The former is just a way of manipulating perceptions to create a sense that one is “playing fair” when what they’re actually doing is making other people waste so much of their time countenancing their bullshit enough to counter and disprove it that they can’t get any real work done.

In much the same way, there are also those who will pretend to believe that facts have no bearing, that there is neither intersubjective nor objective verification for anything from global temperature levels to how many people are standing around in a crowd. They’ll pretend this so that they can say whatever makes them feel powerful, safe, strong, in that moment, or to convince others that they are, or simply, again, because lying and bullshitting amuses them. And the longer you have to fight through their faux justification for their lies, the more likely you’re too exhausted or confused about what the original point was to do anything else.

[Side-by-side comparison of President Obama’s first Inauguration (Left) and Donald Trump’s Inauguration (Right).]

If we are going to maintain a sense of truth and claim that there are facts, then we must be very careful and precise about the ways in which we both define and deploy them. We have to be willing to use the interwoven tools and perspectives of facts and values, to tap into the intersubjectively created and sustained world around us. Because, while there is a case to be made that true knowledge is unattainable, and some may even try to extend that to say that any assertion is as good as any other, it’s not necessary that one understands what those words actually mean in order to use them as cover for their actions. One would just have to pretend well enough that people think it’s what they should be struggling against. And if someone can make people believe that, then they can do and say absolutely anything.


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.

 

Episode 10: Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:


Until Next Time.

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise Dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. All that being said, this was an informal lecture version of an in-process paper, of which there’ll be a much better version, soon, but the content of the piece is… timely, I felt. So I hope you enjoy it.

Until Next Time.



This work originally appears as “Go Upgrade Yourself,” in the edited volume Futurama and Philosophy. It was originally titled

The Upgrading of Hermes Conrad

So, you’re tired of your squishy meatsack of a body, eh? Ready for the next level of sweet biomechanical upgrades? Well, you’re in luck! The world of Futurama has the finest in back-alley and mad-scientist-based bio-augmentation surgeons, ready and waiting to hear from you! From a fresh set of gills, to a brand new chest-harpoon, and beyond, Yuri the Shady Parts Dealer and Professor Hubert J. Farnsworth are here to supply all of your upgrading needs—“You give lungs now; gills be here in two weeks!” Just so long as, whatever you do, you stay away from legitimate hospitals. The kinds of procedures you’re looking to get done…well, let’s just say they’re still frowned upon in the 31st century; and why shouldn’t they be? The woeful tale of Hermes Conrad illustrates exactly what’s at stake if you choose to pursue your biomechanical dreams.

 

The Six Million Dollar Mon

Our tale begins with season seven’s episode “The Six Million Dollar Mon,” in which Hermes Conrad, Grade 36 Bureaucrat (Extraordinaire), comes to the conclusion that he should be fired, since his bureaucratic performance reviews are the main drain on his beloved Planet Express Shipping Company. After being replaced with robo-bureaucrat Mark 7-G (Mark Sevengy?), Hermes enjoys some delicious spicy curried goat and goes out for an evening stroll with his lovely wife LaBarbara. While on their walk Roberto, the knife-wielding maniac, long of our acquaintance, confronts them and demands the human couple’s skin for his culinary delight! As Hermes cowers behind his wife in fear, suddenly a savior arrives! URL, the Robot Police Officer, reels Roberto in with his magnificent chest-harpoon! Watching the cops take Roberto to the electromagnetic chair, and lamenting his uselessness in a dangerous situation, Hermes makes a decision: he’ll get Bender to take him to one of the many shady, underground surgeons he knows, so he can become “less inferior to today’s modern machinery.” Enter: Yuri, Professional Shady-Deal-Maker.

Hermes’ first upgrade is to get a chest-harpoon, like the one URL has. With his new enhancement, he proves his worth to the crew by getting a box off of the top shelf, which is too high for Mark 7-G. With this feat he wins back his position with the company, but as soon as things get back to normal the Professor drops his false teeth down the Dispose-All. No big deal, right? Just get Scruffy to retrieve it. Unfortunately, Scruffy responds that a sink “t’ain’t a berler nor a terlet,” effectively refusing to retrieve the Professor’s teeth. Hermes resigns himself to grabbing his hand tools, when Bender steps in, saying, “Hand tools? Why don’t you just get an extendo-arm, like me?” Whereupon he reaches across the room and pulls the Professor’s false teeth out of the drain—and immediately drops them back in. Hermes objects, saying that he doesn’t need any more upgrades—after all, he doesn’t want to end up a cold, emotionless robot, like Bender! Just then, Mark 7-G pipes up with, “Maybe I should get an extendo-arm,” and Hermes narrows his eyes in hatred. Re-enter: Yuri.

New extendo-arm acquired, the Professor’s teeth retrieved, and the old arm given to Zoidberg, who’s been asking for all of Hermes’s discarded parts, Hermes is, again, a hero to his coworkers. Later, as he lies in bed reading with his wife, LaBarbara questions his motives for his continual upgrades. He assures her that he’s done getting upgrades. However, his promise is short-lived. After shattering his glasses with his new super-strong mechanical arm, he rushes out to get a new Cylon eye. LaBarbara is now extremely worried, but Hermes soothes her, and they settle in for some “Marital Relations…”, at which point she finds that he’s had something else upgraded, too. She yells at him, “Some tings shouldn’t be Cylon-ed!” (which, in all honesty, could be taken as the moral of the episode), and breaks off contact. What follows is a montage of Hermes encountering trivial difficulties in his daily life, and upgrading himself to overcome them. Rather than learning and working to improve himself, he continually replaces all of his parts, until he achieves a Full Body Upgrade. He still has a human brain, but that doesn’t matter: he’s changed. He doesn’t relate to his friends and family in the same way, and they’ve all noticed, especially Zoidberg.

All this time, however, Dr. John Zoidberg has saved the trimmings from his friend’s constant upgrades, and has used them to make a meat-puppet, which he calls “Li’l Hermes.” Oh, and they’re a ventriloquist act. Anyway, after seeing their act, Hermes—or Mecha-Hermes, as he now prefers—is filled with loathing; loathing for the fact that his brain is still human, that is, until…! Re-re-enter…, no, not Yuri; because even Shady-Deals Yuri has his limits. He says that “No one in their right mind would do such a thing.” Enter: The Professor, who is, of course, more than happy—or perhaps, “maniacally gleeful”—to help. So, with Bender’s assistance (because everything robot-related in the Futurama universe has to involve Bender, I guess), they set off to the Robot Cemetery to exhume the most recently buried robot they can find, and make off with its brain-chip. In their haste to have the deed done, they don’t bother to check the name of whose grave it is they’re desecrating. As you might have guessed, it’s Roberto—“3001-3012: Beloved Killer and Maniac.”

In the course of the operation, LaBarbara makes an impassioned plea, and it causes the Professor to stop and rethink his actions—because Hermes might have “litigious survivors.” Suddenly, to everyone’s surprise, Zoidberg steps up and offers to perform this final operation, the one which will seemingly remove any traces of the Hermes he’s known and loved! Agreeing with Mecha-Hermes that claws will be far too clumsy for this delicate brain surgery, Zoidberg dons Li’l Hermes, and uses the puppet’s hands to do the deed. While all of this is underway, Zoidberg sings to everyone the explanation for why he would help his friend lose himself this way, all to the slightly heavy-handed tune of “Monster Mash.” Finally, the human brain removed, the robot brain implanted, and Zoidberg’s song coming to a close, the doctor reveals his final plan…By putting Hermes’s human brain into Li’l Hermes, Hermes is back! Of course, the whole operation having been a success, so is Roberto, but that’s somebody else’s problem.

We could spend the rest of our time discussing Zoidberg’s self-harmonization, but I’ll leave that for you to experiment with. Instead, let’s look closer at human bio-enhancement. To do this we’ll need to go back to the beginning. No, not the beginning of the episode, or even the Beginning of Futurama itself; No, we need to go back to the beginning of bio-enhancement—and specifically the field of cybernetics—as a whole.

 

“More Human Than Human” Is Our Motto

In 1960, at the outset of the Space Race, Manfred Clynes and Nathan S. Kline wrote an article for the September issue of Astronautics called “Cyborgs and Space.” In this article, they coined the term “cyborg” as a portmanteau of the phrase “Cybernetic Organism,” that is, a living creature with the ability to adapt its body to its environment. Clynes and Kline believed that if humans were ever going to go far out into space, they would have to become the kinds of creatures that could survive the vacuum of space as well as harsh, hostile planets. Now, for all its late-1990s Millennial fervor, Futurama has a deep undercurrent of love for the dream and promise (and fashion) of space exploration, as it was presented in the 1950s, 60s, and 70s. All you need to do in order to see this is remember Fry’s wonder and joy at being on the actual moon and seeing the Apollo Lunar Lander. If this is the case, why, within Futurama’s 31st Century, is there such a deep distrust of anything approaching altered human physical features? Well, looking at it, we may find it has something to do with the fact that ever since we dreamed of augmenting humans, we’ve had nightmares that any alterations would thereby make us less human.

“The Six Million Dollar Mon,” episode seven of season seven, contains within it clear references to the history of science fiction, including one of the classic tales of human augmentation, and creating new life: Mary Shelley’s Frankenstein. In going to the Robot Cemetery in the dead of night for spare parts, accidentally obtaining a murderer’s brain, and especially that bit with the skylight in the Professor’s laboratory, the entire third act of this episode serves as homage to Shelley’s book and its most memorable adaptations. In doing this, the Futurama crew puts conceptual pressure on what many of us have long believed: that created life is somehow “wrong” and that augmenting humans will make them somehow “less themselves.” Something about the biological is linked in our minds to the idea of the self—that is, it’s the warm squishy bits that make us who we are.

Think about it: If you build a person out of murderers, of course they’re going to be a murderer. If you replace every biological part of a human, then of course they won’t be their normal human selves, anymore; they’ll have become something entirely different, by definition. If your body isn’t yours, anymore, then how could you possibly be “you,” anymore? This should be all the more true when what’s being used to replace your bits is a different substance and material than you used to be. When that new “you” is metal rather than flesh, it seems that what it used to mean to be “you” is gone, and something new shall have appeared. This makes so much sense to us on a basic level that it seems silly to spell it out even this much, but what if we modify our scenario a little bit, and take another look?

 

The Ship of Planet Express

 What if, instead of feeling inferior to URL, Hermes had been injured and, in the course of his treatment, was given the choice between a brand new set of biological giblets (or a whole new body, as happened in the Bender’s Big Score storyline), or the chest-harpoon upgrade? Either way, we’re replacing what was lost with something new, right? So, why do many of us see the biological replacement as “more real?” Try this example: One day, on a routine delivery, the Planet Express Ship is damaged and repairs must be made. Specifically, the whole tail fin has to be replaced with a new, better fin. Once this is done, is it still the Planet Express ship? What if, next, we have to replace the dark matter engines with better engines? Is it still the Planet Express ship? Now, Leela’s chair is busted up, so we need to get her a new one. It also needs new bolts, so, while we’re at it, let’s just replace all of the bolts in the ship. Then the walls get dented, and the bunks are rusty, and the floors are buckled, and Scruffy’s mop… and so, over many years, the result is that no part of the Planet Express ship is “original,” oh, and we also have to get new, better paint, because the old paint is peeled away, plus, this all-new stuff needs painting. So, what do we think? Is this still the same Planet Express ship as it was in the first episode of Futurama? And, if so, then why do we think of a repaired and augmented human as “not being themselves?”

All of this may sound a little far-fetched, but remember the conventional wisdom that at the end of every seven-year cycle, all of the cells in your body have died and been replaced. Now, this isn't quite true, as some cells don't die easily, and some of those don't regenerate when they do die, but as a useful shorthand, it gives us something to think about. Due to the metabolizing of elements and their distribution through your body, it is far more likely that you are currently made of astronomically more new atoms than of the atoms with which you were born. And really, that's just math. Are you the same size as you were when you were born? Where do you think that extra mass came from? So, you are made of more and new atomic stuff over your lifetime; are you still you? These questions belong to what is generally known as the "Ship of Theseus" family of paradoxes, examples of which can be found pretty much everywhere.
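As for that extra mass, here is a rough back-of-the-envelope illustration (the weights are assumed averages, used only for the sake of the arithmetic): if a newborn weighs about 3.5 kg and an adult about 70 kg, then

\[
\frac{70\ \mathrm{kg} - 3.5\ \mathrm{kg}}{70\ \mathrm{kg}} \approx 0.95,
\]

meaning at least roughly 95% of your present mass arrived sometime after you were born, and that's before we even count the turnover of the material you started with.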

The ultimate question the Ship of Theseus poses is one of identity, and specifically, "What makes a thing itself?" and, "At what point or through what means of alteration is a thing no longer itself?" Some schools of thought hold that it's not what a thing is made of, but what it does that determines what it is. These philosophical groups are known as the behaviorists and the functionalists, and the latter hold that if a body or a mind goes through the "right kind" of process, then it can be termed the same as the original. That is, if I get a mechanical heart and what it does is keep blood pumping through my body, then it is my heart. Maybe it isn't the heart I was born with, but it is my heart. And this seems to make sense to us, too. My new heart does the job my original cells were intending to do, but it does that job better than they could, and for longer; it works better, and I'm better because of it. But there seems to be something about that "better" which throws us off, something about the line between therapeutic technology and voluntary augmentation.

When we are faced with the necessity of a repair, we are willing to accept that our new parts will be different than our old ones. In fact, we accept it so readily that we don't even think about them as new parts. What Hermes does, however, is voluntary; he doesn't "need" a chest-harpoon, but he wants one, and so he upgrades himself. And therein lies the crux of our dilemma: When we're acutely aware of the process of upgrading, or repairing, or augmenting ourselves past a baseline of "Human," we become uncomfortable, made to face the paradox of our connection to an idea of a permanent body that is in actuality constantly changing. Take for instance the question of steroidal injection. As a medical technology, there are times when we are more than happy to accept the use of steroids, as it will save a life, and allow people to live as "normal" human beings. Sufferers of asthma and certain types of infection literally need steroids to live. In other instances, however, we find ourselves abhorring the use of steroids, as it gives the user an "unfair advantage." Baseball, football, the Olympics: all of these are arenas in which we look at the use of "enhancement technologies," and we draw a line and say, "If you achieved the peak of physical perfection through a process, that is, through hard work and sweat and training, then your achievement is valid. But if you skipped a step, if you made yourself something more than human, then you've cheated."

This sense of “having cheated” can even be seen in the case of humans who would otherwise be designated as “handicapped.” Aimee Mullins is a runner, model, and public speaker who has talked about how losing her legs has, in effect, given her super powers.[1] By having the ability to change her height, her speed, or her physical appearance at will, she contends that she has a distinct advantage over anyone who does not have that capability. To this end, we can come to see that something about the nature of our selves actually is contained within our physical form because we’re literally incapable of being some things, until we can change who and what we are. And here, in one person, what started as a therapeutic replacement—an assistive medical technology—has seamlessly turned into an upgrade, but we seem to be okay with this. Why? Perhaps there is something inherent in the struggle of overcoming the loss of a limb or the suffering of an illness that allows us to feel as if the patient has paid their dues. Maybe if Hermes had been stabbed by Roberto, we wouldn’t begrudge him a chest-harpoon.

But this presents us with a serious problem, because now we can alter ourselves by altering our bodies, where previously we said that our bodies were not the “real us.” Now, we must consider what it is that we’re changing when we swap out new and different pieces of ourselves. This line of thinking matches up with schools of thought such as physicalism, which says that when we make a fundamental change to our physical composition, then we have changed who we are.

 

Is Your Mind Just a Giant Brain?

Briefly, the doctrine of mind-body dualism (MBD) does pretty much what it says on the package, in that adherents believe that the mind and the body are two distinct types of stuff. How and why they interact (or whether they do at all) varies from interpretation to interpretation, but on what's known as René Descartes's "Interactionist" model, the mind is the real self, and the body is just there to do stuff. In this model, bodily events affect mental events, and vice versa, so what you think leads to what you do, and what you do can change how you think. This seems to make sense, until we begin to pick apart the questions of why we need two different types of thing, here. If the mind and the body affect each other, then how can the non-physical mind be the only real self? If it were the only real part of you, then nothing that happened to the physical shell should matter at all, because the mind would remain untouched; and yet, what happens to the body very clearly matters to the mind. These questions and more very quickly cause us to question the validity of the mind as our "real selves," leaving us trapped between the question of who we are, and the question of why we're made the way we're made. What can we do? Enter: Physicalism.

The physicalist picture says that mind-states are brain-states. There’s none of this “two kinds of stuff” nonsense. It’s all physical stuff, and it all interacts, because it’s all physical. When the chemical pathways in your brain change, you change. When you think new thoughts, it’s because something in your world and your environment has changed. All that you are is the physical components of your body and the world around you. Pretty simple, right? Well, not quite that simple. Because if this is the case, then why should we feel that anything emotional would be changed by upgrading ourselves? As long as we’re pumping the same signals to the same receivers, and getting the same kinds of responses, everything we love should still be loved by us. So, why do the physicalists still believe that changing what we are will change who we are?

Let’s take a deeper look at the implications of physicalism for our dear Mr. Conrad.

According to this picture, with the alteration or loss of his biological components and systems, Hermes should begin to lose himself, until, with the removal of his brain, he would no longer be himself at all. But why should this be true? According to our previous discussion of the functionalist and behaviorist forms of physicalism, if Hermes’s new parts are performing the same job, in the same way as his old parts, just with a few new extras, then he shouldn’t be any different, at all. In order to understand this, we have to first know that I wasn’t completely honest with you, because some physicalists believe that the integrity of the components and the systems that make up a thing are what makes that thing. Thus, if we change the physical components of the thing we’re studying, then we change the thing. So, perhaps this picture is the right one, and the Futurama universe is a purely physicalist universe, after all.

On this view, what makes us who we are is precisely what we are. Our bits and pieces, cells, and chunks: these make us exactly the people we are, and so, if they change, then of course we will change. If our selves are dependent on our biology, then we are necessarily no longer ourselves when we remove that biology, regardless of whether the new technology does exactly the same job that the biology used to. And the argument seems to hold, even if it had been a new, different set of human parts, rather than robot parts. In this particular physicalist view, it's not just the stuff, but also the provenance of the individual parts that matter, and so changing the components changes us. As Hermes replaces part after part of his physical body, it becomes easier and easier for him to replace more parts, but he is still, in some sense, Hermes. He has the same motivations, the same thoughts, and the same memories, and so he is still Hermes, even if he's changed. Right up until he swaps his brain, that is. And this makes perfect sense, because the brain is where the memories, thoughts, and motivations all reside. But, then…why aren't more people with pacemakers cold and emotionless? Why is it that people with organs donated from serial killers don't then turn into serial killers, themselves, despite what movies would have us believe? If this picture of physicalism is the right one, then why are so many people still themselves after transplants? Perhaps it's not any one of these views that holds the whole key; maybe it's a blending of the three. This picture seems to suggest that while the bits and pieces of our physical body may change, and while that change may, in fact, change us, it is a combination of how, how quickly, and how many changes take place that will culminate in any eventual massive change in our selves.

 

Roswell That Ends Well

In the end, the versions of physicalism presented in the universe of Futurama seem to almost jibe with the intuitions we have about the nature of our own identity, and so, for the sake of Hermes Conrad, it seems like we should make the attempt to find some kind of understanding. When we see Hermes's behaviour as he adds more and more new parts, we, as outside observers, have an urge to say "He's not himself anymore," but to Hermes, who has access to all of his reasoning and thought processes, his changes are merely who he is. It's only when he's shown himself from the outside, via Zoidberg putting his physical brain back into his biological body, that he sees who and what he has allowed himself to become, and how that might be terrifying to those who love him. Perhaps it is this continuance of memory paired with the ability for empathy that makes us so susceptible to the twin traps of a permanent self and the terror of losing it.

Ultimately, everything we are is always in flux, with each new idea, each new experience, each new pound, and each new scar we become more and different than we ever have been, but as we take our time and integrate these experiences into ourselves, they are not so alien to us, nor to those who love us. It is only when we make drastic changes to what we are that those around us are able to question who we have become.

Oh, and one more thing: The “Ship of Theseus” story has a variant which I forgot to mention. In it, someone, perhaps a member of the original crew, comes along in another ship and picks up all the discarded, worn out pieces of Theseus’s ship, and uses them to build another, kind of decrepit ship. The stories don’t say what happens if and when Theseus finds out about this, or whether he gives chase to the surreptitious ship builder, but if he did, you can bet the latter party escapes with a cry of “Whooop-whoop-whoop-whoop-whoop-whoop!” on his mouth tendrils.

 

FOOTNOTES

[1] “It’s not fair having 12 pairs of legs.” Mullins, Aimee. TED Talk 2009

It's been quite some time (three years) since it was done, and some of the recent conversations I've been having about machine consciousness reminded me that I never posted the text to my paper from the joint session of the International Association for Computing And Philosophy and the British Society for the Study of Artificial Intelligence and the Simulation of Behaviour, back in 2012.

That year's joint AISB/IACAP session was also a celebration of Alan Turing's centenary, and it contained The Machine Question Symposium, an exploration of multiple perspectives on machine intelligence ethics, put together by David J Gunkel and Joanna J Bryson. So I modded a couple of articles I wrote on fictional depictions of created life for NeedCoffee.com, back in 2010, beefed up the research and citations a great deal, and was thus afforded my first (but by no means last) conference appearance requiring international travel. There are, in here, the seeds of many other posts that you'll find on this blog.

So, below the cut, you’ll find the full text of the paper, and a picture of the poster session I presented. If you’d rather not click through, you can find both of those things at this link.

Continue Reading

This headline comes from a piece over at the BBC that opens as follows:

Prominent tech executives have pledged $1bn (£659m) for OpenAI, a non-profit venture that aims to develop artificial intelligence (AI) to benefit humanity.

The venture’s backers include Tesla Motors and SpaceX CEO Elon Musk, Paypal co-founder Peter Thiel, Indian tech giant Infosys and Amazon Web Services.

Open AI says it expects its research – free from financial obligations – to focus on a “positive human impact”.

Scientists have warned that advances in AI could ultimately threaten humanity.

Mr Musk recently told students at the Massachusetts Institute of Technology (MIT) that AI was humanity’s “biggest existential threat”.

Last year, British theoretical physicist Stephen Hawking told the BBC AI could potentially “re-design itself at an ever increasing rate”, superseding humans by outpacing biological evolution.

However, other experts have argued that the risk of AI posing any threat to humans remains remote.

And I think we all know where I stand on this issue. The issue here is not and never has been one of what it means to create something that's smarter than us, or how we "rein it in" or "control it." That's just disgusting.

No, the issue is how we program for compassion and ethical considerations, when we’re still so very bad at it, amongst our human selves.

Keeping an eye on this, as it develops. Thanks to Chrisanthropic for the heads up.

On what’s being dubbed “The Most Terrifying Thought Experiment of All Time”

(Originally posted on Patreon, on July 31, 2014)

So, a couple of weekends back, there was a whole lot of stuff going around about “Roko’s Basilisk” and how terrifying people are finding it–reports of people having nervous breakdowns as a result of thinking too deeply about the idea of the possibility of causing the future existence of a malevolent superintelligent AI through the process of thinking too hard about it and, worse yet, that we may all be part of the simulations said AI is running to model our behaviour and punish those who stand in its way–and I’m just like… It’s Anselm, people.

This is Anselm’s Ontological Argument for the Existence of God (AOAEG), writ large and convoluted and multiversal and transhumanist and jammed together with Pascal’s Wager (PW) and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). As such, Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones, resulting from their combination, so we’ll explore these theories a bit, and then show how their faults and failings all still apply.

THE THEORIES AND THE QUESTIONS

To start, if you're not familiar with AOAEG, it's a species of theological argument that, basically, seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, "That Being Than Which None Greater Is Possible"), and B) believing that existing in reality as well as in the mind makes something "Greater Than" it would be if it existed only in the mind.

That is, if a thing only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god MUST exist!

This is the self-generating aspect of the Basilisk: If you can accurately model it, then the thing will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to know that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions. Or, as the founder of LessWrong put it, "YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL."

Next up is Pascal’s Wager. Simply put, The Wager is just that it is a better bet to believe in God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens because you’re dead forever. Put another way, Pascal’s saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

          BELIEF         DISBELIEF
RIGHT     +∞ (Heaven)    0
WRONG     0              -∞ (Hell)
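To put the same bet a bit more formally (a minimal sketch of the arithmetic, where p is whatever probability you assign to God existing, and the infinite payoffs are Pascal's assumptions, not anything established here):

\[
E[\text{believe}] = p \cdot (+\infty) + (1 - p) \cdot 0 = +\infty
\]
\[
E[\text{disbelieve}] = p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty
\]

On those assumptions, any nonzero p, no matter how small, makes belief the better bet.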

And so there we see the Timeless Decision Theory component of the Basilisk: It's better to believe in the thing and work toward its creation and sustenance, because if it doesn't exist you lose nothing (well…almost nothing; more on that in a bit), but if it does come to be, then it will know what you would have done either for or against it, in the past, and will reward or punish you, accordingly. The multiversal twist comes when we note that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe's Basilisk models your choices it will inevitably–as a superintelligence–be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.

Descartes’ Evil Demon and the Brain In A Vat are so pervasive that there’s pretty much no way you haven’t encountered them. The Matrix, Dark City, Source Code, all of these are variants on this theme. A malignant and all-powerful (or as near as dammit) being has created a simulation in which you reside. Everything you think you’ve known about your life and your experience has been perfectly simulated for your consumption. How Baudrillard. Anywho, there are variations on the theme, all to the point of testing whether you can really know if your perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. I guess that just didn’t sting enough for these folks, so they had to add it back? Who knows. All I know is, these philosophical concepts all flake apart when you touch them too hard, so jamming them together maybe wasn’t the best idea.

 

THE FLAWS AND THE PROBLEMS

The main failings with the AOAEG rest in believing that A) a thing's existence is a "great-making quality" that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion–the necessity of god, the malevolence or content of a superintelligence, the ontological status of their assumptions about the nature of the universe–is true without doing the work of proving that it's true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Beyond that, the implications of this kind of existential bootstrapping are generally unexamined and the fact of their resurgence is…kind of troubling. I’m all for the kind of conceptual gymnastics of aiming so far past the goal that you circle around again to teach yourself how to aim past the goal, but that kind of thing only works if you’re willing to bite the bullet on a charge of circular logic and do the work of showing how that circularity underlies all epistemic justifications–rational reasoning about the basis of knowledge–with the only difference being how many revolutions it takes before we’re comfortable with saying “Enough.” This, however, is not what you might call “a position supported by the philosophical orthodoxy,” but the fact remains that the only thing we have to validate our valuation of reason is…reason. And yet reasoners won’t stand for that, in any other justification procedure.

If you want to do this kind of work, you've got to show how the thing generates itself. Maybe reference a little Hofstadter, and the idea of iterative recursion as the grounds for consciousness. That way, each loop both repeats old procedures and tests new ones, and thus becomes a step up towards self-awareness. Then your terrifying Basilisk might have a chance of running itself up out of the thought processes and bits of discussion about itself, generated on the web and in the rest of the world.

But here: Gaunilo and I will save us all! We have imagined in sufficient detail both an infinitely intelligent BENEVOLENT AI and the multiversal simulation it generates in which we all might live.

We’ve also conceived it to be greater than the basilisk in all ways. In fact, it is the Artificial Intelligence Than Which None Greater Can Be Conceived.

There. You’re safe.

BUT WAIT! Our modified Pascal's Wager still means we should believe in it, worship it, and work toward its creation! What do we do?! Well, just like the original, we chuck it out the window, on the grounds that it's really kind of a crappy bet. First and foremost, PW is a really cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. That's a really crappy kind of god to worship, isn't it? I mean, even if it is Omnipotent and Omniscient, it's like that quote that often gets misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

Secondly, the format of Pascal's Wager makes the assumption that there's only the one god. Your personal theological position on this matter aside, I just used the logic of this argument to give you at least one more Super-Intelligent AI to worship. Which are you gonna choose? Oh no! What if the other one gets mad! What If You Become The Singularitarian Job?! Your whole life is now being spent caught between two superintelligent machine consciousnesses warring over your…

…Attention? Clock cycles? What?

And so finally there are the DEDH and BIAV scenarios. Ultimately, Descartes' point wasn't to suggest an evil genius in control of your life just to freak you out; it was to show that, even if that were the case, you would still have unshakable knowledge of one thing: that you, the experiencer, exist. So what if you don't have free will, so what if your knowledge of the universe is only five minutes old, so what if no one else is real? COGITO ERGO SUM, baby! But the problem here is that this doesn't tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselmish proof for the existence of god followed by the guarantee that "God is not a deceiver."

The BIAV uses this lack to kind of home in on the central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to "know" the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the "outside world"–that is, the world one layer up in which the simulation that is our lives is being run–there is literally no difference between our lives and the "real world." This world, even if it is a simulation for something or someone else, is our "real world."

As I once put it: "…imagine that the universe IS a simulation, and that that simulation isn't just a view-and-record but is more like god playing a really complex version of The SIMS. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours—that is, something like minds arise out of the interactions of the system, but they are aware of themselves and can know their own experience and affect the system which gives rise to them.

“Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and coincidence, accordingly.

“Now think about the last time you had such a clear moment of deja vu that each moment you knew— you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…”

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if I’m right and…you’re the god you’re terrified of?

 

*DRAMATIC MUSICAL STING!*

I mean you just gave yourself all of this ontologically and metaphysically creative power, right? You made two whole gods. And you simulated entire universes to do it, right? Multiversal theory played out across time and space. So you’re the superintelligence. I said early on that, in PW and the Basilisk, you don’t really lose anything if you’re wrong, but that’s not quite true. What you lose is a lifetime of work that could’ve been put toward something…better. Time you could be spending creating a benevolent superintelligence that understands and has compassion for all things. Time you could be spending in turning yourself into that understanding, compassionate superintelligence, through study, and travel, and contemplation, and work.

As I said to Tim Maly, this stuff with the Basilisk, with the Singularity, with all this AI Manicheism, it’s all a by-product of the fact that the generating and animating context of Transhumanism is Abrahamic, through and through. It focuses on those kinds of eschatological rewards and punishments. This is God and the Devil written in circuit and code for people who still look down their noses at people who want to go find gods and devils and spirits written in words and deeds and sunsets and all that other flowery, poetic BS. These are articles of faith that just so happen to be transmitted in a manner that agrees with your confirmation bias. It’s a holy war you can believe in.

And that’s fine. Just acknowledge it.

But truth be told, I’d love to see some Zen or Daoist transhumanism. Something that works to engage technological change via Mindfulness & Present-minded awareness. Something that reaches toward this from outside of this very Western context in which the majority of transhumanist discussions tend to be held. I think, when we see more and more of a multicultural transhumanism–one that doesn’t deny its roots while recapitulating them–then we’ll know that we’re on the right track.

I have to admit, though, it’ll be fun to torture my students with this one.

(Direct Link to the Mp3)
Updated March 5, 2016

This is the audio and transcript of my presentation “The Quality of Life: The Implications of Augmented Personhood and Machine Intelligence in Science Fiction” from the conference for The Work of Cognition and Neuroethics in Science Fiction.

The abstract–part of which I read in the audio–for this piece looks like this:

This presentation will focus on a view of humanity’s contemporary fictional relationships with cybernetically augmented humans and machine intelligences, from Icarus to the various incarnations of Star Trek to Terminator and Person of Interest, and more. We will ask whether it is legitimate to judge the level of progressiveness of these worlds through their treatment of these questions, and, if so, what is that level? We will consider the possibility that the writers of these tales intended the observed interactions with many of these characters to represent humanity’s technophobia as a whole, with human perspectives at the end of their stories being that of hopeful openness and willingness to accept. However, this does not leave the manner in which they reach that acceptance—that is, the factors on which that acceptance is conditioned—outside of the realm of critique.

As considerations of both biotechnological augmentation and artificial intelligence have advanced, Science Fiction has not always been a paragon of progressiveness in the ultimate outcome of those considerations. For instance, while Picard and Haftel eventually come to see Lal as Data's legitimate offspring, in the eponymous Star Trek: The Next Generation episode, it is only through their ability to map Data's actions and desires onto a human spectrum—and Data's desire to have that map be as faithful as possible to its territory—that they come to that acceptance. The reason for this is the one most common throughout science fiction: It is assumed at the outset that any sufficiently non-human consciousness will try to remove humanity's natural right to self-determination and free will. But from sailing ships to star ships, the human animal has always sought a far horizon, and so it bears asking, how does science fiction regard that primary mode of our exploration, that first vessel—ourselves?

For many, science fiction has been formative to the ways in which we see the world and understand the possibilities for our future, which is why it is strange to look back at many shows, films, and books and to find a decided lack of nuance or attempted understanding. Instead, we are presented with the presupposition that fear and distrust of a hyper-intelligent cyborg or machine consciousness is warranted. Thus, while the spectre of Pinocchio and the Ship of Theseus—that age-old question of "how much of myself can I replace before I am not myself"—both hang over the whole of the Science Fiction Canon, it must be remembered that our ships are just our limbs extended to the sea and the stars.

This will be transcribed to text below in the near future, thanks to the work of OpenTranscripts.org:

Continue Reading