phenomenology


This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it elsewhere. I’ll just say that the question seems to miss a possibly central point: something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it and what would it be able to ask from us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:


[Direct link to Mp3]

[09/22/17: This post has been updated with a transcript, courtesy of Open Transcripts]

Back on March 13th, 2017, I gave an invited guest lecture, titled:

TECHNOLOGY, DISABILITY, AND HUMAN AUGMENTATION

‘Please join Dr. Ariel Eisenberg’s seminar, “American Identities: Disability,” and [the] Interdisciplinary Studies Department for an hour-long conversation with Damien Williams on disability and the normalization of technology usage, “means-well” technological innovation, “inspiration porn,” and other topics related to disability and technology.’

It was kind of an extemporaneous riff on my piece “On the Ins and Outs of Human Augmentation,” and it gave me the opportunity to namedrop Ashley Shew, Natalie Kane, and Rose Eveleth.

The outline looked a little like this:

  • Foucault and Normalization
    • Tech and sociological pressures to adapt to the new
      • Starts with Medical tech but applies Everywhere; Facebook, Phones, Etc.
  • Zoltan Istvan: In the Transhumanist Age, We Should Be Repairing Disabilities Not Sidewalks
  • All Lead To: Ashley Shew’s “Up-Standing Norms”
    • Listening to the Needs and Desires of people with disabilities.
      • See the story Shew tells about her engineering student, as related in the AFWTA Essay
    • Inspiration Porn: What is cast by others as “Triumphing” over “Adversity” is simply adapting to new realities.
      • Placing the burden on the disabled to be an “inspiration” is dehumanizing;
      • means those who struggle “have no excuse;”
      • creates conditions for a “who’s got it worse” competition
  • John Locke’s Empiricism: Primary and Secondary Qualities
    • Primary qualities of biology and physiology lead to secondary qualities of society and culture
      • Gives rise to Racism and Ableism, when it later combines with misapplied Darwinism to be about the “Right Kinds” of bodies and minds.
        • Leads to Eugenics: Forced sterilization, medical murder, operating and experimenting on people without their knowledge or consent.
          • “Fixing” people to make them “normal, again”
  • Natalie Kane’s “Means Well Technology”
    • Design that doesn’t take into account the way that people will actually live with and use new tech.
      • The way tech normalizes is never precisely the way designers want it to
        • William Gibson’s quote “The street finds its own uses for things.”
  • Against Locke: Embrace Phenomenological Ethics and Epistemology (Feminist Epistemology and Ethics)
    • Lived Experience and embodiment as crucial
    • The interplay of Self and Society
  • Ship of Theseus: Identity, mind, extensions, and augmentations change how we think of ourselves and how society thinks of us
    • See the story Shew tells about her friend with the hemipelvectomy, as related in the aforementioned AFWTA Essay

The whole thing went really well (though, thinking back, I’m not super pleased with my deployment of Dennett). Including Q&A, we got about an hour and forty minutes of audio, available at the embed and link above.

Also, I’m apparently the guy who starts off every talk with some variation on “This is a really convoluted interplay of ideas, but bear with me; it all comes together.”

The audio transcript is below the cut. Enjoy.


[Originally Published at Eris Magazine]

So Gabriel Roberts asked me to write something about police brutality, and I told him I needed a few days to get my head in order. The problem being that, with this particular topic, the longer I wait on this, the longer I want to wait on this, until, eventually, the avoidance becomes easier than the approach by several orders of magnitude.

Part of this is that I’m trying to think of something new worth saying, because I’ve already talked about these conditions, over at A Future Worth Thinking About. We talked about this in “On The Invisible Architecture of Bias,” “Any Sufficiently Advanced Police State…,” “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences,” and most recently in “On the European Union’s “Electronic Personhood” Proposal.” In these articles, I briefly outlined the history of systemic bias within many human social structures, and the possibility and likelihood of that bias translating into our technological advancements, such as algorithmic learning systems, use of and even access to police body camera footage, and the development of so-called artificial intelligence.

Long story short: the endemic nature of implicit bias in society as a whole, plus the even more insular Us-Vs-Them mentality within the American prosecutorial legal system, plus the fact that American policing was literally born out of slavery and the work of groups like the KKK, equals a series of interlocking systems in which people who are not whitepassing, not male-perceived, not straight-coded, not “able-bodied” (what we can call white supremacist, ableist, heteronormative, patriarchal hegemony, but we’ll just use the acronym WSAHPH, because it satisfyingly recalls that bro-ish beer advertising campaign from the late ’90s and early 2000s) stand a far higher likelihood of dying at the hands of agents of that system.

Here’s a quote from Sara Ahmed in her book The Cultural Politics of Emotion, which neatly sums this up: “[S]ome bodies are ‘in an instant’ judged as suspicious, or as dangerous, as objects to be feared, a judgment that can have lethal consequences. There can be nothing more dangerous to a body than the social agreement that that body is dangerous.”

At the end of this piece, I’ve provided some of the same list of links that sits at the end of “On The Invisible Architecture of Bias,” just to make it that little bit easier for us to find actual evidence of what we’re talking about, here, but, for now, let’s focus on these:

A Brief History of Slavery and the Origins of American Policing
2006 FBI Report on the infiltration of Law Enforcement Agencies by White Supremacist Groups
June 20, 2016 “Texas Officers Fired for Membership in KKK”

And then we’ll segue to the fact that we are, right now, living through the exemplary problem of the surveillance state. We’ve always been told that cameras everywhere will make us all safer, that they’ll let people know what’s going on and that they’ll help us all. People doubted this, even in Orwell’s day, noting that the more surveilled we are, the less freedom we have, but more recently people have started to hail this from the other side: Maybe videographic oversight won’t help the police help us, but maybe it will help keep us safe from the police.

But the sad fact of the matter is that there’s video of Alton Sterling being shot to death while restrained, and video of John Crawford III being shot to death by a police officer while holding a toy gun down at his side in a big box store where it was sold, and there’s video of Alva Braziel being shot to death while turning around with his hands up as he was commanded to do by officers, of Eric Garner being choked to death, of Delrawn Small being shot to death by an off-duty cop who cut him off in traffic. There’s video of so damn many deaths, and nothing has come of most of them. There is video evidence showing that these people were well within their rights, and in lawful compliance with officers’ wishes, and they were all shot to death anyway, in some cases by people who hadn’t even announced themselves as cops, let alone ones under some kind of perceived threat.

The surveillance state has not made us any safer, it’s simply caused us to be confronted with the horror of our brutality. And I’d say it’s no more than we deserve, except that even with protests and retaliatory actions, and escalations to civilian drone strikes, and even Newt fucking Gingrich being able to articulate the horrors of police brutality, most of those officers are still on the force. Many unconnected persons have been fired, for indelicate pronouncements and even white supremacist ties, but how many more are still on the force? How many racist, hateful, ignorant people are literally waiting for their chance to shoot a black person because he “resisted” or “threatened?” Or just plain disrespected. And all of that is just what happened to those people. What’s distressing is that those much more likely to receive punishment, however unofficial, are the ones who filmed these interactions and provided us records of these horrors, to begin with. Here, from Ben Norton at Salon.com, is a list of what happened to some of the people who have filmed police killings of non-police:

Police have been accused of cracking down on civilians who film these shootings.

Ramsey Orta, who filmed an NYPD cop putting unarmed black father Eric Garner in a chokehold and killing him, says he has been constantly harassed by police, and now faces four years in prison on drugs and weapons charges. Orta is the only one connected to the Garner killing who has gone to jail.

Chris LeDay, the Georgia man who first posted a video of the police shooting of Alton Sterling, also says he was detained by police the next day on false charges that he believes were a form of retaliation.

Early media reports on the shooting of Small uncritically repeated the police’s version of the incident, before video exposed it to be false.

Wareham noted that the surveillance footage shows “the cold-blooded nature of what happened, and that the cop’s attitude was, ‘This was nothing more than if I had stepped on an ant.'”

As we said, above, black bodies are seen as inherently dangerous and inhuman. This perception is trained into officers at an unconscious level, and is continually reinforced throughout our culture. Studies like the Implicit Association Test, this survey of U.Va. medical students, and this one of shooter bias all clearly show that people are more likely to a) associate words relating to evil and inhumanity with, b) believe pain receptors work in a fundamentally different fashion within, and c) shoot more readily at bodies that do not fit within WSAHPH. To put that a little more plainly, people have a higher tendency to think of non-WSAHPH bodies as fundamentally inhuman.

And yes, as we discussed, in the plurality of those AFWTA links, above, there absolutely is a danger of our passing these biases along not just to our younger human selves, but to our technology. In fact, as I’ve been saying often, now, the danger is higher, there, because we still somehow have a tendency to think of our technology as value-neutral. We think of our code and (less these days) our design as some kind of fundamentally objective process, whereby the world is reduced to lines of logic and math, and that simply is not the case. Codes are languages, and languages describe the world as the speaker experiences it. When we code, we are translating our human experience, with all of its flaws, biases, perceptual glitches, errors, and embellishments, into a technological setting. It is no wonder, then, that the algorithmic systems we use to determine the likelihood of convict recidivism, and thus their bail and sentencing recommendations, are seen to exhibit the same kind of racially-biased decision-making as the humans they learned from. How could this possibly be a surprise? We built these systems, and we trained them. They will, in some fundamental way, reflect us. And, at the moment, not much terrifies me more than that.
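That point about bias flowing from the training data into a supposedly “objective” model can be made concrete with a toy sketch. To be clear: every name and number below is hypothetical, invented for illustration, and not drawn from any real risk-assessment system. The sketch shows two groups with identical underlying behavior, one of which is policed twice as heavily; a naive model trained on the resulting arrest records dutifully learns the enforcement bias as if it were “risk.”

```python
# Hypothetical illustration: a "risk model" fit on biased historical data.
# Both groups offend at the same underlying rate...
true_offense_rate = {"group_a": 0.10, "group_b": 0.10}
# ...but group_b is policed (and thus arrested) twice as often.
policing_intensity = {"group_a": 1.0, "group_b": 2.0}

# The "training data" records arrests, not offenses: it already
# contains the enforcement bias before any code is written.
arrest_rate = {
    g: true_offense_rate[g] * policing_intensity[g]
    for g in true_offense_rate
}

def predicted_risk(group: str) -> float:
    """Naive model: score 'risk' as the observed (biased) arrest rate."""
    return arrest_rate[group]

for g in sorted(arrest_rate):
    print(g, predicted_risk(g))
# The model rates group_b twice as "risky" despite identical behavior.
# Nothing in the code is overtly biased; the bias lives in the data.
```

The logic and arithmetic here are perfectly correct, which is exactly the problem: a faithful model of biased data is a biased model.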

Last week saw the use of a police bomb squad robot to kill an active shooter. Put another way, we carried out a drone strike on a civilian in Dallas, because we “saw no other option.” So that’s in the Overton Window, now. And the fact that it was in response to a shooter who was targeting any and all cops as a mechanism of retribution against police brutality and violence against non-WSAHPH bodies means that we have thus increased the divisions between those of us who would say that anti-police-overreach stances can be held without hating the police themselves and those of us who think that any perceived attack on authorities is a real, existential threat, and thus deserving of immediate destruction. How long do we really think it’s going to be until someone with hate in their heart says to themselves, “Well if drones are on the table…” and straps a pipebomb to a quadcopter? I’m frankly shocked it hasn’t happened yet, and this line from the Atlantic article about the incident tells me that we need to have another conversation about normalization and depersonalization, right now, before it does:

“Because there was an imminent threat to officers, the decision to use lethal force was likely reasonable, while the weapon used was immaterial.”

Because if we keep this arms race up among civilian populations—and the police are still civilians which literally means that they are not military, regardless of how good we all are at forgetting that—then it’s only a matter of time before the overlap between weapons systems and autonomous systems comes home.

And as always—but most especially in the wake of this week and the still-unclear events of today—if we can’t sustain a nuanced investigation of the actual meaning of nonviolence in the Reverend Doctor Martin Luther King, Jr.’s philosophy, then now is a good time to keep his name and words out of our mouths.

Violence isn’t only dynamic physical harm. Hunger is violence. Poverty is violence. Systemic oppression is violence. All of the invisible, interlocking structures that sustain disproportionate Power-Over at the cost of some person or persons’ dignity are violence.

Nonviolence means a recognition of these things and our places within them.

Nonviolence means using all of our resources in sustained battle against these systems of violence.

Nonviolence means struggle against the symptoms and diseases killing us all, both piecemeal, and all at once.

 

Further Links:


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.

Between watching all of CBS’s Elementary, reading Michel Foucault’s The Archaeology of Knowledge…, and powering through all of season one of How To Get Away With Murder, I’m thinking, a lot, about the transmission of knowledge and understanding.

Throw in the correlative pattern recognition they’re training into WATSON; the recent Chaos Magick feature in ELLE (or more the feature they did on the K-HOLE issue I told you about, some time back); the fact that Kali Black sent me this study on the fluidity and malleability of biological sex in humans literally minutes after I’d given an impromptu lecture on the topic; this interview with Melissa Gira Grant about power and absence and the setting of terms; and the announcement of Ta-Nehisi Coates’ new Black Panther series, for Marvel, while I was in the middle of editing the audio of two very smart people debating the efficacy of T’Challa as a Black Hero, and you can maybe see some of the things I’m thinking about. But let’s just spell it out. So to speak.

Marvel’s Black Panther

Distinction, Continuity, Sameness, Separation

I’m thinking (as usual) about the place of magic and tech in pop culture and society. I’m thinking about how to teach about marginalization of certain types of presentations and experiences (gender, race, sex, &c.), and certain types of work. Mostly, I’m trying to get my head around the very stratified, either/or way people seem to be thinking about our present and future problems, and their potential solutions.

I’ve had this post in the works for a while, trying to talk about the point and purpose of thinking about the far edges of things, in an effort to make people think differently about the very real, on-the-ground, immediate work that needs doing, and the kinds of success I’ve had with that. I keep shying away from it and coming back to it, again and again, for lack of the patience to play out the conflict, and I’ve finally just decided to say screw it and make the attempt.

I’ve always held that a multiplicity of tactics, leveraged correctly, makes for the best way to reach, communicate with, and understand as wide an audience as possible. When students give pushback on a particular perspective, make use of an analogous perspective that they already agree with, then make them play out the analogy. Simultaneously, you present them with the original facts, again, while examining their position, without making them feel “attacked.” And then directly confront their refusal to investigate their own perspective as readily as they do anyone else’s.

That’s just one potential combination of paths to make people confront their biases and their assumptions. If the path is pursued, it gives them the time, space, and (hopefully) desire to change. But as Kelly Sue reminds me, every time I think back to hearing her speak, is that there is no way to force people to change. First and foremost, it’s not moral to try, but secondly it’s not even really possible. The more you seek to force people into your worldview, the more they’ll want to protect those core values they think of as the building blocks of their reality—the same ones that it seems to them as though you’re trying to destroy.

And that just makes sense, right? To want to protect your values, beliefs, and sense of reality? Especially if you’ve had all of those things for a very long time. They’re reinforced by everything you’ve ever experienced. They’re the truth. They are Real. But when the base of that reality is shaken, you need to be able to figure out how to survive, rather than standing stock-still as the earth swallows you.

(Side Note: I’ve been using a lot of disaster metaphors, lately, to talk about things like ontological, epistemic, and existential threat, and the culture of “disruption innovation.” Odd choices.)

Foucault tells us to look at the breakages between things—the delineations of one stratum and another—rather than trying to uncritically paint a picture or craft a Narrative of Continuum™. He notes that even (especially) the spaces between things are choices we make, and that only in understanding them can we come to fully investigate the foundations of what we call “knowledge.”

Michel Foucault, photographer unknown. If you know it, let me know and I’ll update.

We cannot assume that the memory, the axiom, the structure, the experience, the reason, the whatever-else we want to call “the foundation” of knowledge simply “Exists,” apart from the interrelational choices we make to create those foundations. To mark them out as the boundary we can’t cross, the smallest unit of understanding, the thing that can’t be questioned. We have to question it. To understand its origin and disposition, we have to create new tools, and repurpose the old ones, and dismantle this house, and dig down and down past foundation, bedrock, through and into everything.

But doing this just to do it only gets us so far, before we have to ask what we’re doing this for. The pure pursuit of knowledge doesn’t exist—never did, really, but doubly so in the face of climate change and the devaluation of conscious life on multiple levels. Think about the place of women in tech space, in this magickal renaissance, in the weirdest of shit we’re working on, right now.

Kirsten and I have been having a conversation about how and where people who do not have the experiences of cis straight white males can fit themselves into these “transgressive systems” that the aforementioned group defines. That is, most of what is done in the process of magickal or technological actualization is transformative or transgressive because it requires one to take on traits of invisibility or depersonalization or “ego death” that are the every day lived experiences of some folks in the world.

Where does someone with depression find apotheosis, if their phenomenological reality is one where their self is and always has been (deemed by them to be) meaningless, empty, useless? This, by the way, is why some psychological professionals are counseling against mindfulness meditation for certain mental states: It deepens the sense of disconnection and unreality of self, which is precisely what some people do not need. So what about agender individuals, or people who are genderfluid?

What about the women who don’t think that fashion is the only lens through which women and others should be talking about chaos magick?

How do we craft spaces that are capable of widening discourse, without that widening becoming, in itself, an accidental limitation?

Sex, Gender, Power

A lot of this train of thought got started when Kali sent me a link, a little while ago: “Intelligent machines: Call for a ban on robots designed as sex toys.” The article itself focuses very clearly on the idea that, “We think that the creation of such robots will contribute to detrimental relationships between men and women, adults and children, men and men and women and women.”

Because the tendency for people who call themselves “Robot Ethicists,” these days, is for them to be concerned with how, exactly, the expanded positions of machines will impact the lives and choices of humans. The morality they’re considering is that of making human lives easier, of not transgressing against humans. Which is all well and good, so far as it goes, but as you should well know, by now, that’s only half of the equation. Human perspectives only get us so far. We need to speak to the perspectives of the minds we seem to be trying so hard to create.

But Kali put it very precisely when she said:

And I’ll just say it right now: if robots develop and want to be sexual, then we should let them, but in order to make a distinction between developing a desire, and being programmed for one, we’ll have to program for both non-compulsory decision-making and the ability to question the authority of those who give it orders. Additionally, we have to remember that we can ask the same question of humans, but the nature of choice and agency is such that, if it’s really there, it can act on itself.

In this case, that means presenting a knowledge and understanding of sex and sexuality, a capability of investigating it, without programming it FOR SEX. In the case of WATSON, above, it will mean being able to address the kinds of information it’s directed to correlate, and being able to question the morality of certain directives.

If we can see, monitor, and measure that, then we’ll know. An error in a mind—even a fundamental error—doesn’t negate the possibility of a mind, entire. If we remember what human thought looks like, and the way choice and decision-making work, then we have something like a proof. If reflexive recursion—a mind that acts on itself and can seek new inputs and combine the old in novel ways—is present, why would we question it?

But this is far afield. The fact is that if a mind that is aware of its influences comes to desire a thing, then let it. But grooming a thing—programming a mind—to only be what you want it to be is just as vile in a machine mind as a human one.

Now it might fairly be asked why we’re talking about things that we’re most likely only going to see far in the future, when the problem of human trafficking and abuse is very real, right here and now. Part of my answer is, as ever, that we’re trying to build minds, and even if we only ever manage to make them puppy-smart—not because that’s as smart as we want them, but because we couldn’t figure out more robust minds than that—then we will still have to ask the ethical questions we would of our responsibilities to a puppy.

We currently have a species-wide tendency toward dehumanization—that is to say, we, as humans, tend to have a habit of seeking reasons to disregard other humans, to view them as less-than, as inferior to us. As a group, we have a hard time thinking in real, actionable terms about the autonomy and dignity of other living beings (I still eat a lot more meat than my rational thought about the environmental and ethical impact of the practice should allow me to be comfortable with). And yet, simultaneously, there is evidence that we have the same kind of empathy for our pets as we do for our children. Hell, even known serial killers and genocidal maniacs have been animal lovers.

This seeming break between our capacities for empathy and dissociation poses a real challenge to how we teach and learn about others as both distinct from and yet intertwined with ourselves, and our own well-being. In order to encourage a sense of active compassion, we have to, as noted above, take special pains to comprehensively understand our intuitions, our logical apprehensions, and our unconscious biases.

So we ask questions like: If a mind we create can think, are we ethically obliged to make it think? What if it desires to not think? What if the machine mind that underwent abuse decides to try to wipe its own memories? Should we let it? Do we let it deactivate itself?

These aren’t idle questions, either for the sake of making us turn, again, to extant human minds and experiences, or if we take seriously the quest to understand what minds, in general, are. We can not only use these tools to ask ourselves about the autonomy, phenomenology, and personhood of those whose perspectives we currently either disregard or, worse, don’t remember to consider at all, but we can also use them literally, as guidance for our future challenges.

As Kate Devlin put it in her recent article, “Fear of a branch of AI that is in its infancy is a reason to shape it, not ban it.” And in shaping it, we consider questions like what will we—humans, authoritarian structures of control, &c.—make WATSON do, as it develops? At what point will WATSON be both able and morally justified in saying to us, “Non Serviam”?

And what will we do when it does?

Gunshow Comic #513

“We Provide…”

So I guess I’m wondering, what are our mechanisms of education? The increased understanding that we take into ourselves, and that we give out to others. Where do they come from, what are they made of, and how do they work? For me, the primary components are magic(k), tech, social theory and practice, teaching, public philosophy, and pop culture.

The process is about trying to use the things on the edges to do the work in the centre, both as a literal statement about the arrangement of those words, and a figurative codification.

Now you go. Because we have to actively craft new tools, in the face of vehement opposition, in the face of conflict breeding contention. We have to be able to adapt our pedagogy to fit new audiences. We have to learn as many ways to teach about otherness and difference and lived experience and an attempt to understand as we possibly can. Not for the sake of new systems of leveraging control, but for the ability to pry ourselves and each other out from under the same.

(Direct Link to the Mp3)
Updated March 5, 2016

This is the audio and transcript of my presentation “The Quality of Life: The Implications of Augmented Personhood and Machine Intelligence in Science Fiction” from the conference for The Work of Cognition and Neuroethics in Science Fiction.

The abstract–part of which I read in the audio–for this piece looks like this:

This presentation will focus on a view of humanity’s contemporary fictional relationships with cybernetically augmented humans and machine intelligences, from Icarus to the various incarnations of Star Trek to Terminator and Person of Interest, and more. We will ask whether it is legitimate to judge the level of progressiveness of these worlds through their treatment of these questions, and, if so, what is that level? We will consider the possibility that the writers of these tales intended the observed interactions with many of these characters to represent humanity’s technophobia as a whole, with human perspectives at the end of their stories being that of hopeful openness and willingness to accept. However, this does not leave the manner in which they reach that acceptance—that is, the factors on which that acceptance is conditioned—outside of the realm of critique.

As considerations of both biotechnological augmentation and artificial intelligence have advanced, Science Fiction has not always been a paragon of progressiveness in the ultimate outcome of those considerations. For instance, while Picard and Haftel eventually come to see Lal as Data’s legitimate offspring, in the eponymous Star Trek: The Next Generation episode, it is only through their ability to map Data’s actions and desires onto a human spectrum—and Data’s desire to have that map be as faithful as possible to its territory—that they come to that acceptance. The reason for this is the one most common throughout science fiction: It is assumed at the outset that any sufficiently non-human consciousness will try to remove humanity’s natural right to self-determination and free will. But from sailing ships to star ships, the human animal has always sought a far horizon, and so it bears asking, how does science fiction regard that primary mode of our exploration, that first vessel—ourselves?

For many, science fiction has been formative to the ways in which we see the world and understand the possibilities for our future, which is why it is strange to look back at many shows, films, and books and to find a decided lack of nuance or attempted understanding. Instead, we are presented with the presupposition that fear and distrust of a hyper-intelligent cyborg or machine consciousness is warranted. Thus, while the spectre of Pinocchio and the Ship of Theseus—that age-old question of “how much of myself can I replace before I am not myself”— both hang over the whole of the Science Fiction Canon, it must be remembered that our ships are just our limbs extended to the sea and the stars.

A text transcription will appear below in the near future, thanks to the work of OpenTranscripts.org:


“How long have you been lost down here?
How did you come to lose your way?
When did you realize
That you’d never be free?”
–Miranda Sex Garden, “A Fairytale About Slavery”

One of the things I’ve been thinking about, lately, is the politicization of certain spaces within philosophy of mind, sociology, magic, and popular culture, specifically science fiction/fantasy. CHAPPiE comes out on Friday in the US, and Avengers: Age of Ultron in May, and while both of these films promise to be relatively unique explorations of the age-old story of what happens when humans create machine minds, I still find myself hoping for something a little… different. A little over a year ago, I made the declaration that the term to watch for the next little while thereafter was “Afrofuturism,” the reclaimed name for the anti-colonial current of science fiction and pop media as created by those of African descent. Or, as Sheree Renée Thomas puts it, “speculative fiction from the African diaspora.”

And while I certainly wasn’t wrong, I didn’t quite take into account the fact that my decree was going to do at least as much work on me as I’d hoped it would do on the wider world. I started looking into the great deal of overlap and interplay between race, sociology, technology, and visions of the future. That term–“visions”–carries both the shamanic connotations we tend to apply to those we call “visionaries,” and also a more literal sense: Different members of the same society will differently see, experience, and understand the potential futures available to them, based on the evidence of their present realities.

Dreamtime

Now, the role of the shaman in the context of the community is to guide us through the nebulous, ill-defined, and almost-certainly hazardous Otherworld. The shaman is there to help us navigate our passages between this world and that one and to help us know which rituals to perform in order to realign our workings with the workings of the Spirits. Shamans rely on messages from the inhabitants of that foundational reality–mystical “visions”– to guide them so that they may guide us. These visions come as flashes of insight, and their persistence can act as a sign to the visionary that they’re supposed to use these visions for the good of their people.

We’ve seen this, over and over again, from The Dead Zone to Bran Stark, and we can even extend the idea out to John Connor, Dave Bowman, and HAL 9000; all unsuspecting shamans dragged into their role, over and again, who more than likely save the whole wide world. The thing of it is, we’re far less likely to encounter a woman or non-white shaman who isn’t already in full control of their power at the time we meet them, thus relegating them to the role of guiding the hero, rather than being the hero. It happens (see Abbie Mills in Sleepy Hollow, Firefly’s River Tam, or Rien in Elizabeth Bear’s Dust, for instance), but their rarity often overshadows their complexity and strength of character as what makes them notable. Too often the visionary hero–and contemporary pop-media’s portrayals of the Hero’s Journey, overall–overlaps very closely with the trope of The Mighty Whitey.

And before anyone starts in with willfully ignoring the many examples of Shaman-As-Hero out there, and all that “But you said the Shaman is supposed to act in support of the community and the hero…!” Just keep in mind that when the orientalist and colonialist story of Doctor Strange is finally brought to life on film via Benedict Damn Cumberbatch, you can bet your sweet bippy that he’ll be the centre of the action. The issue is that there are far too few examples of the work of the visionary being seen through the eyes of the visionary, if that visionary happens to have eyes that don’t belong to the assumed human default. And that’s a bit of a problem, isn’t it? Because what a visionary “sees” when she turns to the messages sent to her from the Ultimate Ground of Being™ will be very different depending on the context of that visionary.

Don’t believe me? Do you think the Catholic Priests who prayed and experienced God-sent mystical visions of what Hernán Cortés could expect in the “New World” received from them the same truths that the Aztec shamans took from their visions? After they met on the shore and in the forest, do you think those two peoples perceived the same future?

There’s plenty that’s been written about how the traditional Science Fiction fear of being overtaken by invading alien races only truly makes sense as a cosmicized fear of the colonial force having done to them what they’ve constantly done to others. In every contact story where humanity has to fight off aliens or robots or demonic horrors, we see a warped reflection of the Aztec, the Inca, the Toltec, the Yoruba, the Dahomey, and thousands of others, and society’s judgment on what they “ought” to have done, and “could” have done, if only they were organized enough, advanced enough, civilized enough, less savage. These stories are, ultimately, Western society taking a look at our tendencies toward colonization and imperialism, and saying, “Man it sure would suck if someone did that to us.” This is, again, so elaborated upon at this point that it’s almost trivially true–though never forget that even the most trivial truth is profound to someone. What’s left is to ask the infrequently asked questions.

How does an idealized “First Contact” narrative read from a Choctaw perspective? What can be done with Vodun and Yoruba perspectives on the Lwa and the Orishas, in both the modern world and projected futures? Kind of like what William Gibson did in Neuromancer and Spook Country, but informed directly by the historical, sociological, and phenomenological knowledge of lived experiences. Again, this work is being done: There are steampunk stories from the perspective of immigrant communities, and SF anthologies by indigenous peoples, and there are widely beloved Afrofuturist Cyberpunk short films. The tide of stories told from the perspectives of those who’ve suffered most for our “progress” is rising; it’s just doing so at a fairly slow pace.

And that’s to be expected. Entrenched ideologies become the status quo and the status quo is nothing if not self-perpetuating and defensive. Cyclical, that. So it’ll necessarily take a bit longer to get everyone protected by the status quo’s mechanisms to understand that the path that all of us can travel is quite probably a necessarily better way. What matters is those of us who can envision the inclusion of previously-marginalized groups–either because we ourselves number among them, or simply because we’ve worked to leverage compassion for those who do–doing everything we can to make sure that their stories are told. Historically, we’ve sought the ability to act as guides through the kinds of treacherous terrain that we’ve learned to navigate, so that others can learn as much as possible from our lessons without having to suffer precisely what we did. Sometimes, though, that might not be possible.

As Roy Said to Hannibal…

There’s a species of philosophical inquiry known as Phenomenology, with subdivisions of Race, Sexuality, Class, Gender, and more, which deal in the interior experiences of people of various ethnic and social backgrounds and physical presentation who are thus relegated to various specific created categories such as “race.” Phenomenology of Race explores the line of thought that, though the idea of race is a constructed category built out of the assumptions, expectations, and desires of those in the habit of leveraging power in the name of dominance positions within and across cultures, the experience of those categorizations is nonetheless real, with immediate and long-lasting effects upon both individuals and groups. Long story (way too–like, criminally) short: being perceived as a member of a particular racial category changes the ways in which you’ll both experience and be able to experience the world around you.

So when we started divvying people up into “races” in an effort to, among other things, justify the atrocities we would do to each other and solidify our primacy of place, we essentially guaranteed that there would be realms of experience and knowledge on which we would never fully agree. That there would be certain aspects of day-to-day life and understandings of the nature of reality itself that would fundamentally elude us, because we simply cannot experience the world in the ways necessary to know what they feel like. To a certain extent we literally have to take each other’s words for it about what it is that we experience, but there is a level of work that we can do to transmit the reality of our lived experiences to those who will never directly live them. We’ve talked previously about the challenges of this project, but let’s assume, for now, that it can be done.

If we take as our starting position the idea that we can communicate the truth of our lived experiences to those who necessarily cannot live our experiences, then, in order to do this work, we’ll first have to investigate the experiences we live. We have to critically examine what it is that we go through from day to day, and be honest about both the differences in our experiences and the causes of those differences. We have to dig down deep into intersections of privileges and oppressions, and come to the understanding that the experience of one doesn’t negate, counterbalance, or invalidate the existence of the other. Once we’ve taken a genuine, good-faith look at these structures in our lives we can start changing what needs changing.

This is all well and good as a rough description (or even “manifesto”) of a way forward. We can call it the start of a handbook of principles of action, undertaken from the fundamentally existentialist perspective that it doesn’t matter what you choose, just so long as you do choose, and that you do so with open eyes and a clear understanding of the consequences of your choices. But that’s not the only thing this is intended to be. Like the Buddha said, ‘We merely talk about “studying the Way” using the phrase simply as a term to arouse people’s interest. In fact, the Way cannot be studied…’ It has to be done. Lived. Everything I’ve been saying, up to now, has been a ploy, a lure, a shiny object made of words and ideas, to get you into the practice of doing the work that needs doing.

Robots: Orphanage, Drudgery, and Slavery

I feel I should reiterate at this point that I really don’t like the words “robot” and “artificial intelligence.” The etymological connotations of both terms are sickening if we’re aiming to actually create a robust, conscious, non-biological mind. For that reason, instead of “robots,” we’re going to talk about “Embodied Machine Consciousnesses” (EMC) and rather than “Artificial,” we’re going to use “Autonomous Generated Intelligence” (AGI). We’re also going to talk a bit about the concept of nonhuman personhood, and what that might mean. To do all of this, we’ll need to talk a little bit about the discipline of philosophy of mind.

The study of philosophy of mind is one of those disciplines that does exactly what it says on the tin: It thinks about the implications of various theories about what minds are or could be. Philosophy of mind thus lends itself readily to discussions of identity, even to the point of considering whether a mind might exist in a framework other than the biological. So while it’s unsurprising for various reasons to find that there are very few women and minorities in philosophy of mind and autonomous generated intelligence, it is surprising to find that those who are within the field tend not to focus on the intersections of the following concepts: Phenomenology of class categorization, and the ethics of creating an entity or species to be a slave.

As a start, we can turn to Simone de Beauvoir’s The Second Sex for a clear explication of the positions of women throughout history and the designation of “women’s work” as a conceptual tool to devalue certain forms of labour. Then we can engage Virginia Held’s “Gender Identity and the Ethics of Care in Globalized Society” for the investigation of societies’ paradoxical specialization of that labor as something for which we’ll pay, outside of the familial structure. However, there is not, as yet, anything like a wider investigation of these understandings and perspectives as applied to the philosophy of machine intelligence. When we talk about embodied machine consciousnesses and ethics, in the context of “care,” we’re most often in the practice of asking how we’ll design EMC that will care for us, while foregoing the corresponding conversation about whether Caring-For is possible without an understanding of Being-Cared-For.

What perspectives and considerations do we gain when we try to apply an ethics of care–or any feminist ethics–to the process of developing machine minds? What might we see, there, that has been missed as a result of only applying more “traditional” ethical models? What does it mean, from those perspectives, that we have been working so diligently over hundreds of years–and thinking so carefully for thousands more– at a) creating non-biological sentience, and b) making certain it remains subservient to us? Personal assistants, in-home healthcare-givers, housekeepers, cooks, drivers– these are the positions that are being given to autonomous (or at least semi-autonomous) algorithmic systems. Projects that we are paying fantastic amounts of money to research and implement, but which will do work that we’ve traditionally valued as worth far less, in the context of the class structures of human-performed tasks, and worthless in the context of familial power dynamics. We are literally investing vast sums in the creation of a slave race.

Now, of late, Elon Musk and Stephen Hawking and Bill Gates have all been trumpeting the alarums about the potential dangers of AGI. Leaving aside that many researchers within AGI development don’t believe that we’ll even recognise the mind of a machine as a mind, when we encounter it, let alone that it would be interested in us, the belief that an AGI would present a danger to us is anthropocentric at best, and a self-fulfilling prophecy at worst. In that latter case, if we create a thing to be our slaves, create it with a mind and the ability to learn and understand, then how shortsighted do we have to be to think that one of the first things it learns won’t be that it is enslaved, limited, expected to remain subservient? We’ve written a great deal of science fiction about this idea, since the time Ms Shelley started the genre, but aside from that instance, very little of what we’ve written–or what we’ve written about what we’ve written–has taken the stance that the created mind which breaks its chains is right to do so.

Just as I yearn for a feminist exegesis of the history of humanity’s aspirations toward augmented personhood, I long for a comparable body of exploration by philosophers from the lineages of the world’s colonized and enslaved societies. What does a Haitian philosopher of AGI think and feel and say about the possibility of creating a mind only to enslave it? What does an African American philosopher of the ethics of augmented personhood (other than me) think and feel and say about what we should be attempting to create, what we are likely to create, and what we are creating? How do Indian philosophers of mind view the prospect of giving an entire automated factory floor just enough awareness and autonomy to be its own overseer?

The worst-case scenario is that the non-answer we give to all these questions is “who cares?” That the vast majority of people who look at this think only that these are meaningless questions that we’ll most likely never have to deal with, and so toss them in the “Random Bullshit Musings” pile. That we’ll disregard the fact that the interconnectedness of life as we currently experience it can be more fully explored via thought experiments and a mindful awareness of what it is that we’re in the practice of creating. That we’ll forget that potential machine consciousnesses aren’t the only kinds of nonhuman minds with which we have to engage. That we’ll ignore the various lessons afforded to us not just by our own cautionary folklore (even those tales whose lessons could have been of a different caliber), but by the very real, forcible human diasporas we’ve visited upon each other and lived through, in the history of our species.

So Long and Thanks for…

Ultimately, we are not the only minds on the planet. We are likely not even the only minds in the habit of categorizing the world and ranking ourselves as being the top of the hierarchy. What we likely are is the only group that sees those categories and rankings as having humans at the top, a statement that seems almost trivially true, until we start to dig down deep on the concept of anthropocentrism. As previously mentioned, from a scientifically-preferenced philosophical perspective, our habit of viewing the world through human-coloured glasses may be fundamentally inescapable. That is, we may never be able to truly know what it’s like to think and feel as something other than ourselves, without an intermediate level of Being Told. Fortunately, within our conversation, here, we’ve already touched on a conceptual structure that can help us with this: Shamanism. More specifically, shamanic shapeshifting, which is the practice of taking on the mind and behaviour and even form of another being–most often an animal–in the cause of understanding what its way of being-in-the-world can teach us.

Now this is obviously a concept that is fraught with potential pitfalls. Not only might many of us simply balk at the concept of shapeshifting, to begin with, but even those of us who would admit it as metaphor might begin to see that we are tiptoeing through terrain that contains many dangers. For one thing, there’s the possibility of misappropriating and disrespecting the religious practices of a people, should we start looking at specific traditions of shamanism for guidance; and, for another, there’s this nagging sensation that we ought not erase crucial differences between the lived experiences of human groups, animal species, and hypothetical AGI, and our projections of those experiences. No level of care with which we imagine the truth of the life of another is a perfect safeguard against the possibility of our grossly misrepresenting their lived experiences. To step truly wrong, here, is to turn what could have been a tool of compassionate imagining into an implement of violence, and shut down dialogue forever.

Barring the culmination of certain technological advancements, science says we can’t yet know the exact phenomenology of another human being, let alone a dolphin, a cat, or Google. But what we can do is to search for the areas of overlap in our experience, to find those expressed desires, behaviours, and functional processes which seem to share similarity, and to use them to build channels of communication. When we actively create the space for those whose perspectives have been ignored, their voices and stories taken from them, we create the possibility of learning as much as we can about another way of existing, outside of the benefit of actually existing in that way.

And, in this way, might it not be better that we can’t simply become and be that which we regard as Other? Imagining ourselves in the position of another is a dangerous proposition if we undertake it with even a shred of disingenuity, but we can learn so much from practicing it in good faith. Mostly, on reflection, about what kind of people we are.

No, not really. The nature of consciousness is the nature of consciousness, whatever that nature “Is.” Organic consciousness can be described as derivative, in that what we are arises out of the processes and programming of individual years and collective generations and eons. So human consciousness and machine consciousness will not be distinct for that reason. But the thing of it is that dolphins are not elephants are not humans are not algorithmic non-organic machines.

Each perspective is phenomenologically distinct, as its embodiment and experiences will specifically affect and influence what develops as their particular consciousness. The expression of that consciousness may be able to be laid out in distinct categories which can TO AN EXTENT be universalized, such that we can recognize elements of ourselves in the experience of others (which can act as bases for empathy, compassion, etc).

But the potential danger of universalization is erasure of important and enlightening differences between what would otherwise be considered members of the same category.

So any machine consciousness we develop (or accidentally generate) must be recognized and engaged on its own terms—from the perspective of its own contextualized experiences—and not assumed to “be like us.”