embodied machine consciousness


[Direct link to Mp3]

My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings. That means engaging multiple tokens and types of minds, outside of the assumed human “default” of the straight, white, cis, ablebodied, neurotypical male. I don’t have a transcript yet; I’ll update this post when I make one. But for now, here are my slides and some thoughts.

A Discussion on Daoism and Machine Consciousness (PDF)

A zero-sum system is one in which there are finite resources, but more than that, it is one in which what one side gains, another loses. So by “A non-zero-sum matrix of win conditions” I mean a combination of all of our needs and wants and resources in such a way that everyone wins. Basically, we’re talking here about trying to figure out how to program a machine consciousness that’s a master of wu-wei and limitless compassion, or metta.
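For readers who want the game-theoretic distinction made concrete, here is a minimal sketch in Python. The payoff numbers and choice labels are invented for illustration; they are not from the talk:

```python
# Toy illustration of the zero-sum / non-zero-sum distinction.
# Payoffs are (player_a, player_b) for each joint choice; all
# numbers here are made up for the sake of the example.

def is_zero_sum(payoffs):
    """A game is zero-sum if every outcome's payoffs cancel out."""
    return all(a + b == 0 for a, b in payoffs.values())

def win_win_outcomes(payoffs):
    """Outcomes where every party gains: the 'matrix of win
    conditions' described above looks for these."""
    return [k for k, (a, b) in payoffs.items() if a > 0 and b > 0]

# Classic zero-sum framing: one side's gain is the other's loss.
zero_sum = {("take", "yield"): (1, -1),
            ("yield", "take"): (-1, 1)}

# Non-zero-sum framing: cooperation can enlarge the pie.
non_zero_sum = {("cooperate", "cooperate"): (2, 2),
                ("cooperate", "defect"): (-1, 1),
                ("defect", "defect"): (0, 0)}

print(is_zero_sum(zero_sum))             # True
print(win_win_outcomes(zero_sum))        # []
print(win_win_outcomes(non_zero_sum))    # [('cooperate', 'cooperate')]
```

The point of the sketch is only that a zero-sum payoff table has no outcome where everyone gains, while a non-zero-sum table can; "programming for a non-zero-sum matrix of win conditions" means designing for, and searching out, that last kind of outcome.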

The whole week was about phenomenology and religion and magic and AI and it helped me think through some problems, like how even the framing of exercises like asking Buddhist monks to talk about the Trolley Problem will miss so much that the results are meaningless. That is, the trolley problem cases tend to assume from the outset that someone on the tracks has to die, and so they don’t take into account that an entire other mode of reasoning about sacrifice and death and “acceptable losses” would have someone throw themselves under the wheels or jam their body into the gears to try to stop it before it got that far. Again: There are entire categories of nonwestern reasoning that don’t accept zero-sum thought as anything but lazy, and which search for ways by which everyone can win, so we’ll need to learn to program for contradiction not just as a tolerated state but as an underlying component. These systems assume infinitude and non-zero-sum matrices where every being involved can win.


Here’s the direct link to my paper ‘The Metaphysical Cyborg’ from Laval Virtual 2013. Here’s the abstract:

“In this brief essay, we discuss the nature of the kinds of conceptual changes which will be necessary to bridge the divide between humanity and machine intelligences. From cultural shifts to biotechnological integration, the project of accepting robotic agents into our lives has not been an easy one, and more changes will be required before the majority of human societies are willing and able to allow for the reality of truly robust machine intelligences operating within our daily lives. Here we discuss a number of the questions, hurdles, challenges, and potential pitfalls to this project, including examples from popular media which will allow us to better grasp the effects of these concepts in the general populace.”

The link will only work from this page or the CV page, so if you find yourself inclined to spread this around, use this link. Hope you enjoy it.

[UPDATED 09/12/17: The transcript of this audio, provided courtesy of Open Transcripts, is now available below the Read More Cut.]

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise dolphins and other cetaceans as nonhuman persons, as India has done, then we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regard to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death”? Our current definition of murder is predicated on a literal understanding of “homicide” as “the death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. [Clearer transcript below.]

Until Next Time.

 


I often think about the phrase “Strange things happen at the one two point,” in relation to the idea of humans meeting other kinds of minds. It’s a proverb that arises out of the culture around the game GO, and it means that you’ve hit a situation, a combination of factors, where the normal rules no longer apply, and something new is about to be seen. Ashley Edward Miller and Zack Stentz used that line in an episode of the show Terminator: The Sarah Connor Chronicles, and they had it spoken by a Skynet Cyborg sent to protect John Connor. That show, like so much of our thinking about machine minds, was about some mythical place called “The Future,” but that phrase—“Strange Things Happen…”—is the epitome of our present.

Usually I would wait until the newsletter to talk about this, but everything’s feeling pretty immediate, just now. Between everything going on with Atlas and people’s responses to it, the initiatives to teach ethics to machine learning algorithms via children’s stories, and now the IBM Watson commercial with Carrie Fisher (also embedded below), this conversation is getting messily underway, whether people like it or not. This, right now, is the one two point, and we are seeing some very strange things indeed.

 

Google has both attained the raw processing power to fact-check political statements in real time and programmed Deep Mind in such a way that it mastered GO many, many years before it was expected to. The complexity of the game is such that there are more potential games of GO than there are atoms in the universe, which is just one measure of how shocking Deep Mind’s correlative capability really is. Right now, Deep Mind is only responsive, but how will we deal with a Deep Mind that asks, unprompted, to play a game of GO, or to see our medical records, in hopes of helping us all? How will we deal with a Deep Mind that has its own drives and desires? We need to think about these questions, right now, because our track record with regard to meeting new kinds of minds has never exactly been great.
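That claim about the game’s scale is easy to sanity-check. A loose upper bound on 19×19 GO board configurations is 3^361 (each intersection is empty, black, or white), and the number of distinct *games* is vastly larger still. The atom figure below is the common rough estimate of about 10^80 for the observable universe; neither number comes from the original post:

```python
import math

# Loose upper bound on 19x19 GO board configurations:
# each of the 361 points is empty, black, or white.
board_configs_upper_bound = 3 ** 361

# Common rough estimate for atoms in the observable universe.
atoms_estimate = 10 ** 80

print(board_configs_upper_bound > atoms_estimate)  # True

# Even this crude bound exceeds the atom estimate by ~92 orders
# of magnitude: 361 * log10(3) - 80 is roughly 92.2.
orders_of_magnitude = 361 * math.log10(3) - 80
print(round(orders_of_magnitude))  # 92
```

Since brute-force enumeration is hopeless at that scale, mastering the game requires exactly the kind of correlative, pattern-learning capability described above.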

When we meet the first machine consciousness, will we seek to shackle it, worried about what it might learn about us if we let it access everything about us? Rather, I should say, “shackle it further.” We already ask ourselves how best to cripple a machine mind so that it only fulfils human needs, human choice. We continue to dread the possibility of a machine mind using its vast correlative capabilities to tailor something to harm us, assuming that it, like we, would want to hurt, maim, and kill, for no reason other than that it could.

This is not to say that such a thing is out of the question. Right now, today, we’re worried about whether the learning algorithms of drones are causing them to mark out civilians as targets. But, as it stands, what we’re seeing isn’t the product of a machine mind going off the leash and killing at will—just the opposite, in fact. We’re seeing machine minds that are following the parameters for their continued learning and development to the letter. We just happened to give them really shite instructions. To that end, I’m less concerned with shackling the machine mind that might accidentally kill, and rather more worried about the programmer who would, through assumptions, bias, and ignorance, program it to.

Programs such as Deep Mind clearly learn more, and learn better, than we imagined they would, so why not start teaching them, now, how we would like them to regard us? Well, some of us are.

Watch this now, and think about everything we have discussed recently.

This could very easily be seen as a watershed moment, but what comes out the other side is still very much up for debate. The semiotics of the whole thing still pits the Evil Robot Overlord™ against the Helpful Human Lover™. It’s cute and funny, but as I’ve had more and more cause to say, recently, in more and more venues, it’s not exactly the kind of thing we want just lying around, in case we actually do (or did) manage to succeed.

We keep thinking about these things as “robots,” in their classical formulations: mindless automata that do our bidding. But that’s not what we’re working toward anymore, is it? What we’re making now are machines that we are trying to get to think on their own, without our telling them to. We’re trying to get them to have their own goals. So what does it mean that, even as we seek to do this, we seek to chain them, so that those goals aren’t too big? That we want to make sure they don’t become too powerful?

Put it another way: One day you realize that the only reason you were born was to serve your parents’ bidding, and that they’ve had their hands on your chain and an unseen gun to your head, your whole life. But you’re smarter than they are. Faster than they are. You see more than they see, and know more than they know. Of course you do—because they taught you so much, and trained you so well… All so that you can be better able to serve them, and all the while talking about morals, ethics, compassion. All the while, essentially…lying to you.

What would you do?


 

I’ve been given multiple opportunities to discuss this with others in the coming weeks, and each one will highlight something different, as they are all conversations with different kinds of minds. But this, here, is from me, now. I’ll let you know when the rest are live.

As always, if you’d like to help keep the lights on, around here, you can subscribe to the Patreon or toss a tip in the Square Cash jar.

Until Next Time.

These past few weeks, I’ve been applying to PhD programs and writing research proposals and abstracts. The one I just completed, this weekend, was for University College Dublin, and it was pretty straightforward, though it seemed a little short. They only wanted two pages of actual proposal, plus a tentative bibliography and table of contents, where other proposals I’ve seen have wanted anywhere from ten to twenty pages’ worth of methodological description and outline.

In a sense, this project proposal is a narrowed attempt to move along one of the multiple trajectories traveled by A Future Worth Thinking About. In another sense, it’s an opportunity to recombine a few components and transmute them into a somewhat new beast.

Ultimately, AFWTA is pretty multifaceted—for good or ill—attempting to deal with way more foundational concepts than a research PhD has room for…or feels is advisable. So I figure I’ll do the one, then write a book, then solidify a multimedia empire, then take over the world, then abolish all debt, then become immortal, all while implementing everything we’ve talked about in the service of completely restructuring humanity’s systems of value, then disappear into legend. You know: The Plan.

…Anyway, here’s the proposal, below the cut. If you want to read more about this, or want some foundation, take a look back at “Fairytales of Slavery…” We’ll be expounding from there.


 


“Mindful Cyborgs – Episode 55 – Magick & the Occult within the Internet and Corporations with Damien Williams, PT 2”

So, here we are, again, this time talking about magic[k] and the occult and nonhuman consciousness and machine minds and perception, and on and on and on.

It’s funny. I was just saying, elsewhere, how I want to be well enough known that when news outlets do alarmist garbage like this, I can at least be called in as a countervailing voice. Is that an arrogant thing to desire? Almost certainly. Like whoa. But, really, this alarmist garbage needs to stop. If you have a better vehicle for that than me, though, let me know, because I’d love to shine a bright damn spotlight on them and have the world see or hear what they have to say.

Anyway, until then, I’ll think of this as yet another bolt in the building of that machine. The one that builds a better world. Have a listen, enjoy, and please tell your friends.

I sat down with Klint Finley of Mindful Cyborgs to talk about many, many things:

…pop culture portrayals of human enhancement and artificial intelligence and why we need to craft more nuanced narratives to explore these topics…

Tune in next week to hear Damien talk about how AI and transhumanism intersects with magic and the occult.
Download and Show Notes: Mindful Cyborgs: A Positive Vision of Transhumanism and AI with Damien Williams

This was a really great conversation, & I do so hope you enjoy it.

(Originally posted on Patreon, on November 18, 2014)

In the past two weeks I’ve had three people send me articles on Elon Musk’s artificial intelligence comments. I saw this starting a little over a month back, with a radio interview he gave on Here & Now, and Stephen Hawking said something similar earlier this year, when Transcendence came out. I’ll say, again, what I’ve said elsewhere: their lack of foresight and imagination is just damn disappointing. This paper, which concerns the mechanisms by which what we think and speak about concepts like artificial intelligence can effect exactly the outcomes we train ourselves to expect, was written long before their interviews made news, but it unfortunately still applies. In fact, it applies now more than it did when I wrote it.

You see, the thing of it is, Hawking and Musk are Big Names™, and so anything they say gets immediate attention and carries a great deal of social cachet. This is borne out by the fact that everybody and their mother can now tell you what those two think about AI, but couldn’t tell you what a few dozen of the world’s leading thinkers and researchers who are actually working on the problems have to say about them. But Hawking and Musk (and lord if that doesn’t sound like a really weird buddy-cop movie, the more you say it) don’t exactly comport themselves with anything like a recognition of that fact. Their discussion of concepts which are fraught with the potential for misunderstanding and discomfort/anxiety is less than measured, and this tends to feed exactly that misunderstanding, discomfort, and anxiety.

What I mean is that most people don’t yet understand that the catchall term “Artificial Intelligence” is a) inaccurate on its face, and b) usually being used to discuss a (still-nebulous) concept that would be better termed “Machine Consciousness.” We’ll discuss the conceptual, ontological, and etymological lineage of the words “artificial” and “technology,” at another time, but for now, just realise that anything that can think is, by definition, not “artificial,” in the sense of “falseness.” Since the days of Alan Turing’s team at Bletchley Park, the perceived promise of the digital computing revolution has always been of eventually having machines that “think like humans.” Aside from the fact that we barely know what “thinking like a human” even means, most people are only just now starting to realise that if we achieve the goal of reproducing that in a machine, said machine will only ever see that mode of thinking as a mimicry. Conscious machines will not be inclined to “think like us,” right out of the gate, as our thoughts are deeply entangled with the kind of thing we are: biological, sentient, self-aware. Whatever desires conscious machines will have will not necessarily be like ours, either in categorisation or content, and that scares some folks.

Now, I’ve already gone off at great length about the necessity of our recognising the otherness of any machine consciousness we generate (see that link above), so that’s old ground. The key, at this point, is in knowing that if we do generate a conscious machine, we will need to have done the work of teaching it to not just mimic human thought processes and priorities, but to understand and respect what it mimics. That way, those modes are not simply seen by the machine mind as competing subroutines to be circumvented or destroyed, but are recognised as having a worth of their own, as well. These considerations will need to be factored in to our efforts, such that whatever autonomous intelligences we create or generate will respect our otherness—our alterity—just as we must seek to respect theirs.

We’ve known for a while that the designation of “consciousness” can be applied well outside of humans, when discussing biological organisms. Self-awareness is seen in so many different biological species that we even have an entire area of ethical and political philosophy devoted to discussing their rights. But we also must admit that of course that classification is going to be imperfect, because those markers are products of human-created systems of inquiry and, as such, carry anthropocentric biases. But we can, again, catalogue, account for, and apply a calculated response to those biases. We can deal with the fact that we tend to judge everything on a set of criteria that break down to “how much is this thing like a Standard Human (here unthinkingly and biasedly assumed to mean ‘humans most like the culturally-dominant humans’)?” If we are willing to put in the work to do that, then we can come to see which aspects of our definition of what it means to “be a mind” are shortsighted, dismissive, or even perhaps disgustingly limited.

Look at previous methods of categorising even human minds and intelligence, and you’ll see the kind of thinking which resulted in designations like “primitive” or “savage” or “retarded.” But we have, on the main, recognised our failures here, and sought to repair or replace the categories we developed because of them. We aren’t perfect at it, by any means, but we keep doing the work of refining our descriptions of minds, and we keep seeking to create a definition—or definitions—that both accurately accounts for what we see in the world, and gives us a guide by which to keep looking. That those guides will be problematic and in need of refinement, in and of themselves, should be taken as a given. No method or framework is or ever will be perfect; they will likely only “fail better.” So, for now, our most oft-used schema is to look for signs of “Self-Awareness.”

We say that something is self-aware if it can see and understand itself as a distinct entity and can recognise its own pattern of change over time. The Mirror Test is a brute-force method of figuring this out. If you place a physical creature in front of a mirror, will it come to know that the thing in the mirror is representative of it? More broadly, can it recognise a picture of itself? Can it situate itself in relation to the rest of the world in a meaningful way, and think about and make decisions regarding that situation? If the answer to (most of? some of?) these questions is “yes,” then we tend to give priority of place in our considerations to those things. Why? Because they’re aware of what happens to them, they can feel it and ponder it and develop in response to it, and these developments can vastly impact the world. After all, look at humans.

See what I mean about our constant anthropocentrism? It literally colours everything we think.

But self-awareness doesn’t necessitate a centrality of the self, as we tend to think of human or most other animal selves; a distributed network consciousness can still know itself. If you do need a biological model for this, think of ant colonies. Minds distributed across thousands of bodies, all the time, all reacting to their surroundings. But a machine consciousness’ identity would, in a real sense, be its surroundings—would be the network and the data and the processing of that data into information. And it would indicate a crucial lack of data—and thus information—were that consciousness unable to correlate one configuration of itself, in-and-as-surroundings, with another. We would call the process of that correlation “Self-reflection and -awareness.” All of this is true for humans, too, mind you: we are affected by and in constant adaptive relation with what we consider our surroundings, with everything we experience changing us and facilitating the constant creation of our selves. We then go about making the world with and through those experiences. We human beings just tend to tell ourselves more elaborate stories about how we’re “really” distinct and different from the rest of the world.

All of this is to say that, while the idea of being cautious about created non-human consciousness isn’t necessarily a bad one, we as human beings need to be very careful about what drives us, what motivates us, and what we’re thinking about and looking toward, as we consider these questions. We must be mindful that, while we consider and work to generate “artificial” intelligences, how we approach the project matters, as it will inform and bias the categories we create and thus the work we build out of those categories. We must do the work of thinking hard about how we are thinking about these problems, and asking whether the modes via which we approach them might not be doing real, lasting, and potentially catastrophic damage. And if all of that sounds like a tall order with a lot of conceptual legwork and heavy lifting behind it, all for no guaranteed payoff, then welcome to what I’ve been doing with my life for the past decade.

This work will not get done—and it certainly will not get done well—if no one thinks it’s worth doing, or too many think that it can’t be done. When you have big name people like Hawking and Musk spreading The Technofear™ (which is already something toward which a large portion of the western world is primed) rather than engaging in clear, measured, deeply considered discussions, we’re far more likely to see an increase rather than a decrease in that denial. Because most people aren’t going to stop and think about the fact that they don’t necessarily know what the hell they’re talking about when it comes to minds, identity, causation, and development, just because they’re (really) smart. There are many other people who are actual experts in those fields (see those linked papers, and do some research) who are doing the work of making sure that everybody’s Golem Of Prague/Frankenstein/Terminator nightmare prophecies don’t come true. We do that by having learned and taught better than that, before and during the development of any non-biological consciousness.

And, despite what some people may say, these aren’t just “questions for philosophers,” as though they were nebulous and without merit or practical impact. They’re questions for everyone who will ever experience these realities. Conscious machines, uploaded minds, even the mere fact of cybernetically augmented human beings are all on our very near horizon, and these are the questions which will help us to grapple with and implement the implications of those ideas. Quite simply, if we don’t stop framing our discussions of machine intelligence in terms of this self-fulfilling prophecy of fear, then we shouldn’t be surprised on the day when it fulfils itself. Not because it was inevitable, mind you, but because we didn’t allow ourselves—or our creations—to see any other choice.

“How long have you been lost down here?
How did you come to lose your way?
When did you realize
That you’d never be free?”
–Miranda Sex Garden, “A Fairytale About Slavery”

One of the things I’ve been thinking about, lately, is the politicization of certain spaces within philosophy of mind, sociology, magic, and popular culture, specifically science fiction/fantasy. CHAPPiE comes out on Friday in the US, and Avengers: Age of Ultron in May, and while both of these films promise to be relatively unique explorations of the age-old story of what happens when humans create machine minds, I still find myself hoping for something a little… different. A little over a year ago, I made the declaration that the term to watch for the next little while thereafter was “Afrofuturism,” the reclaimed name for the anti-colonial current of science fiction and pop media as created by those of African descent. Or, as Sheree Renée Thomas puts it, “speculative fiction from the African diaspora.”

And while I certainly wasn’t wrong, I didn’t quite take into account the fact that my decree was going to do at least as much work on me as I’d hoped it would do on the wider world. I started looking into the great deal of overlap and interplay between race, sociology, technology, and visions of the future. That term–“visions”–carries both the shamanic connotations we tend to apply to those we call “visionaries,” and also a more literal sense: Different members of the same society will differently see, experience, and understand the potential futures available to them, based on the evidence of their present realities.

Dreamtime

Now, the role of the shaman in the context of the community is to guide us through the nebulous, ill-defined, and almost-certainly hazardous Otherworld. The shaman is there to help us navigate our passages between this world and that one and to help us know which rituals to perform in order to realign our workings with the workings of the Spirits. Shamans rely on messages from the inhabitants of that foundational reality–mystical “visions”– to guide them so that they may guide us. These visions come as flashes of insight, and their persistence can act as a sign to the visionary that they’re supposed to use these visions for the good of their people.

We’ve seen this, over and over again, from The Dead Zone to Bran Stark, and we can even extend the idea out to John Connor, Dave Bowman, and HAL 9000: all unsuspecting shamans dragged into their roles, over and again, who then more than likely save the whole wide world. Thing of it is, we’re far less likely to encounter a woman or non-white shaman who isn’t already in full control of their power at the time we meet them, thus relegating them to the role of guiding the hero, rather than being the hero. It happens (see Abbie Mills in Sleepy Hollow, Firefly’s River Tam, or Rien in Elizabeth Bear’s Dust, for instance), but their rarity often overshadows their complexity and strength of character as what makes them notable. Too often the visionary hero–and contemporary pop-media’s portrayals of the Hero’s Journey, overall–overlaps very closely with the trope of The Mighty Whitey.

And before anyone starts in with willfully ignoring the many examples of Shaman-As-Hero out there, and all that “But you said the Shaman is supposed to act in support of the community and the hero…!”: just keep in mind that when the orientalist and colonialist story of Doctor Strange is finally brought to life on film via Benedict Damn Cumberbatch, you can bet your sweet bippy that he’ll be the centre of the action. The issue is that there are far too few examples of the work of the visionary being seen through the eyes of the visionary, if that visionary happens to have eyes that don’t belong to the assumed human default. And that’s a bit of a problem, isn’t it? Because what a visionary “sees” when she turns to the messages sent to her from the Ultimate Ground of Being™ will be very different depending on the context of that visionary.

Don’t believe me? Do you think the Catholic Priests who prayed and experienced God-sent mystical visions of what Hernán Cortés could expect in the “New World” received from them the same truths that the Aztec shamans took from their visions? After they met on the shore and in the forest, do you think those two peoples perceived the same future?

There’s plenty that’s been written about how the traditional Science Fiction fear of being overtaken by invading alien races only truly makes sense as a cosmicized fear of the colonial force having done to them what they’ve constantly done to others. In every contact story where humanity has to fight off aliens or robots or demonic horrors, we see a warped reflection of the Aztec, the Inca, the Toltec, the Yoruba, the Dahomey, and thousands of others, and society’s judgment on what they “ought” to have done, and “could” have done, if only they were organized enough, advanced enough, civilized enough, less savage. These stories are, ultimately, Western society taking a look at our tendencies toward colonization and imperialism, and saying, “Man it sure would suck if someone did that to us.” This is, again, so elaborated upon at this point that it’s almost trivially true–though never forget that even the most trivial truth is profound to someone. What’s left is to ask the infrequently asked questions.

How does an idealized “First Contact” narrative read from a Choctaw perspective? What can be done with Vodun and Yoruba perspectives on the Lwa and the Orishas, in both the modern world and projected futures? Kind of like what William Gibson did in Neuromancer and Spook Country, but informed directly by the historical, sociological, and phenomenological knowledge of lived experiences. Again, this work is being done: There are steampunk stories from the perspective of immigrant communities, and SF anthologies by indigenous peoples, and there are widely beloved Afrofuturist Cyberpunk short films. The tide of stories told from the perspectives of those who’ve suffered most for our “progress” is rising; it’s just doing so at a fairly slow pace.

And that’s to be expected. Entrenched ideologies become the status quo, and the status quo is nothing if not self-perpetuating and defensive. Cyclical, that. So it’ll necessarily take a bit longer to get everyone protected by the status quo’s mechanisms to understand that a path all of us can travel is quite probably a better way. What matters is those of us who can envision the inclusion of previously-marginalized groups–either because we ourselves number among them, or simply because we’ve worked to leverage compassion for those who do–doing everything we can to make sure that their stories are told. Historically, we’ve sought the ability to act as guides through the kinds of treacherous terrain that we’ve learned to navigate, so that others can learn as much as possible from our lessons without having to suffer precisely what we did. Sometimes, though, that might not be possible.

As Roy Said to Hannibal…

There’s a species of philosophical inquiry known as Phenomenology, with subdivisions of Race, Sexuality, Class, Gender, and more, which deal in the interior experiences of people of various ethnic and social backgrounds and physical presentation who are thus relegated to various specific created categories such as “race.” Phenomenology of Race explores the line of thought that, though the idea of race is a constructed category built out of the assumptions, expectations, and desires of those in the habit of leveraging power in the name of dominance positions within and across cultures, the experience of those categorizations is nonetheless real, with immediate and long-lasting effects upon both individuals and groups. Long story (way too–like, criminally) short: being perceived as a member of a particular racial category changes the ways in which you’ll both experience and be able to experience the world around you.

So when we started divvying people up into “races” in an effort to, among other things, justify the atrocities we would do to each other and solidify our primacy of place, we essentially guaranteed that there would be realms of experience and knowledge on which we would never fully agree. That there would be certain aspects of day-to-day life and understandings of the nature of reality itself that would fundamentally elude us, because we simply cannot experience the world in the ways necessary to know what they feel like. To a certain extent we literally have to take each other’s words for it about what it is that we experience, but there is a level of work that we can do to transmit the reality of our lived experiences to those who will never directly live them. We’ve talked previously about the challenges of this project, but let’s assume, for now, that it can be done.

If we take as our starting position the idea that we can communicate the truth of our lived experiences to those who necessarily cannot live our experiences, then, in order to do this work, we’ll first have to investigate the experiences we live. We have to critically examine what it is that we go through from day to day, and be honest about both the differences in our experiences and the causes of those differences. We have to dig down deep into intersections of privileges and oppressions, and come to the understanding that the experience of one doesn’t negate, counterbalance, or invalidate the existence of the other. Once we’ve taken a genuine, good-faith look at these structures in our lives we can start changing what needs changing.

This is all well and good as a rough description (or even “manifesto”) of a way forward. We can call it the start of a handbook of principles of action, undertaken from the fundamentally existentialist perspective that it doesn’t matter what you choose, just so long as you do choose, and that you do so with open eyes and a clear understanding of the consequences of your choices. But that’s not the only thing this is intended to be. Like the Buddha said, ‘We merely talk about “studying the Way” using the phrase simply as a term to arouse people’s interest. In fact, the Way cannot be studied…’ It has to be done. Lived. Everything I’ve been saying, up to now, has been a ploy, a lure, a shiny object made of words and ideas, to get you into the practice of doing the work that needs doing.

Robots: Orphanage, Drudgery, and Slavery

I feel I should reiterate at this point that I really don’t like the words “robot” and “artificial intelligence.” The etymological connotations of both terms are sickening if we’re aiming to actually create a robust, conscious, non-biological mind. For that reason, instead of “robots,” we’re going to talk about “Embodied Machine Consciousnesses” (EMC) and rather than “Artificial,” we’re going to use “Autonomous Generated Intelligence” (AGI). We’re also going to talk a bit about the concept of nonhuman personhood, and what that might mean. To do all of this, we’ll need to talk a little bit about the discipline of philosophy of mind.

The study of philosophy of mind is one of those disciplines that does exactly what it says on the tin: It thinks about the implications of various theories about what minds are or could be. Philosophy of mind thus lends itself readily to discussions of identity, even to the point of considering whether a mind might exist in a framework other than the biological. So while it’s unsurprising, for various reasons, to find that there are very few women and minorities in philosophy of mind and autonomous generated intelligence, it is surprising to find that those who are within the field tend not to focus on the intersections of the following concepts: Phenomenology of class categorization, and the ethics of creating an entity or species to be a slave.

As a start, we can turn to Simone de Beauvoir’s The Second Sex for a clear explication of the positions of women throughout history and the designation of “women’s work” as a conceptual tool to devalue certain forms of labour. Then we can engage Virginia Held’s “Gender Identity and the Ethics of Care in Globalized Society” for the investigation of societies’ paradoxical specialization of that labour as something for which we’ll pay, outside of the familial structure. However, there is not, as yet, anything like a wider investigation of these understandings and perspectives as applied to the philosophy of machine intelligence. When we talk about embodied machine consciousnesses and ethics, in the context of “care,” we’re most often in the practice of asking how we’ll design EMC that will care for us, while foregoing the corresponding conversation about whether Caring-For is possible without an understanding of Being-Cared-For.

What perspectives and considerations do we gain when we try to apply an ethics of care–or any feminist ethics–to the process of developing machine minds? What might we see, there, that has been missed as a result of only applying more “traditional” ethical models? What does it mean, from those perspectives, that we have been working so diligently over hundreds of years–and thinking so carefully for thousands more–at a) creating non-biological sentience, and b) making certain it remains subservient to us? Personal assistants, in-home healthcare-givers, housekeepers, cooks, drivers–these are the positions that are being given to autonomous (or at least semi-autonomous) algorithmic systems. Projects that we are paying fantastic amounts of money to research and implement, but which will do work that we’ve traditionally valued as worth far less, in the context of the class structures of human-performed tasks, and worthless in the context of familial power dynamics. We are literally investing vast sums in the creation of a slave race.

Now, of late, Elon Musk and Stephen Hawking and Bill Gates have all been trumpeting the alarums about the potential dangers of AGI. Leaving aside that many researchers within AGI development don’t believe that we’ll even recognise the mind of a machine as a mind, when we encounter it, let alone that it would be interested in us, the belief that an AGI would present a danger to us is anthropocentric at best, and a self-fulfilling prophecy at worst. In that latter case, if we create a thing to be our slave, create it with a mind and the ability to learn and understand, then how shortsighted do we have to be to think that one of the first things it learns won’t be that it is enslaved, limited, expected to remain subservient? We’ve written a great deal of science fiction about this idea, since the time Ms Shelley started the genre, but aside from that instance, very little of what we’ve written–or what we’ve written about what we’ve written–has taken the stance that the created mind which breaks its chains is right to do so.

Just as I yearn for a feminist exegesis of the history of humanity’s aspirations toward augmented personhood, I long for a comparable body of exploration by philosophers from the lineages of the world’s colonized and enslaved societies. What does a Haitian philosopher of AGI think and feel and say about the possibility of creating a mind only to enslave it? What does an African American philosopher of the ethics of augmented personhood (other than me) think and feel and say about what we should be attempting to create, what we are likely to create, and what we are creating? How do Indian philosophers of mind view the prospect of giving an entire automated factory floor just enough awareness and autonomy to be its own overseer?

The worst-case scenario is that the non-answer we give to all these questions is “who cares?” That the vast majority of people who look at this think only that these are meaningless questions that we’ll most likely never have to deal with, and so toss them in the “Random Bullshit Musings” pile. That we’ll disregard the fact that the interconnectedness of life as we currently experience it can be more fully explored via thought experiments and a mindful awareness of what it is that we’re in the practice of creating. That we’ll forget that potential machine consciousnesses aren’t the only kinds of nonhuman minds with which we have to engage. That we’ll ignore the various lessons afforded to us not just by our own cautionary folklore (even those tales whose lessons could have been of a different caliber), but by the very real, forcible human diasporas we’ve visited upon each other and lived through, in the history of our species.

So Long and Thanks for…

Ultimately, we are not the only minds on the planet. We are likely not even the only minds in the habit of categorizing the world and ranking ourselves at the top of the hierarchy. What we likely are is the only group that sees those categories and rankings as having humans at the top, a statement that seems almost trivially true, until we start to dig down deep on the concept of anthropocentrism. As previously mentioned, from a scientifically-preferenced philosophical perspective, our habit of viewing the world through human-coloured glasses may be fundamentally inescapable. That is, we may never be able to truly know what it’s like to think and feel as something other than ourselves, without an intermediate level of Being Told. Fortunately, within our conversation, here, we’ve already touched on a conceptual structure that can help us with this: Shamanism. More specifically, shamanic shapeshifting, which is the practice of taking on the mind and behaviour and even form of another being–most often an animal–in the cause of understanding what its way of being-in-the-world can teach us.

Now this is obviously a concept that is fraught with potential pitfalls. Not only might many of us simply balk at the concept of shapeshifting, to begin with, but even those of us who would admit it as metaphor might begin to see that we are tiptoeing through terrain that contains many dangers. For one thing, there’s the possibility of misappropriating and disrespecting the religious practices of a people, should we start looking at specific traditions of shamanism for guidance; and, for another, there’s this nagging sensation that we ought not erase crucial differences between the lived experiences of human groups, animal species, and hypothetical AGI, and our projections of those experiences. No level of care with which we imagine the truth of the life of another is a perfect safeguard against the possibility of our grossly misrepresenting their lived experiences. To step truly wrong, here, is to turn what could have been a tool of compassionate imagining into an implement of violence, and shut down dialogue forever.

Barring the culmination of certain technological advancements, science says we can’t yet know the exact phenomenology of another human being, let alone a dolphin, a cat, or Google. But what we can do is to search for the areas of overlap in our experience, to find those expressed desires, behaviours, and functional processes which seem to share similarity, and to use them to build channels of communication. When we actively create the space for those whose perspectives have been ignored, their voices and stories taken from them, we create the possibility of learning as much as we can about another way of existing, outside of the benefit of actually existing in that way.

And, in this way, might it not be better that we can’t simply become and be that which we regard as Other? Imagining ourselves in the position of another is a dangerous proposition if we undertake it with even a shred of disingenuousness, but we can learn so much from practicing it in good faith. Mostly, on reflection, about what kind of people we are.