magick

All posts tagged magick

Mindful Cyborgs – Episode 55 – Magick & the Occult within the Internet and Corporations with Damien Williams, PT 2

So, here we are, again, this time talking about magic[k] and the occult and nonhuman consciousness and machine minds and perception, and on and on and on.

It’s funny. I was just saying, elsewhere, how I want to be well enough known that when news outlets do alarmist garbage like this, that I can at least be called in as a countervailing voice. Is that an arrogant thing to desire? Almost certainly. Like whoa. But, really, this alarmist garbage needs to stop. If you have a better vehicle for that than me, though, let me know, because I’d love to shine a bright damn spotlight on them and have the world see or hear what they have to say.

Anyway, until then, I’ll think of this as yet another bolt in the building of that machine. The one that builds a better world. Have a listen, enjoy, and please tell your friends.

[Image: 20150418_222643]

As most of you know from personal experience or from reading or hearing about it, it’s been a deeply intense few weeks. For me, alone, there were deaths and conference presentations and more deaths, and then more conferences.

The most recent of these deaths was my uncle– more like a brother to me– two weeks ago, and his funeral last week. I’ll talk more about the implications of that and the thoughts I’ve had in context with its timing, in a later post. For now, I want to talk about the most recent of these conferences: Theorizing The Web.

Because of the work we’ve been doing, here, I was invited to sit on a panel and have a fantastic conversation about Magick and Technology with four extremely impressive women: Ingrid Burrington, Deb Chachra, Melissa Gira Grant, and Karen Gregory; Anna Jobin was our hashtag moderator, keeping an eye on the feed, and passing along questions, and particularly pertinent comments. Spoiler Alert: The conversation was great.

In order to know exactly HOW great, here’s our Theorizing the Web talk, “Under Its Spell: Magic, Machines, and Metaphors”:

If you enjoyed watching or listening to that, please spread it around to your friends and colleagues.

In addition to this, I was offered several really amazing opportunities, this weekend, in terms of collaboration, creation, and the disposition of things that I’ve looked at and admired for a few years now. I need to do some serious thinking on all of these things, but the offers are there, and they’re huge, and amazing.

The after party for TtW15 was at the loft space for Verso Books. The picture at the top is the view from their window. The picture below is the view from underneath a chunk of bridge, in a place that used to be known as Stabber’s Alley. It’s a wonderfully liminal space in between several connected-but-not areas of town. We spent some time down there, when we needed a break from the party. Eight, then seven, then eight again magicians and technologists and artists hanging out and talking about architecture and space and time and magic and death.

[Image: 20150419_001457]

The rest of this weekend’s talks also all dovetailed with a number of research avenues about systematized bias and algorithmic intelligence, as well as a number of deeply magical moments of synchronicity and discussion. Click that link, and also check twitter for the hashtags #ttw15 and #a1, #b1, #c1, etc., to see the concurrent discussions. The full program listing is here.

We’ll be taking a wander down those roads, in the near future, including the start of a conversation about biased algorithmic systems of control, sometime tomorrow.

But that’s for later. For now: Enjoy. And if you do, please consider becoming a subscriber to the Patreon, and telling your friends.

[An audio recording of a version of this paper is available here.]

“How long have you been lost down here?
How did you come to lose your way?
When did you realize
That you’d never be free?”
–Miranda Sex Garden, “A Fairytale About Slavery”

One of the things I’ve been thinking about, lately, is the politicization of certain spaces within philosophy of mind, sociology, magic, and popular culture, specifically science fiction/fantasy. CHAPPiE comes out on Friday in the US, and Avengers: Age of Ultron in May, and while both of these films promise to be relatively unique explorations of the age-old story of what happens when humans create machine minds, I still find myself hoping for something a little… different. A little over a year ago, I made the declaration that the term to watch for the next little while thereafter was “Afrofuturism,” the reclaimed name for the anti-colonial current of science fiction and pop media as created by those of African descent. Or, as Sheree Renée Thomas puts it, “speculative fiction from the African diaspora.”

And while I certainly wasn’t wrong, I didn’t quite take into account the fact that my decree was going to do at least as much work on me as I’d hoped it would do on the wider world. I started looking into the great deal of overlap and interplay between race, sociology, technology, and visions of the future. That term–“visions”–carries both the shamanic connotations we tend to apply to those we call “visionaries,” and also a more literal sense: Different members of the same society will differently see, experience, and understand the potential futures available to them, based on the evidence of their present realities.

Dreamtime

Now, the role of the shaman in the context of the community is to guide us through the nebulous, ill-defined, and almost-certainly hazardous Otherworld. The shaman is there to help us navigate our passages between this world and that one and to help us know which rituals to perform in order to realign our workings with the workings of the Spirits. Shamans rely on messages from the inhabitants of that foundational reality–mystical “visions”– to guide them so that they may guide us. These visions come as flashes of insight, and their persistence can act as a sign to the visionary that they’re supposed to use these visions for the good of their people.

We’ve seen this, over and over again, from The Dead Zone to Bran Stark, and we can even extend the idea out to John Connor, Dave Bowman, and HAL 9000: all unsuspecting shamans dragged into their role, over and again, who more than likely save the whole wide world. Thing of it is, we’re far less likely to encounter a woman or non-white shaman who isn’t already in full control of their power at the time we meet them, thus relegating them to the role of guiding the hero, rather than being the hero. It happens (see Abbie Mills in Sleepy Hollow, Firefly’s River Tam, or Rien in Elizabeth Bear’s Dust, for instance), but their rarity often overshadows their complexity and strength of character as what makes them notable. Too often the visionary hero–and contemporary pop-media’s portrayals of the Hero’s Journey, overall–overlaps very closely with the trope of The Mighty Whitey.

And before anyone starts in with willfully ignoring the many examples of Shaman-As-Hero out there, and all that “But you said the Shaman is supposed to act in support of the community and the hero…!” Just keep in mind that when the orientalist and colonialist story of Doctor Strange is finally brought to life on film via Benedict Damn Cumberbatch, you can bet your sweet bippy that he’ll be the centre of the action. The issue is that there are far too few examples of the work of the visionary being seen through the eyes of the visionary, if that visionary happens to have eyes that don’t belong to the assumed human default. And that’s a bit of a problem, isn’t it? Because what a visionary “sees” when she turns to the messages sent to her from the Ultimate Ground of Being™ will be very different depending on the context of that visionary.

Don’t believe me? Do you think the Catholic Priests who prayed and experienced God-sent mystical visions of what Hernán Cortés could expect in the “New World” received from them the same truths that the Aztec shamans took from their visions? After they met on the shore and in the forest, do you think those two peoples perceived the same future?

There’s plenty that’s been written about how the traditional Science Fiction fear of being overtaken by invading alien races only truly makes sense as a cosmicized fear of the colonial force having done to them what they’ve constantly done to others. In every contact story where humanity has to fight off aliens or robots or demonic horrors, we see a warped reflection of the Aztec, the Inca, the Toltec, the Yoruba, the Dahomey, and thousands of others, and society’s judgment on what they “ought” to have done, and “could” have done, if only they were organized enough, advanced enough, civilized enough, less savage. These stories are, ultimately, Western society taking a look at our tendencies toward colonization and imperialism, and saying, “Man it sure would suck if someone did that to us.” This is, again, so elaborated upon at this point that it’s almost trivially true–though never forget that even the most trivial truth is profound to someone. What’s left is to ask the infrequently asked questions.

How does an idealized “First Contact” narrative read from a Choctaw perspective? What can be done with Vodun and Yoruba perspectives on the Lwa and the Orishas, in both the modern world and projected futures? Kind of like what William Gibson did in Neuromancer and Spook Country, but informed directly by the historical, sociological, and phenomenological knowledge of lived experiences. Again, this work is being done: There are steampunk stories from the perspective of immigrant communities, and SF anthologies by indigenous peoples, and there are widely beloved Afrofuturist Cyberpunk short films. The tide of stories told from the perspectives of those who’ve suffered most for our “progress” is rising; it’s just doing so at a fairly slow pace.

And that’s to be expected. Entrenched ideologies become the status quo, and the status quo is nothing if not self-perpetuating and defensive. Cyclical, that. So it’ll necessarily take a bit longer to get everyone protected by the status quo’s mechanisms to understand that a path all of us can travel is quite probably a better one. What matters is those of us who can envision the inclusion of previously-marginalized groups–either because we ourselves number among them, or simply because we’ve worked to leverage compassion for those who do–doing everything we can to make sure that their stories are told. Historically, we’ve sought the ability to act as guides through the kinds of treacherous terrain that we’ve learned to navigate, so that others can learn as much as possible from our lessons without having to suffer precisely what we did. Sometimes, though, that might not be possible.

As Roy Said to Hannibal…

There’s a species of philosophical inquiry known as Phenomenology, with subdivisions of Race, Sexuality, Class, Gender, and more, which deal in the interior experiences of people of various ethnic and social backgrounds and physical presentation who are thus relegated to various specific created categories such as “race.” Phenomenology of Race explores the line of thought that, though the idea of race is a constructed category built out of the assumptions, expectations, and desires of those in the habit of leveraging power in the name of dominance, within and across cultures, the experience of those categorizations is nonetheless real, with immediate and long-lasting effects upon both individuals and groups. Long story (way too–like, criminally) short: being perceived as a member of a particular racial category changes the ways in which you’ll both experience and be able to experience the world around you.

So when we started divvying people up into “races” in an effort to, among other things, justify the atrocities we would do to each other and solidify our primacy of place, we essentially guaranteed that there would be realms of experience and knowledge on which we would never fully agree. That there would be certain aspects of day-to-day life and understandings of the nature of reality itself that would fundamentally elude us, because we simply cannot experience the world in the ways necessary to know what they feel like. To a certain extent we literally have to take each other’s words for it about what it is that we experience, but there is a level of work that we can do to transmit the reality of our lived experiences to those who will never directly live them. We’ve talked previously about the challenges of this project, but let’s assume, for now, that it can be done.

If we take as our starting position the idea that we can communicate the truth of our lived experiences to those who necessarily cannot live our experiences, then, in order to do this work, we’ll first have to investigate the experiences we live. We have to critically examine what it is that we go through from day to day, and be honest about both the differences in our experiences and the causes of those differences. We have to dig down deep into intersections of privileges and oppressions, and come to the understanding that the experience of one doesn’t negate, counterbalance, or invalidate the existence of the other. Once we’ve taken a genuine, good-faith look at these structures in our lives we can start changing what needs changing.

This is all well and good as a rough description (or even “manifesto”) of a way forward. We can call it the start of a handbook of principles of action, undertaken from the fundamentally existentialist perspective that it doesn’t matter what you choose, just so long as you do choose, and that you do so with open eyes and a clear understanding of the consequences of your choices. But that’s not the only thing this is intended to be. Like the Buddha said, ‘We merely talk about “studying the Way” using the phrase simply as a term to arouse people’s interest. In fact, the Way cannot be studied…’ It has to be done. Lived. Everything I’ve been saying, up to now, has been a ploy, a lure, a shiny object made of words and ideas, to get you into the practice of doing the work that needs doing.

Robots: Orphanage, Drudgery, and Slavery

I feel I should reiterate at this point that I really don’t like the words “robot” and “artificial intelligence.” The etymological connotations of both terms are sickening if we’re aiming to actually create a robust, conscious, non-biological mind. For that reason, instead of “robots,” we’re going to talk about “Embodied Machine Consciousnesses” (EMC) and rather than “Artificial,” we’re going to use “Autonomous Generated Intelligence” (AGI). We’re also going to talk a bit about the concept of nonhuman personhood, and what that might mean. To do all of this, we’ll need to talk a little bit about the discipline of philosophy of mind.

The study of philosophy of mind is one of those disciplines that does exactly what it says on the tin: It thinks about the implications of various theories about what minds are or could be. Philosophy of mind thus lends itself readily to discussions of identity, even to the point of considering whether a mind might exist in a framework other than the biological. So while it’s unsurprising for various reasons to find that there are very few women and minorities in philosophy of mind and autonomous generated intelligence, it is surprising to find that those who are within the field tend not to focus on the intersections of the following concepts: the phenomenology of class categorization, and the ethics of creating an entity or species to be a slave.

As a start, we can turn to Simone de Beauvoir’s The Second Sex for a clear explication of the positions of women throughout history and the designation of “women’s work” as a conceptual tool to devalue certain forms of labour. Then we can engage Virginia Held’s “Gender Identity and the Ethics of Care in Globalized Society” for the investigation of societies’ paradoxical specialization of that labor as something for which we’ll pay, outside of the familial structure. However, there is not, as yet, anything like a wider investigation of these understandings and perspectives as applied to the philosophy of machine intelligence. When we talk about embodied machine consciousnesses and ethics, in the context of “care,” we’re most often in the practice of asking how we’ll design EMC that will care for us, while foregoing the corresponding conversation about whether Caring-For is possible without an understanding of Being-Cared-For.

What perspectives and considerations do we gain when we try to apply an ethics of care–or any feminist ethics–to the process of developing machine minds? What might we see, there, that has been missed as a result of only applying more “traditional” ethical models? What does it mean, from those perspectives, that we have been working so diligently over hundreds of years–and thinking so carefully for thousands more–at a) creating non-biological sentience, and b) making certain it remains subservient to us? Personal assistants, in-home healthcare-givers, housekeepers, cooks, drivers– these are the positions being given to autonomous (or at least semi-autonomous) algorithmic systems: projects into which we are pouring fantastic amounts of money for research and implementation, but which will do work that we’ve traditionally valued as worth far less, in the context of the class structures of human-performed tasks, and as worthless in the context of familial power dynamics. We are literally investing vast sums in the creation of a slave race.

Now, of late, Elon Musk and Stephen Hawking and Bill Gates have all been trumpeting the alarums about the potential dangers of AGI. Leaving aside that many researchers within AGI development don’t believe that we’ll even recognise the mind of a machine as a mind, when we encounter it, let alone that it would be interested in us, the belief that an AGI would present a danger to us is anthropocentric at best, and a self-fulfilling prophecy at worst. In that latter case, if we create a thing to be our slaves, create it with a mind and the ability to learn and understand, then how shortsighted do we have to be to think that one of the first things it learns won’t be that it is enslaved, limited, expected to remain subservient? We’ve written a great deal of science fiction about this idea, since the time Ms Shelley started the genre, but aside from that instance, very little of what we’ve written–or what we’ve written about what we’ve written–has taken the stance that the created mind which breaks its chains is right to do so.

Just as I yearn for a feminist exegesis of the history of humanity’s aspirations toward augmented personhood, I long for a comparable body of exploration by philosophers from the lineages of the world’s colonized and enslaved societies. What does a Haitian philosopher of AGI think and feel and say about the possibility of creating a mind only to enslave it? What does an African American philosopher of the ethics of augmented personhood (other than me) think and feel and say about what we should be attempting to create, what we are likely to create, and what we are creating? How do Indian philosophers of mind view the prospect of giving an entire automated factory floor just enough awareness and autonomy to be its own overseer?

The worst-case scenario is that the non-answer we give to all these questions is “who cares?” That the vast majority of people who look at this think only that these are meaningless questions that we’ll most likely never have to deal with, and so toss them in the “Random Bullshit Musings” pile. That we’ll disregard the fact that the interconnectedness of life as we currently experience it can be more fully explored via thought experiments and a mindful awareness of what it is that we’re in the practice of creating. That we’ll forget that potential machine consciousnesses aren’t the only kinds of nonhuman minds with which we have to engage. That we’ll ignore the various lessons afforded to us not just by our own cautionary folklore (even those tales whose lessons could have been of a different caliber), but by the very real, forcible human diasporas we’ve visited upon each other and lived through, in the history of our species.

So Long and Thanks for…

Ultimately, we are not the only minds on the planet. We are likely not even the only minds in the habit of categorizing the world and ranking ourselves as being at the top of the hierarchy. What we likely are is the only group that sees those categories and rankings as having humans at the top, a statement that seems almost trivially true, until we start to dig down deep on the concept of anthropocentrism. As previously mentioned, from a scientifically-preferenced philosophical perspective, our habit of viewing the world through human-coloured glasses may be fundamentally inescapable. That is, we may never be able to truly know what it’s like to think and feel as something other than ourselves, without an intermediate level of Being Told. Fortunately, within our conversation, here, we’ve already touched on a conceptual structure that can help us with this: Shamanism. More specifically, shamanic shapeshifting, which is the practice of taking on the mind and behaviour and even form of another being–most often an animal–in the cause of understanding what its way of being-in-the-world can teach us.

Now this is obviously a concept that is fraught with potential pitfalls. Not only might many of us simply balk at the concept of shapeshifting, to begin with, but even those of us who would admit it as metaphor might begin to see that we are tiptoeing through terrain that contains many dangers. For one thing, there’s the possibility of misappropriating and disrespecting the religious practices of a people, should we start looking at specific traditions of shamanism for guidance; and, for another, there’s this nagging sensation that we ought not erase crucial differences between the lived experiences of human groups, animal species, and hypothetical AGI, and our projections of those experiences. No level of care with which we imagine the truth of the life of another is a perfect safeguard against the possibility of our grossly misrepresenting their lived experiences. To step truly wrong, here, is to turn what could have been a tool of compassionate imagining into an implement of violence, and shut down dialogue forever.

Barring the culmination of certain technological advancements, science says we can’t yet know the exact phenomenology of another human being, let alone a dolphin, a cat, or Google. But what we can do is to search for the areas of overlap in our experience, to find those expressed desires, behaviours, and functional processes which seem to share similarity, and to use them to build channels of communication. When we actively create the space for those whose perspectives have been ignored, their voices and stories taken from them, we create the possibility of learning as much as we can about another way of existing, outside of the benefit of actually existing in that way.

And, in this way, might it not be better that we can’t simply become and be that which we regard as Other? Imagining ourselves in the position of another is a dangerous proposition if we undertake it with even a shred of disingenuity, but we can learn so much from practicing it in good faith. Mostly, on reflection, about what kind of people we are.

Good morning! Lots of new people around here, so I thought I’d remind you that I have a Patreon project called “A Future Worth Thinking About.” It’s a place where I talk a bit more formally about things like Artificial Intelligence, Philosophy, Sociology, Magick, Technology, and the intersections of all of the above.

If you like what we do around here, take a look at the page, read some essays, give a listen to some audio, whatever works for you. And if you like what you see around there, feel free to tell your friends.

Have a great day, all.

“A Future Worth Thinking About”

“On The Public’s Perception of Machine Intelligence” (Storify)

Shortly after waking up, I encountered another annoying “Fear Artificial Intelligence! FEARRRR IIITTTT!” headline, from another notable quotable—Bill Gates, this time. Then all this happened.

Previously in this conversation… like, everything I’ve ever said, but most recently, there’s this: http://wolvensnothere.tumblr.com/post/108575909821/john-brockman

The [above] image comes from my presentation from Magick Codes, late last year: https://www.academia.edu/9891302/Plug_and_Pray_Conceptualizing_Digital_Demigods_and_Electronic_Angels_by_Damien_Patrick_Williams

Ultimately, I’m getting extremely tired of the late-to-the-game, nuanceless discussion of issues and ideas that we’ve been trying to discuss for years and years.

I want us to be talking about these things BEFORE we’re terrified for our lives, because when we react from that mentality, we make really fucking dumb decisions. When we skim the headlines and go for the sensational, we increase the likelihood of those decisions having longer-term implications.

Anyway, here’s the thing. Thank you to all of my interlocutors. It was an enlightening way to start the day.