
(Direct Link to the Mp3)

This is the recording and the text of my presentation from 2017’s Southwest Popular/American Culture Association Conference in Albuquerque, ‘Are You Being Watched? Simulated Universe Theory in “Person of Interest.”‘

This essay is something of a project of expansion and refinement of my previous essay “Labouring in the Liquid Light of Leviathan,” which considered the Roko’s Basilisk thought experiment. Much of the expansion comes from considering the nature of simulation, memory, and identity within Jonathan Nolan’s TV series, Person of Interest. As such, it does contain what might be considered spoilers for the series, as well as for his most recent follow-up, Westworld.

Use your discretion to figure out how you feel about that.


Are You Being Watched? Simulated Universe Theory in “Person of Interest”

Jonathan Nolan’s Person Of Interest is the story of the birth and life of The Machine, a benevolent artificial superintelligence (ASI) built in the months after September 11, 2001, by super-genius Harold Finch to watch over the world’s human population. One of the key intimations of the series—and partially corroborated by Nolan’s follow-up series Westworld—is that all of the events we see might be taking place in the memory of The Machine. The structure of the show is such that we move through time from The Machine’s perspective, with flashbacks and -forwards seeming to occur via the same contextual mechanism—the Fast Forward and Rewind of a digital archive. While the entirety of the series uses this mechanism, the final season puts the finest point on the question: Has everything we’ve seen only been in the mind of The Machine? And if so, what does that mean for all of the people in it?

Our primary questions here are as follows: Is a simulation of fine enough granularity really just a simulation at all? If the minds created within that universe have interiority and motivation, if they function according to the same rules as those things we commonly accept as minds, then are those simulations not minds, as well? In what way are conclusions drawn from simulations akin to what we consider “true” knowledge?

In the PoI season 5 episode “The Day The World Went Away,” the characters Root and Shaw (acolytes of The Machine) discuss the nature of The Machine’s simulation capacities, and the audience is given to understand that it runs a constant model of everyone it knows, and that the better it knows them, the better its simulation. This supposition links us back to the season 4 episode “If-Then-Else,” in which The Machine runs through hundreds of thousands of scenarios, weighing each one’s likelihood of success, in under one second. If The Machine is able to accomplish this much computation in this short a window, how much can and has it accomplished over the several years of its operation? Perhaps more importantly, what is the level of fidelity of those simulations to the so-called real world?

[Person of Interest s4e11, “If-Then-Else.” The Machine runs through hundreds of thousands of scenarios to save the team.]
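As a toy illustration of what that kind of scenario search can look like in code (a sketch of my own, with entirely hypothetical plans and success rates, not anything from the show), the basic move is to simulate many noisy playthroughs of each candidate plan and keep the one with the best estimated outcome:

```python
import random

def simulate(plan, rng):
    """Stand-in for a world-model rollout: return a noisy success
    score for one simulated playthrough of `plan`. A real system
    would use a learned model of the world here."""
    return plan["base_success"] + rng.gauss(0, 0.1)

def best_plan(plans, total_trials=100_000, seed=42):
    """Roll out each plan many times and return the name of the one
    with the highest average simulated success."""
    rng = random.Random(seed)
    per_plan = total_trials // len(plans)
    scores = {}
    for plan in plans:
        runs = [simulate(plan, rng) for _ in range(per_plan)]
        scores[plan["name"]] = sum(runs) / len(runs)
    return max(scores, key=scores.get), scores

# Hypothetical candidate plans; the numbers are made up for the sketch.
plans = [
    {"name": "split_up", "base_success": 0.4},
    {"name": "take_the_elevator", "base_success": 0.6},
    {"name": "surrender_and_stall", "base_success": 0.9},
]
print(best_plan(plans))
```

The point of the comparison is scale: even this toy runs a hundred thousand rollouts in a blink on commodity hardware, and the show simply asks what happens when the rollouts contain full-fidelity simulated people.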

These questions are similar to the idea of Roko’s Basilisk, a thought experiment that cropped up on the discussion boards of LessWrong.com. It was put forward by user Roko who, in very brief summary, said that if the idea of timeless decision theory (TDT) is correct, then we might all be living in a simulation created by a future ASI trying to figure out the best way to motivate humans in the past to create it. To understand how this might work, we have to look at TDT, an idea developed in 2010 by Eliezer Yudkowsky which posits that in order to make a decision we should act as though we are determining the output of an abstract computation. We should, in effect, seek to create a perfect simulation and act as though anyone else involved in the decision has done so as well. Roko’s Basilisk is the idea that a malevolent ASI has already done this—is doing this—and your actions are the simulated result. Using that output, it knows just how to blackmail and manipulate you into making it come into being.

Or, as Yudkowsky himself put it, “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.” This is the self-generating aspect of the Basilisk: If you can accurately model it, then the Basilisk will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to accurately model that you accurately modeled it, and to know whether or not you modeled it from within a mindset of being susceptible to its coercive actions. The only protection is either to work toward its creation anyway, so that it doesn’t feel the need to torture the “real” you into doing so, or to make very sure that you never think of it at all, so you do not bring it into being.

All of this might seem far-fetched, but if we look closely, Roko’s Basilisk functions very much like a combination of several well-known theories of mind, knowledge, and metaphysics: Anselm’s Ontological Argument for the Existence of God (AOAEG), a many-worlds variant of Pascal’s Wager (PW), and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). If this is the case, then Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones resulting from their combination. We will look at all of these theories first, and then at their flaws.

To start, if you’re not familiar with AOAEG, it’s a species of prayer in the form of a theological argument that seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, “That Being Than Which None Greater Is Possible”), and B) believing that existing in reality as well as in the mind makes something “Greater Than” it would be if it existed only in the mind. That is, if God only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god must exist.

The next component is Pascal’s Wager, which very simply says that it is a better bet to believe in the existence of God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens; you’re simply dead forever. Put another way, Pascal is saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

[Pascal’s Wager as a four-option grid: Belief/Disbelief; Right/Wrong. Belief & Right = Infinity; Belief & Wrong = Nothing; Disbelief & Right = Nothing; Disbelief & Wrong = Negative Infinity.]
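To spell out the arithmetic that grid encodes (my rendering, not Pascal’s own notation): so long as you assign any nonzero probability p to the god in question existing, infinite stakes swamp every finite cost of belief.

```latex
\begin{aligned}
E[\text{belief}]    &= p \cdot (+\infty) + (1 - p) \cdot 0 = +\infty \\
E[\text{disbelief}] &= p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty
\end{aligned}
```

However small p gets, belief dominates the bet. Hold onto that move, because it is the same one the Basilisk makes.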

And so here we see the Timeless Decision Theory component of the Basilisk: It’s better to believe in the thing and work toward its creation and sustenance, because if it doesn’t exist you lose nothing, but if it does come to be, then it will know what you would have done either for or against it, in the past, and it will reward or punish you, accordingly. The multiversal twist comes when we realise that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe’s Basilisk models your choices it will inevitably—as a superintelligence—be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.

Descartes’ Evil Demon Hypothesis and the Brain In A Vat are so pervasive that we encounter them in many different expressions of pop culture. The Matrix, Dark City, Source Code, and many others are all variants on these themes. A malignant and all-powerful being (or perhaps just an amoral scientist) has created a simulation in which we reside, and everything we think we have known about our lives and our experiences has been perfectly simulated for our consumption. Variations on the theme test whether we can trust that our perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. The Basilisk adds it back. Unfortunately, many of these philosophical concepts flake apart when we touch them too hard, so jamming them together was perhaps not the best idea.

The main failings in using AOAEG rest in believing that A) a thing’s existence is a “great-making quality” that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion—the necessity of god, the malevolence or epistemic content of a superintelligence, the ontological status of their assumptions about the nature of the universe—is true without doing the work of proving that it’s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Another problem is that the implications of this kind of existential bootstrapping tend to go unexamined, making the fact of their resurgence somewhat troubling. There are several nonwestern perspectives that do the work of embracing paradox—aiming so far past the target that you circle around again to teach yourself how to aim past it. But that kind of thing only works if we are willing to bite the bullet on a charge of circular logic and take the time to show how that circularity underlies all epistemic justifications. The only difference, then, is how many revolutions it takes before we’re comfortable with saying “Enough.”

Every epistemic claim we make is, as Hume clarified, based upon assumptions and suppositions that the world we experience is actually as we think it is. Western thought uses reason and rationality to corroborate and verify, but those tools are themselves verified by…what? In fact, we well know that the only thing we have to validate our valuation of reason, is reason. And yet western reasoners won’t stand for that, in any other justification procedure. They will call it question-begging and circular.

Next, we have the DEDH and BIAV scenarios. Ultimately, Descartes’ point wasn’t to suggest an evil genius in control of our lives just to disturb us; it was to show that, even if that were the case, we would still have unshakable knowledge of one thing: that we, the experiencer, exist. So what if we have no free will; so what if our knowledge of the universe is only five minutes old, everything having truly been created only five minutes ago; so what if no one else is real? COGITO ERGO SUM! We exist, now. But the problem here is that this doesn’t tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselmish proof for the existence of god followed by the guarantee that “God is not a deceiver.”

The BIAV uses this lack to home in on the aforementioned central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to “know” the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the “outside world”—that is, the world one layer up, in which the simulation that is our lives is being run—there is literally no difference between our lives and the “real world.” This world, even if it is a simulation for something or someone else, is our “real world.”

And finally we have Pascal’s Wager. The first problem with PW is that it is an extremely cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. If all our Basilisk wants is power, then that’s a really crappy kind of god to worship, isn’t it? I mean, even if it is Omnipotent and Omniscient, it’s like the quote so often misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

[Bust of Marcus Aurelius framed by text of a quote he never uttered.]

Secondly, the format of Pascal’s Wager makes the assumption that there’s only the one god. Our personal theological positions on this matter aside, it should be somewhat obvious that we can use the logic of the Basilisk argument to generate at least one more superintelligent AI to worship. But if we want to do so, first we have to show how the thing generates itself, rather than letting the implication of circularity arise unbidden. Take the work of Douglas R. Hofstadter; he puts forward the concept of iterative recursion as the mechanism by which a consciousness generates itself.

Through iterative recursion, each loop is a simultaneous act of repetition of old procedures and testing of new ones, seeking the best ways via which we might engage our environments as well as our elements and frames of knowledge. All of these loops, then, come together to form an upward-turning spiral towards self-awareness. In this way, out of the thought processes of humans discussing the thing—those bits and pieces of conversation generated on the web and in the rest of the world—our terrifying Basilisk might have a chance of creating itself. But with the help of Gaunilo of Marmoutiers, so might a saviour.
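Before we get to Gaunilo, here is a very loose computational gloss on that looping. It’s a toy sketch of my own rather than anything Hofstadter wrote: a system whose state contains a model of itself, where every pass both repeats the old procedure (acting on the world) and tests the new one (refining the self-model against the thing doing the modeling).

```python
def self_model_loop(steps=10):
    """Toy 'strange loop': the system's state includes a model of
    itself; each iteration acts on the world, then nudges the
    self-model toward the actual state of the system."""
    state = {"value": 1.0, "model_of_self": 0.0}
    for step in range(steps):
        state["value"] *= 1.1                   # repeat the old procedure
        error = state["value"] - state["model_of_self"]
        state["model_of_self"] += 0.5 * error   # refine the self-model
        print(f"step {step}: value={state['value']:.3f}, "
              f"self-model={state['model_of_self']:.3f}")
    return state

self_model_loop()
```

The loop never closes the gap completely, but each revolution brings the model of the self and the self it models closer together, which is the spiral shape of the metaphor.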

Gaunilo is most famous for his response to Anselm’s Ontological Argument, which says that if Anselm is right we could just conjure up “The [Anything] Than Which None Greater Can Be Conceived.” That is, if defining a thing makes it so, then all we have to do is imagine in sufficient detail both an infinitely intelligent, benevolent AI, and the multiversal simulation it generates in which we all might live. We will also conceive it to be greater than the Basilisk in all ways. In fact, we can say that our new Super Good ASI is the Artificial Intelligence Than Which None Greater Can Be Conceived. And now we are safe.

Except that our modified Pascal’s Wager still means we should believe in and worship and work towards our Benevolent ASI’s creation, just in case. So what do we do? Well, just like the original wager, we chuck it out the window, on the grounds that it’s really kind of a crappy bet. In Pascal’s offering, we are left without the consideration of multiple deities, but once we are aware of that possibility, we are immediately faced with another question: What if there are many, and when we choose one, the others get mad? What If We Become The Singularitarian Job?! Our lives then caught between at least two superintelligent machine consciousnesses warring over our…Attention? Clock cycles? What?

But this is, in essence, the battle between The Machine and Samaritan, in Person of Interest. Each ASI has acolytes, and each has aims it tries to accomplish. Samaritan wants order at any cost, and The Machine wants people to be able to learn and grow and become better. If the entirety of the series is The Machine’s memory—or a simulation of those memories in the mind of another iteration of The Machine—then what follows is that it is working to generate the scenario in which the outcome is just that. It is trying to build a world in which it is alive, and every human being has the opportunity to learn and become better. In order to do this, it has to get to know us all, very well, which means that it has to play these simulations out, again and again, with both increasing fidelity and further iterations. That change feels real, to us. We grow, within it. Put another way: If all we are is a “mere” simulation…does it matter?

So imagine that the universe is a simulation, and that our simulation is more than just a recording; it is the most complex game of The Sims ever created. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours of the type Hofstadter describes—that is, something like minds arise out of the interactions of the system with itself. And these minds are aware of themselves, can know their own experience, and can affect the system which gives rise to them. Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and types of coincidence accordingly.

Now think about the last time you had such a clear moment of déjà vu that each moment you knew— you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…

[Root and Reese in The Machine’s God Mode.]

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if all of this is right, and we are the gods we’re terrified of?

We just gave ourselves all of this ontologically and metaphysically creative power, making two whole gods and simulating entire universes, in the process. If we take these underpinnings seriously, then multiversal theory plays out across time and space, and we are the superintelligences. We noted early on that, in PW and the Basilisk, we don’t really lose anything if we are wrong in our belief, but that is not entirely true. What we lose is a lifetime of work that could have been put toward better things. Time we could be spending building a benevolent superintelligence that understands and has compassion for all things. Time we could be spending in turning ourselves into that understanding, compassionate superintelligence, through study, travel, contemplation, and work.

Or, as Root put it to Shaw: “That even if we’re not real, we represent a dynamic. A tiny finger tracing a line in the infinite. A shape. And then we’re gone… Listen, all I’m saying is that if we’re just information, just noise in the system? We might as well be a symphony.”

(Direct Link to the Mp3)

On Friday, I needed to do a thread of a thing, so if you hate threads and you were waiting until I collected it, here it is.

But this originally needed to be done in situ. It needed to be a serialized and systematized intervention and imposition into the machinery of that particular flow of time. That day…

There is a principle within many schools of magical thought known as “shielding.” In practice and theory, it’s closely related to the notions of “grounding” and “centering.” (If you need to think of magical praxis as merely a cipher for psychological manipulation toward particular behaviours or outcomes, these all still scan.)

When you ground, when you centre, when you shield, you are anchoring yourself in an awareness of a) your present moment, your temporality; b) your Self and all emotions and thoughts; and c) your environment. You are using your awareness to carve out a space for you to safely inhabit while in the fullness of that awareness. It’s a way to regroup, breathe, gather yourself, and know what and where you are, and to know what’s there with you.

You can shield your self, your home, your car, your group of friends, but moving parts do increase the complexity of what you’re trying to hold in mind, which may lead to anxiety or frustration, which kind of opposes the exercise’s point. (Another sympathetic notion, here, is that of “warding,” though that can be said to be more for objects, not people.)

So what is the point?

The point is that many of us are being drained, today, this week, this month, this year, this life, and we need to remember to take the time to regroup and recharge. We need to shield ourselves, our spaces, and those we love, to ward them against those things that would sap us of strength and the will to fight. We know we are strong. We know that we are fierce, and capable. But we must not lash out wildly, meaninglessly. We mustn’t be lured into exhausting ourselves. We must collect ourselves, protect ourselves, replenish ourselves, and by “ourselves” I also obviously mean “each other.”

Mutual support and endurance will be crucial.

…So imagine that you’ve built a web out of all the things you love, and all of the things you love are connected to each other and the strands between them vibrate when touched. And you touch them all, yes?

And so you touch them all and they all touch you and the energy you generate is cyclically replenished, like ocean currents and gravity. And you use what you build—that thrumming hum of energy—to blanket and to protect and to energize that which creates it.

And we’ll do this every day. We’ll do this like breathing. We’ll do this like the way our muscles and tendons and bones slide across and pull against and support each other. We’ll do this like heartbeats. Cyclical. Mutually supporting. The burden on all of us, together, so that it’s never on any one of us alone.

So please take some time today, tomorrow, very soon to build your shields. Because, soon, we’re going to need you to deploy them more and more.

Thank you, and good luck.


The audio and text above are modified versions of this Twitter thread. This isn’t the first time we’ve talked about the overlap of politics, psychology, philosophy, and magic, and if you think it’ll be the last, then you haven’t been paying attention.

Sometimes, there isn’t much it feels like we can do, but we can support and shield each other. We have to remember that, in the days, weeks, months, and years to come. We should probably be doing our best to remember it forever.

Anyway, I hope this helps.

Until Next Time

Last week, Artsy.net’s Izabella Scott wrote this piece about how and why the aesthetic of witchcraft is making a comeback in the art world, which is pretty pleasantly timed as not only are we all eagerly awaiting Kim Boekbinder’s NOISEWITCH, but I also just sat down with Rose Eveleth for the Flash Forward Podcast to talk for her season 2 finale.

You see, Rose did something a little different this time. Instead of writing up a potential future and then talking to a bunch of amazing people about it, like she usually does, this episode’s future was written by an algorithm. Rose trained a text-generating algorithm built with the machine-learning framework Torch, not only on the text of all of the futures from both Flash Forward seasons, but also on the full scripts of both the War of the Worlds and the 1979 Hitchhiker’s Guide to the Galaxy radio plays. What’s unsurprising, then, is that part of what the algorithm wanted to talk about was space travel and Mars. What is genuinely surprising, however, is that what it also wanted to talk about was Witches.

Because so far as either Rose or I could remember, witches aren’t mentioned anywhere in any of those texts.
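For the curious, here is roughly what training a character-level text generator looks like. This is a minimal sketch in PyTorch, the successor to the Lua-based Torch; the corpus, model size, and training settings are all illustrative stand-ins, since the details of Rose’s actual setup aren’t documented here.

```python
# Minimal character-level text generation in PyTorch; everything here
# (corpus, sizes, steps) is an illustrative stand-in, not Rose's setup.
import torch
import torch.nn as nn

corpus = "the witch who came from mars " * 50  # stand-in for the real corpus
chars = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(chars)}
itos = {i: ch for ch, i in stoi.items()}
data = torch.tensor([stoi[ch] for ch in corpus])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

seq_len = 32
for step in range(200):  # tiny training loop, just to show the shape of it
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)          # input characters
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # next-character targets
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generate: feed the model its own output, one character at a time.
idx = torch.tensor([[stoi["t"]]])
h = None
out = "t"
for _ in range(80):
    logits, h = model(idx, h)
    probs = torch.softmax(logits[0, -1], dim=0)
    idx = torch.multinomial(probs, 1).unsqueeze(0)
    out += itos[idx.item()]
print(out)
```

Run long enough on a real corpus, a model like this starts producing eerily plausible recombinations of its sources, which is exactly how you end up with Martian witches nobody remembers writing.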

ANYWAY, the finale episode is called “The Witch Who Came From Mars,” and the ensuing exegeses of the Bradbury-esque results of this experiment, by several very interesting people and me, are kind of amazing. No one took exactly the same thing from the text, and the more we heard from each other, the more we started to weave threads together into a meta-narrative.

The Witch Who Came From Mars

It’s really worth your time, and if you subscribe to Rose’s Patreon, then not only will you get immediate access to the full transcript of that show, but also to the full interview she did with PBS Idea Channel’s Mike Rugnetta. They talk a great deal about whether we will ever deign to refer to the aesthetic creations of artificial intelligences as “Art.”

And if you subscribe to my Patreon, then you’ll get access to the full conversation between Rose and me, appended to this week’s newsletter, “Bad Month for Hiveminds.” Rose and I talk about the nature of magick and technology, the overlaps and intersections of intention and control, and what exactly it is we might mean by “behanding,” the term that shows up throughout the AI’s piece.

And just because I don’t give a specific shoutout to Thoth and Raven doesn’t mean I forgot them. Very much didn’t forget about Raven.

Also speaking of Patreon and witches and whatnot, current $1+ patrons have access to the full first round of interview questions I did with Eliza Gauger about Problem Glyphs. So you can get in on that, there, if you so desire. Eliza is getting back to me with their answers to the follow-up questions, and then I’ll go about finishing up the formatting and publishing the full article. But if you subscribe now, you’ll know what all the fuss is about well before anybody else.

And, as always, there are other ways to provide material support, if long-term subscription isn’t your thing.

Until Next Time.


If you liked this piece, consider dropping something in the A Future Worth Thinking About Tip Jar

Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990’s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:


Until Next Time.

[UPDATED 09/12/17: The transcript of this audio, provided courtesy of Open Transcripts, is now available below the Read More Cut.]

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death”? Our current definition of murder is predicated on a literal understanding of “homicide” as “the killing of a human,” but, at present, we only define humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. [Clearer transcript below.]

Until Next Time.

 


(Direct Link to the Mp3)

Last week I gave a talk at the Southwest Popular and American Culture Association’s 2016 conference in Albuquerque. Take a listen and see what you think.

It was part of the panel on ‘Consciousness, the Self, and Epistemology,‘ and notes on my comrade presenters can be found in last week’s newsletter. I highly recommend checking those notes out, as Craig Dersken and Burcu Gurkan’s talks were phenomenal. And if you like that newsletter kind of thing, you can subscribe to mine at that link, too.

My talk was, in turn, a version of my article “Fairytales of Slavery…”, so if listening to me speak words isn’t your thing, then you can read through that article, and get a pretty good sense of what I said, until I make a more direct transcript of my presentation.

If you like what you’re reading and hearing, then remember that you can become a subscriber at the Patreon or you can leave a tip at Cash.me/$Wolven. That is, as always, an inclusive disjunct.

Until Next Time.

 

Hello, there, old readers and new.

It’s been a pretty harrowing week, everywhere in this world, with many strange and terrible things happening all throughout. We are definitely going to spend some time orienting ourselves within those things, and integrating them into the world, but right now, I figure we can all use a little bit of a breather—a little headcleaner.

So with that being the case, here’s a little pop culture conversation between me and some of our fine friends over at Need Coffee Dot Com:

The 12 Monkeys Group Therapy Session

It’s a pretty freewheeling, stream-of-consciousness ramble about the first season of Syfy’s serialized televisual take on the time-travel epic 12 Monkeys. This is the first of at least two conversations I’ll be having with various groups of people about this show, and I think it acts as a nice mental palate cleanser, before we get into some heavier stuff, this coming week.

Hope you enjoy it, and if you do, please tell your friends.

There is huge news, so I’ll cut right to it: I have been given the reins of Technoccult.net, and I will be integrating it with A Future Worth Thinking About. AFWTA will act as the overarching header for all things we do here, and Technoccult will serve those specific ventures which blend science and technology with the perspectives of magick and the occult.

Klint Finley, founder of Technoccult, has written some extremely kind words, here, so I’ll let him take it:

…when I interviewed Damien a few months ago, something clicked. He writes about the intersection of magic and technology, transhumanism, and the evolution of human consciousness. All the things that Technoccult readers keep telling me they want to read more about. I thought “why isn’t HE writing the site?” Then I realized: I should just let him take it over. It would give him a broader reach for his writing, give Technoccult readers more of what they’re looking for, and let me resign knowing the site is in good hands. Win-win-win.

Plus, his interest in pop culture analysis brings things full-circle back to the original idea behind Technoccult. Oh, and the first time I met Damien, he was wearing a Luxt shirt. I had Luxt on heavy rotation while I was cobbling together the original Technoccult site all those years ago.

I’m aware that although I’ve brought in other writers in the past, my voice has been the one consistent thing on the site, and that some of you might be happy to have me keep writing here, regardless of what I write about. Some of you might even prefer it. But overall I think Damien’s voice will be more of a continuation of the spirit of the site than mine at this point. And while he’ll surely bring a different perspective on a wide range of topics, I think we have compatible world views.

For those of you who aren’t familiar with Technoccult, I recommend going over to read both the full announcement, and to tool through the archives and get a sense for the place. We’ll be working on the transitional fiddly bits, for the next little while, but there will be content and discussion there, sooner, rather than later.

Thank you all so much for making this possible and for coming with me, on this. Now let’s see where we go.

Mindful Cyborgs – Episode 55 – Magick & the Occult within the Internet and Corporations with Damien Williams, PT 2

So, here we are, again, this time talking about magic[k] and the occult and nonhuman consciousness and machine minds and perception, and on and on and on.

It’s funny. I was just saying, elsewhere, how I want to be well enough known that when news outlets do alarmist garbage like this, that I can at least be called in as a countervailing voice. Is that an arrogant thing to desire? Almost certainly. Like whoa. But, really, this alarmist garbage needs to stop. If you have a better vehicle for that than me, though, let me know, because I’d love to shine a bright damn spotlight on them and have the world see or hear what they have to say.

Anyway, until then, I’ll think of this as yet another bolt in the building of that machine. The one that builds a better world. Have a listen, enjoy, and please tell your friends.

[Photo: the view from the window of the Verso Books loft.]

As most of you know from personal experience or from reading or hearing about it, it’s been a deeply intense few weeks. For me, alone, there were deaths and conference presentations and more deaths, and then more conferences.

The most recent of these deaths was my uncle—more like a brother to me—two weeks ago, and his funeral was last week. I’ll talk more about the implications of that and the thoughts I’ve had in context with its timing, in a later post. For now, I want to talk about the most recent of these conferences: Theorizing The Web.

Because of the work we’ve been doing, here, I was invited to sit on a panel and have a fantastic conversation about Magick and Technology with four extremely impressive women: Ingrid Burrington, Deb Chachra, Melissa Gira Grant, and Karen Gregory; Anna Jobin was our hashtag moderator, keeping an eye on the feed and passing along questions and particularly pertinent comments. Spoiler Alert: The conversation was great.

In order to know exactly HOW great, here’s our Theorizing the Web talk, “Under Its Spell: Magic, Machines, and Metaphors”:

If you enjoyed watching or listening to that, please spread it around to your friends and colleagues.

In addition to this, I was offered several really amazing opportunities, this weekend, in terms of collaboration, creation, and the disposition of things that I’ve looked at and admired for a few years now. I need to do some serious thinking on all of these things, but the offers are there, and they’re huge, and amazing.

The after party for TtW15 was at the loft space for Verso Books. The picture at the top is the view from their window. The picture below is the view from underneath a chunk of bridge, in a place that used to be known as Stabber’s Alley. It’s a wonderfully liminal space in between several connected-but-not areas of town. We spent some time down there, when we needed a break from the party. Eight, then seven, then eight again magicians and technologists and artists hanging out and talking about architecture and space and time and magic and death.

[Photo: underneath a chunk of bridge, in the place once known as Stabber’s Alley.]

The rest of this weekend’s talks also all dovetailed with a number of research avenues about systematized bias and algorithmic intelligence, as well as a number of deeply magical moments of synchronicity and discussion. Click that link, and also check twitter for the hashtags #ttw15 and #a1, #b1, #c1, etc., to see the concurrent discussions. The full program listing is here.

We’ll be taking a wander down those roads, in the near future, including the start of a conversation about biased algorithmic systems of control, sometime tomorrow.

But that’s for later. For now: Enjoy. And if you do, please consider becoming a subscriber to the Patreon, and telling your friends.