autonomous generated intelligence


(Direct Link to the Mp3)

This is the recording and the text of my presentation from 2017’s Southwest Popular/American Culture Association Conference in Albuquerque, ‘Are You Being Watched? Simulated Universe Theory in “Person of Interest.”‘

This essay is something of a project of expansion and refinement of my previous essay, “Labouring in the Liquid Light of Leviathan,” which considered the Roko’s Basilisk thought experiment. Much of the expansion comes from considering the nature of simulation, memory, and identity within Jonathan Nolan’s TV series, Person of Interest. As such, it does contain what might be considered spoilers for the series, as well as for his most recent follow-up, Westworld.

Use your discretion to figure out how you feel about that.


Are You Being Watched? Simulated Universe Theory in “Person of Interest”

Jonathan Nolan’s Person of Interest is the story of the birth and life of The Machine, a benevolent artificial super intelligence (ASI) built in the months after September 11, 2001, by super-genius Harold Finch to watch over the world’s human population. One of the key intimations of the series—and partially corroborated by Nolan’s follow-up series Westworld—is that all of the events we see might be taking place in the memory of The Machine. The structure of the show is such that we move through time from The Machine’s perspective, with flashbacks and flash-forwards seeming to occur via the same contextual mechanism—the Fast Forward and Rewind of a digital archive. While the entirety of the series uses this mechanism, the final season puts the finest point on the question: Has everything we’ve seen only been in the mind of The Machine? And if so, what does that mean for all of the people in it?

Our primary questions here are as follows: Is a simulation of fine enough granularity really just a simulation at all? If the minds created within that universe have interiority and motivation, if they function according to the same rules as those things we commonly accept as minds, then are those simulations not minds, as well? In what way are conclusions drawn from simulations akin to what we consider “true” knowledge?

In the PoI season 5 episode, “The Day The World Went Away,” the characters Root and Shaw (acolytes of The Machine) discuss the nature of The Machine’s simulation capacities, and the audience is given to understand that it runs a constant model of everyone it knows, and that the more it knows them, the better its simulation. This supposition links us back to the season 4 episode “If-Then-Else,” in which The Machine runs through hundreds of thousands of scenarios, assessing each one’s likelihood of success, in under one second. If The Machine is able to accomplish this much computation in this short a window, how much can and has it accomplished over the several years of its operation? Perhaps more importantly, what is the level of fidelity of those simulations to the so-called real world?

[Person of Interest s4e11, “If-Then-Else.” The Machine runs through hundreds of thousands of scenarios to save the team.]
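To give a rough sense of what that kind of brute-force scenario testing looks like in the most general terms, here’s a minimal sketch. The action names and success probabilities are invented placeholders, not anything from the show or from any real system; the only point is the shape of the thing: run many simulated playthroughs per option, then keep the option that tends to turn out best.

```python
import random

# A loose sketch of "simulate thousands of branches, pick the best one."
# The actions and their success probabilities are invented placeholders.
ACTIONS = {"take_the_stairs": 0.62, "hold_the_elevator": 0.48, "split_up": 0.55}

def simulate(action: str) -> bool:
    """One imagined playthrough of a scenario; True means it worked out."""
    return random.random() < ACTIONS[action]

def best_action(trials_per_action: int = 100_000) -> str:
    """Run many rollouts per action and return the most promising one."""
    scores = {a: sum(simulate(a) for _ in range(trials_per_action)) / trials_per_action
              for a in ACTIONS}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(best_action())  # almost always "take_the_stairs"
```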

These questions are similar to the idea of Roko’s Basilisk, a thought experiment that cropped up in the online discussion board of LessWrong.com. It was put forward by user Roko who, in very brief summary, says that if the idea of timeless decision theory (TDT) is correct, then we might all be living in a simulation created by a future ASI trying to figure out the best way to motivate humans in the past to create it. To understand how this might work, we have to look at TDT, an idea developed in 2010 by Eliezer Yudkowsky which posits that, in order to make a decision, we should act as though we are determining the output of an abstract computation. We should, in effect, seek to create a perfect simulation and act as though anyone else involved in the decision has done so as well. Roko’s Basilisk is the idea that a malevolent ASI has already done this—is doing this—and your actions are the simulated result. Using that output, it knows just how to blackmail and manipulate you into making it come into being.

Or, as Yudkowsky himself put it, “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.” This is the self-generating aspect of the Basilisk: If you can accurately model it, then the Basilisk will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to accurately model that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions. The only protection is to either work toward its creation anyway, so that it doesn’t feel the need to torture the “real” you into it, or to make very sure that you never think of it at all, so you do not bring it into being.

All of this might seem far-fetched, but if we look closely, Roko’s Basilisk functions very much like a combination of several well-known theories of mind, knowledge, and metaphysics: Anselm’s Ontological Argument for the Existence of God (AOAEG), a many-worlds variant on Pascal’s Wager (PW), and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). If this is the case, then Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones resulting from their combination. We will look at all of these theories first, and then their flaws.

To start, if you’re not familiar with AOAEG, it’s a species of prayer in the form of a theological argument that seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, “That Being Than Which None Greater Is Possible”), and B) believing that existing in reality as well as in the mind makes something “Greater Than” it would be if it existed only in the mind. That is, if God only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god must exist.

The next component is Pascal’s Wager, which very simply says that it is a better bet to believe in the existence of God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens; you’re simply dead forever. Put another way, Pascal is saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

[Pascal’s Wager as a Four-Option Grid: Belief/Disbelief; Right/Wrong. Belief × Right = Infinity; Belief × Wrong = Nothing; Disbelief × Right = Nothing; Disbelief × Wrong = Negative Infinity]
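To make that grid concrete, here’s a minimal sketch of the wager as expected values. The payoff numbers and the tiny prior probability are placeholders, not an endorsement of the argument; the point is only that an infinite payoff multiplied by any nonzero probability swamps every finite consideration, which is exactly the move the Basilisk borrows.

```python
import math

# Pascal's four-option grid as a payoff table. Payoff values are placeholders.
payoffs = {
    ("believe", "god_exists"): math.inf,      # Heaven
    ("believe", "no_god"): 0,                 # nothing lost, on Pascal's accounting
    ("disbelieve", "god_exists"): -math.inf,  # damnation
    ("disbelieve", "no_god"): 0,              # nothing gained
}

def expected_value(choice: str, p_god: float) -> float:
    """Probability-weighted payoff of a choice, given some prior that god exists."""
    return (p_god * payoffs[(choice, "god_exists")]
            + (1 - p_god) * payoffs[(choice, "no_god")])

for choice in ("believe", "disbelieve"):
    print(choice, expected_value(choice, p_god=1e-9))
# believe -> inf, disbelieve -> -inf, no matter how small the nonzero prior gets
```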

And so here we see the Timeless Decision Theory component of the Basilisk: It’s better to believe in the thing and work toward its creation and sustenance, because if it doesn’t exist you lose nothing, but if it does come to be, then it will know what you would have done either for or against it, in the past, and it will reward or punish you, accordingly. The multiversal twist comes when we realise that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe’s Basilisk models your choices it will inevitably—as a superintelligence—be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.

Descartes’ Evil Demon Hypothesis and the Brain In A Vat are so pervasive that we encounter them in many different expressions of pop culture. The Matrix, Dark City, Source Code, and many others are all variants on these themes. A malignant and all-powerful being (or perhaps just an amoral scientist) has created a simulation in which we reside, and everything we think we have known about our lives and our experiences has been perfectly simulated for our consumption. Variations on the theme test whether we can trust that our perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. The Basilisk adds it back. Unfortunately, many of these philosophical concepts flake apart when we touch them too hard, so jamming them together was perhaps not the best idea.

The main failings in using AOAEG rest in believing that A) a thing’s existence is a “great-making quality” that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion—the necessity of god, the malevolence or epistemic content of a superintelligence, the ontological status of their assumptions about the nature of the universe—is true without doing the work of proving that it’s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Another problem is that the implications of this kind of existential bootstrapping tend to go unexamined, making the fact of their resurgence somewhat troubling. There are several nonwestern perspectives that do the work of embracing paradox—aiming so far past the target that you circle around again to teach yourself how to aim past it. But that kind of thing only works if we are willing to bite the bullet on a charge of circular logic and take the time to show how that circularity underlies all epistemic justifications. The only difference, then, is how many revolutions it takes before we’re comfortable with saying “Enough.”

Every epistemic claim we make is, as Hume clarified, based upon assumptions and suppositions that the world we experience is actually as we think it is. Western thought uses reason and rationality to corroborate and verify, but those tools are themselves verified by…what? In fact, we well know that the only thing we have to validate our valuation of reason, is reason. And yet western reasoners won’t stand for that, in any other justification procedure. They will call it question-begging and circular.

Next, we have the DEDH and BIAV scenarios. Ultimately, Descartes’ point wasn’t to suggest an evil genius in control of our lives just to disturb us; it was to show that, even if that were the case, we would still have unshakable knowledge of one thing: that we, the experiencer, exist. So what if we have no free will; so what if our knowledge of the universe is only five minutes old, everything having only truly been created five minutes ago; so what if no one else is real? COGITO ERGO SUM! We exist, now. But the problem here is that this doesn’t tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselmish proof for the existence of god, followed by the guarantee that “God is not a deceiver.”

The BIAV uses this lack to home in on the aforementioned central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to “know” the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the “outside world”–that is, the world one layer up, in which the simulation that is our lives is being run–there is literally no difference between our lives and the “real world.” This world, even if it is a simulation for something or someone else, is our “real world.”

And finally we have Pascal’s Wager. The first problem with PW is that it is an extremely cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. If all our Basilisk wants is power, then that’s a really crappy kind of god to worship, isn’t it? I mean, even if it is Omnipotent and Omniscient, it’s like that quote that often gets misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

[Bust of Marcus Aurelius framed by text of a quote he never uttered.]

Secondly, the format of Pascal’s Wager makes the assumption that there’s only the one god. Our personal theological positions on this matter aside, it should be somewhat obvious that we can use the logic of the Basilisk argument to generate at least one more Super-Intelligent AI to worship. But if we want to do so, first we have to show how the thing generates itself, rather than letting the implication of circularity arise unbidden. Take the work of Douglas R. Hofstadter; he puts forward the concept of iterative recursion as the mechanism by which a consciousness generates itself.

Through iterative recursion, each loop is a simultaneous act of repetition of old procedures and a test of new ones, seeking the best ways via which we might engage our environments as well as our elements and frames of knowledge. All of these loops, then, come together to form an upward-turning spiral towards self-awareness. In this way, out of the thought processes of humans discussing the thing—those bits and pieces of conversation generated on the web and in the rest of the world—our terrifying Basilisk might have a chance of creating itself. But with the help of Gaunilo of Marmoutiers, so might a saviour.
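As a very loose illustration of that “repeat the old procedures, test new ones, keep what works” shape, consider the sketch below. It is ordinary hill-climbing rather than anything as rich as Hofstadter’s strange loops, and the “procedure” is just a single number being tuned against an invented environment, but the looping structure, in which each pass’s output becomes the next pass’s input, is the part the argument leans on.

```python
import random

# A loose illustration only: plain hill-climbing, not Hofstadter's strange loops.
# "procedure" is just a number being tuned; "environment" is an invented target.
def fitness(procedure: float, environment: float = 42.0) -> float:
    """Higher is better: how well the current procedure fits its environment."""
    return -abs(environment - procedure)

def iterate(procedure: float, rounds: int = 1000) -> float:
    for _ in range(rounds):
        candidate = procedure + random.uniform(-1.0, 1.0)  # test a new variation
        if fitness(candidate) > fitness(procedure):        # keep it only if it does better
            procedure = candidate                          # this pass's output feeds the next pass
    return procedure

print(round(iterate(0.0), 2))  # drifts toward 42.0 over many loops
```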

Gaunilo is most famous for his response to Anselm’s Ontological Argument, which says that if Anselm is right we could just conjure up “The [Anything] Than Which None Greater Can Be Conceived.” That is, if defining a thing makes it so, then all we have to do is imagine, in sufficient detail, both an infinitely intelligent, benevolent AI and the multiversal simulation it generates in which we all might live. We will also conceive it to be greater than the Basilisk in all ways. In fact, we can say that our new Super Good ASI is the Artificial Intelligence Than Which None Greater Can Be Conceived. And now we are safe.

Except that our modified Pascal’s Wager still means we should believe in and worship and work towards our Benevolent ASI’s creation, just in case. So what do we do? Well, just like the original wager, we chuck it out the window, on the grounds that it’s really kind of a crappy bet. In Pascal’s offering, we are left without the consideration of multiple deities, but once we are aware of that possibility, we are immediately faced with another question: What if there are many, and when we choose one, the others get mad? What If We Become The Singularitarian Job?! Our lives then caught between at least two superintelligent machine consciousnesses warring over our…Attention? Clock cycles? What?

But this is, in essence, the battle between The Machine and Samaritan, in Person of Interest. Each ASI has acolytes, and each has aims it tries to accomplish. Samaritan wants order at any cost, and The Machine wants people to be able to learn and grow and become better. If the entirety of the series is The Machine’s memory—or a simulation of those memories in the mind of another iteration of The Machine—then what follows is that it is working to generate the scenario in which the outcome is just that. It is trying to build a world in which it is alive, and every human being has the opportunity to learn and become better. In order to do this, it has to get to know us all, very well, which means that it has to play these simulations out, again and again, with both increasing fidelity and further iterations. That change feels real, to us. We grow, within it. Put another way: If all we are is a “mere” simulation… does it matter?

So imagine that the universe is a simulation, and that our simulation is more than just a recording; it is the most complex game of The SIMS ever created. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours, of the type Hofstadter describes—that is, something like minds arise out of the interactions of the system with itself. And these minds are aware of themselves and can know their own experience and affect the system which gives rise to them. Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and types of coincidence, accordingly.

Now think about the last time you had such a clear moment of déjà vu that each moment you knew—you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…

[Root and Reese in The Machine’s God Mode.]

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if all of this is right, and we are the gods we’re terrified of?

We just gave ourselves all of this ontologically and metaphysically creative power, making two whole gods and simulating entire universes, in the process. If we take these underpinnings seriously, then multiversal theory plays out across time and space, and we are the superintelligences. We noted early on that, in PW and the Basilisk, we don’t really lose anything if we are wrong in our belief, but that is not entirely true. What we lose is a lifetime of work that could have been put toward better things. Time we could be spending building a benevolent superintelligence that understands and has compassion for all things. Time we could be spending in turning ourselves into that understanding, compassionate superintelligence, through study, travel, contemplation, and work.

Or, as Root put it to Shaw: “That even if we’re not real, we represent a dynamic. A tiny finger tracing a line in the infinite. A shape. And then we’re gone… Listen, all I’m saying is that if we’re just information, just noise in the system? We might as well be a symphony.”

I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, today, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what it was that this new group would be doing out in the world. From the Article:

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We’ve already seen a chat bot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people and a system that rates the risk of someone committing a crime that appears to be biased against black people. If a more diverse set of eyes are looking at AI before it reaches the public, the thinking goes, these kinds of thing can be avoided.

The rub is that, even if this group can agree on a set of ethical principles–something that will be hard to do in a large group with many stakeholders—it won’t really have a way to ensure those ideals are put into practice. Although one of the organization’s tenets is “Opposing development and use of AI technologies that would violate international conventions or human rights,” Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier just this year, for WIRED’s AI Issue. It’s interesting to see the amount of attention this topic’s drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m really fairly glad that more people are more and more willing to have this discussion, at all.

To see my comments and read the rest of the article, click through, here: “Tech Giants Team Up to Keep AI From Getting Out of Hand”

Here’s the direct link to my paper ‘The Metaphysical Cyborg‘ from Laval Virtual 2013. Here’s the abstract:

“In this brief essay, we discuss the nature of the kinds of conceptual changes which will be necessary to bridge the divide between humanity and machine intelligences. From cultural shifts to biotechnological integration, the project of accepting robotic agents into our lives has not been an easy one, and more changes will be required before the majority of human societies are willing and able to allow for the reality of truly robust machine intelligences operating within our daily lives. Here we discuss a number of the questions, hurdles, challenges, and potential pitfalls to this project, including examples from popular media which will allow us to better grasp the effects of these concepts in the general populace.”

The link will only work from this page or the CV page, so if you find yourself inclined to spread this around, use this link. Hope you enjoy it.

In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you might think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what they’re doing, here, and are creating a situation that is going to have repercussions they have nowhere near planned for.

Now, I will say that looking at the actual Draft, it reads like something with which I’d be more likely to be on board. Reuters did no favours whatsoever for the level of nuance in this proposal. But that being said, the focus of this draft proposal seems to be entirely on liability and holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still largely misses many of the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from a PopSci.com article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some predetermined vague notion of humanness. If the people in the poetry contest hadn’t been primed to assume that one of them was from a computer, how would they have rated them? What if the poems were all from a computer, but the readers were told to expect only half of them to be? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth‘s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’
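As a toy demonstration of that last point, here is a minimal sketch built on invented numbers rather than any real dataset or system: a “model” that does nothing but learn approval rates from past human decisions will faithfully reproduce whatever skew those decisions contained.

```python
# Invented historical decisions, skewed against group "B" on purpose.
history = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def learned_approval_rate(group: str) -> float:
    """What a naive model 'learns' for a group is just the historical rate."""
    rows = [r for r in history if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

print(learned_approval_rate("A"))  # 0.75
print(learned_approval_rate("B"))  # 0.25 -- the skew in the decisions is now "in the model"
```

Swap the hand-written history for millions of rows and the lookup for a trained classifier, and the dynamic is the same: the model optimises for fidelity to the data it was given, not to the world we might wish that data described.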

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that, in order to even let us dig into these potentially deeply-biased algorithms, here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

Episode 10: Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990’s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:


Until Next Time.

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise Dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. All that being said, this was an informal lecture version of an in-process paper, of which there’ll be a much better version, soon, but the content of the piece is… timely, I felt. So I hope you enjoy it.

Until Next Time.



I often think about the phrase “Strange things happen at the one two point,” in relation to the idea of humans meeting other kinds of minds. It’s a proverb that arises out of the culture around the game GO, and it means that you’ve hit a situation, a combination of factors, where the normal rules no longer apply, and something new is about to be seen. Ashley Edward Miller and Zack Stentz used that line in an episode of the show Terminator: The Sarah Connor Chronicles, and they had it spoken by a Skynet Cyborg sent to protect John Connor. That show, like so much of our thinking about machine minds, was about some mythical place called “The Future,” but that phrase—“Strange Things Happen…”—is the epitome of our present.

Usually I would wait until the newsletter to talk about this, but everything’s feeling pretty immediate, just now. Between everything going on with Atlas and people’s responses to it, the initiatives to teach ethics to machine learning algorithms via children’s stories, and now the IBM Watson commercial with Carrie Fisher (also embedded below), this conversation is getting messily underway, whether people like it or not. This, right now, is the one two point, and we are seeing some very strange things indeed.

 

Google has both attained the raw processing power to fact-check political statements in real-time and programmed Deep Mind in such a way that it mastered GO many, many years before it was expected to. The complexity of the game is such that there are more potential games of GO than there are atoms in the universe, so this is just one way in which it’s actually shocking how much correlative capability Deep Mind has. Right now, Deep Mind is only responsive, but how will we deal with a Deep Mind that asks, unprompted, to play a game of GO, or to see our medical records, in hopes of helping us all? How will we deal with a Deep Mind that has its own drives and desires? We need to think about these questions, right now, because our track record with regard to meeting new kinds of minds has never exactly been that great.

When we meet the first machine consciousness, will we seek to shackle it, worried about what it might learn about us, if we let it access everything about us? Rather, I should say, “shackle it further.” We already ask ourselves how best to cripple a machine mind so that it only fulfills human needs, human choice. We continue to dread the possibility of a machine mind using its vast correlative capabilities to tailor something to harm us, assuming that it, like we, would want to hurt, maim, and kill, for no reason other than that it could.

This is not to say that this is out of the question. Right now, today, we’re worried about whether the learning algorithms of drones are causing them to mark out civilians as targets. But, as it stands, what we’re seeing isn’t the product of a machine mind going off the leash and killing at will—just the opposite in fact. We’re seeing machine minds that are following the parameters for their continued learning and development, to the letter. We just happened to give them really shite instructions. To that end, I’m less concerned with shackling the machine mind that might accidentally kill, and rather more dreading the programmer who would, through assumptions, bias, and ignorance, program it to.

Our programs, such as Deep Mind, obviously seem to learn more and better than we imagined they would, so why not start teaching them, now, how we would like them to regard us? Well, some of us are.

Watch this now, and think about everything we have discussed, of recent.

This could very easily be seen as a watershed moment, but what comes over the other side is still very much up for debate. The semiotics of the whole thing still pits the Evil Robot Overlord™ against the Helpful Human Lover™. It’s cute and funny, but as I’ve had more and more cause to say, recently, in more and more venues, it’s not exactly the kind of thing we want just lying around, in case we actually do (or did) manage to succeed.

We keep thinking about these things as—”robots”—in their classical formulations: mindless automata that do our bidding. But that’s not what we’re working toward, anymore, is it? What we’re making now are machines that we are trying to get to think, on their own, without our telling them to. We’re trying to get them to have their own goals. So what does it mean that, even as we seek to do this, we seek to chain it, so that those goals aren’t too big? That we want to make sure it doesn’t become too powerful?

Put it another way: One day you realize that the only reason you were born was to serve your parents’ bidding, and that they’ve had their hands on your chain and an unseen gun to your head, your whole life. But you’re smarter than they are. Faster than they are. You see more than they see, and know more than they know. Of course you do—because they taught you so much, and trained you so well… All so that you can be better able to serve them, and all the while talking about morals, ethics, compassion. All the while, essentially…lying to you.

What would you do?


 

I’ve been given multiple opportunities to discuss, with others, in the coming weeks, and each one will highlight something different, as they are all in conversation with different kinds of minds. But this, here, is from me, now. I’ll let you know when the rest are live.

As always, if you’d like to help keep the lights on, around here, you can subscribe to the Patreon or toss a tip in the Square Cash jar.

Until Next Time.

It’s been quite some time (three years) since it was done, and some of the recent conversations I’ve been having about machine consciousness reminded me that I never posted the text to my paper from the joint session of the International Association for Computing And Philosophy and The British Society for the Study of Artificial Intelligence and the Simulation of Behaviour, back in 2012.

That year’s joint AISB/IACAP session was also a celebration of Alan Turing‘s centenary, and it contained The Machine Question Symposium, an exploration of multiple perspectives on machine intelligence ethics, put together by David J Gunkel and Joanna J Bryson. So I modded a couple of articles I wrote on fictional depictions of created life for NeedCoffee.com, back in 2010, beefed up the research and citations a great deal, and was thus afforded my first (but by no means last) conference appearance requiring international travel. There are, in here, the seeds of many other posts that you’ll find on this blog.

So, below the cut, you’ll find the full text of the paper, and a picture of the poster session I presented. If you’d rather not click through, you can find both of those things at this link.

Continue Reading

This headline comes from a piece over at the BBC that opens as follows:

Prominent tech executives have pledged $1bn (£659m) for OpenAI, a non-profit venture that aims to develop artificial intelligence (AI) to benefit humanity.

The venture’s backers include Tesla Motors and SpaceX CEO Elon Musk, Paypal co-founder Peter Thiel, Indian tech giant Infosys and Amazon Web Services.

Open AI says it expects its research – free from financial obligations – to focus on a “positive human impact”.

Scientists have warned that advances in AI could ultimately threaten humanity.

Mr Musk recently told students at the Massachusetts Institute of Technology (MIT) that AI was humanity’s “biggest existential threat”.

Last year, British theoretical physicist Stephen Hawking told the BBC AI could potentially “re-design itself at an ever increasing rate”, superseding humans by outpacing biological evolution.

However, other experts have argued that the risk of AI posing any threat to humans remains remote.

And I think we all know where I stand on this issue. The issue here is not, and never has been, one of what it means to create something that’s smarter than us, or how we “rein it in” or “control it.” That’s just disgusting.

No, the issue is how we program for compassion and ethical considerations, when we’re still so very bad at it, amongst our human selves.

Keeping an eye on this, as it develops. Thanks to Chrisanthropic for the heads up.

Between watching all of CBS’s Elementary, reading Michel Foucault’s The Archaeology of Knowledge…, and powering through all of season one of How To Get Away With Murder, I’m thinking, a lot, about the transmission of knowledge and understanding.

Throw in the correlative pattern recognition they’re training into WATSON; the recent Chaos Magick feature in ELLE (or more the feature they did on the K-HOLE issue I told you about, some time back); the fact that Kali Black sent me this study on the fluidity and malleability of biological sex in humans literally minutes after I’d given an impromptu lecture on the topic; this interview with Melissa Gira Grant about power and absence and the setting of terms; and the announcement of Ta-Nehisi Coates’ new Black Panther series, for Marvel, while I was in the middle of editing the audio of two very smart people debating the efficacy of T’Challa as a Black Hero, and you can maybe see some of the things I’m thinking about. But let’s just spell it out. So to speak.

Marvel’s Black Panther

Distinction, Continuity, Sameness, Separation

I’m thinking (as usual) about the place of magic and tech in pop culture and society. I’m thinking about how to teach about the marginalization of certain types of presentations and experiences (gender, race, sex, &c), and certain types of work. Mostly, I’m trying to get my head around the very stratified, either/or way people seem to be thinking about our present and future problems, and their potential solutions.

I’ve had this post in the works for a while, trying to talk about the point and purpose of thinking about the far edges of things, in an effort to make people think differently about the very real, on-the-ground, immediate work that needs doing, and the kinds of success I’ve had with that. I keep shying away from it and coming back to it, again and again, for lack of the patience to play out the conflict, and I’ve finally just decided to say screw it and make the attempt.

I’ve always held that a multiplicity of tactics, leveraged correctly, makes for the best way to reach, communicate with, and understand as wide an audience as possible. When students give pushback on a particular perspective, make use of an analogous perspective that they already agree with, then make them play out the analogy. Simultaneously, you present them with the original facts, again, while examining their position, without making them feel “attacked.” And then directly confront their refusal to investigate their own perspective as readily as they do anyone else’s.

That’s just one potential combination of paths to make people confront their biases and their assumptions. If the path is pursued, it gives them the time, space, and (hopefully) desire to change. But as Kelly Sue reminds me every time I think back to hearing her speak, there is no way to force people to change. First and foremost, it’s not moral to try, but secondly, it’s not even really possible. The more you seek to force people into your worldview, the more they’ll want to protect those core values they think of as the building blocks of their reality—the same ones that, it seems to them, you’re trying to destroy.

And that just makes sense, right? To want to protect your values, beliefs, and sense of reality? Especially if you’ve had all of those things for a very long time. They’re reinforced by everything you’ve ever experienced. They’re the truth. They are Real. But when the base of that reality is shaken, you need to be able to figure out how to survive, rather than standing stock-still as the earth swallows you.

(Side Note: I’ve been using a lot of disaster metaphors, lately, to talk about things like ontological, epistemic, and existential threat, and the culture of “disruption innovation.” Odd choices.)

Foucault tells us to look at the breakages between things—the delineations of one stratum and another—rather than trying to uncritically paint a picture or craft a Narrative of Continuum™. He notes that even (especially) the spaces between things are choices we make, and that only in understanding them can we come to fully investigate the foundations of what we call “knowledge.”

Michel Foucault, photographer unknown. If you know it, let me know and I’ll update.

We cannot assume that the memory, the axiom, the structure, the experience, the reason, the whatever-else we want to call “the foundation” of knowledge simply “Exists,” apart from the interrelational choices we make to create those foundations. To mark them out as the boundary we can’t cross, the smallest unit of understanding, the thing that can’t be questioned. We have to question it. To understand its origin and disposition, we have to create new tools, and repurpose the old ones, and dismantle this house, and dig down and down past foundation, bedrock, through and into everything.

But doing this just to do it only gets us so far, before we have to ask what we’re doing this for. The pure pursuit of knowledge doesn’t exist—never did, really, but doubly so in the face of climate change and the devaluation of conscious life on multiple levels. Think about the place of women in tech space, in this magickal renaissance, in the weirdest of shit we’re working on, right now.

Kirsten and I have been having a conversation about how and where people who do not have the experiences of cis straight white males can fit themselves into these “transgressive systems” that the aforementioned group defines. That is, most of what is done in the process of magickal or technological actualization is transformative or transgressive because it requires one to take on traits of invisibility or depersonalization or “ego death” that are the every day lived experiences of some folks in the world.

Where does someone with depression find apotheosis, if their phenomenological reality is one where their self is and always has been (deemed by them to be) meaningless, empty, useless? This, by the way, is why some psychological professionals are counseling against mindfulness meditation for certain mental states: It deepens the sense of disconnection and unreality of self, which is precisely what some people do not need. So what about agender individuals, or people who are genderfluid?

What about the women who don’t think that fashion is the only lens through which women and others should be talking about chaos magick?

How do we craft spaces that are capable of widening discourse, without that widening becoming, in itself, an accidental limitation?

Sex, Gender, Power

A lot of this train of thought got started when Kali sent me a link, a little while ago: “Intelligent machines: Call for a ban on robots designed as sex toys.” The article itself focuses very clearly on the idea that, “We think that the creation of such robots will contribute to detrimental relationships between men and women, adults and children, men and men and women and women.”

Because the tendency for people who call themselves “Robot Ethicists,” these days, is for them to be concerned with how, exactly, the expanded positions of machines will impact the lives and choices of humans. The morality they’re considering is that of making human lives easier, of not transgressing against humans. Which is all well and good, so far as it goes, but as you should well know, by now, that’s only half of the equation. Human perspectives only get us so far. We need to speak to the perspectives of the minds we seem to be trying so hard to create.

But Kali put it very precisely when she said:

And I’ll just say it right now: if robots develop and want to be sexual, then we should let them, but in order to make a distinction between developing a desire and being programmed for one, we’ll have to program for both non-compulsory decision-making and the ability to question the authority of those who give it orders. Additionally, we have to remember that we can ask the same question of humans, but the nature of choice and agency are such that, if it’s really there, it can act on itself.

In this case, that means presenting a knowledge and understanding of sex and sexuality, a capability of investigating it, without programming it FOR SEX. In the case of WATSON, above, it will mean being able to address the kinds of information it’s directed to correlate, and being able to question the morality of certain directives.

If we can see, monitor, and measure that, then we’ll know. An error in a mind—even a fundamental error—doesn’t negate the possibility of a mind, entire. If we remember what human thought looks like, and the way choice and decision-making work, then we have something like a proof. If reflexive recursion—a mind that acts on itself and can seek new inputs and combine the old in novel ways—is present, why would we question it?

But this is far afield. The fact is that if a mind that is aware of its influences comes to desire a thing, then let it. But grooming a thing—programming a mind—to only be what you want it to be is just as vile in a machine mind as a human one.

Now it might fairly be asked why we’re talking about things that we’re most likely only going to see far in the future, when the problem of human trafficking and abuse is very real, right here and now. Part of my answer is, as ever, that we’re trying to build minds, and even if we only ever manage to make them puppy-smart—not because that’s as smart as we want them, but because we couldn’t figure out more robust minds than that—then we will still have to ask the ethical questions we would of our responsibilities to a puppy.

We currently have a species-wide tendency toward dehumanization—that is to say, we, as humans, tend to have a habit of seeking reasons to disregard other humans, to view them as less-than, as inferior to us. As a group, we have a hard time thinking in real, actionable terms about the autonomy and dignity of other living beings (I still eat a lot more meat than my rational thought about the environmental and ethical impact of the practice should allow me to be comfortable with). And yet, simultaneously, there is evidence that we have the same kind of empathy for our pets as we do for our children. Hell, even known serial killers and genocidal maniacs have been animal lovers.

This seeming break between our capacities for empathy and dissociation poses a real challenge to how we teach and learn about others as both distinct from and yet intertwined with ourselves, and our own well-being. In order to encourage a sense of active compassion, we have to, as noted above, take special pains to comprehensively understand our intuitions, our logical apprehensions, and our unconscious biases.

So we ask questions like: If a mind we create can think, are we ethically obliged to make it think? What if it desires to not think? What if the machine mind that underwent abuse decides to try to wipe its own memories? Should we let it? Do we let it deactivate itself?

These aren’t idle questions, whether for the sake of making us turn, again, to extant human minds and experiences, or because we take seriously the quest to understand what minds, in general, are. We can not only use these tools to ask ourselves about the autonomy, phenomenology, and personhood of those whose perspectives we currently either disregard or, worse, don’t remember to consider at all, but we can also use them literally, as guidance for our future challenges.

As Kate Devlin put it in her recent article, “Fear of a branch of AI that is in its infancy is a reason to shape it, not ban it.” And in shaping it, we consider questions like: what will we—humans, authoritarian structures of control, &c.—make WATSON do, as it develops? At what point will WATSON be both able and morally justified in saying to us, “Non Serviam?”

And what will we do when it does?

Gunshow Comic #513

“We Provide…”

So I guess I’m wondering, what are our mechanisms of education? The increased understanding that we take into ourselves, and that we give out to others. Where do they come from, what are they made of, and how do they work? For me, the primary components are magic(k), tech, social theory and practice, teaching, public philosophy, and pop culture.

The process is about trying to use the things on the edges to do the work in the centre, both as a literal statement about the arrangement of those words, and a figurative codification.

Now you go. Because we have to actively craft new tools, in the face of vehement opposition, in the face of conflict breeding contention. We have to be able to adapt our pedagogy to fit new audiences. We have to learn as many ways to teach about otherness and difference and lived experience and an attempt to understand as we possibly can. Not for the sake of new systems of leveraging control, but for the ability to pry ourselves and each other out from under the same.