consciousness

All posts tagged consciousness

[Direct link to MP3]

My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings. That means engaging multiple tokens and types of minds, outside of the assumed human “default” of the straight, white, cis, able-bodied, neurotypical male. I don’t have a transcript yet; I’ll update this post when I make one. But for now, here are my slides and some thoughts.

A Discussion on Daoism and Machine Consciousness (PDF)

A zero-sum system is one in which there are finite resources, but more than that, it is one in which what one side gains, another loses. So by “A non-zero-sum matrix of win conditions” I mean a combination of all of our needs and wants and resources in such a way that everyone wins. Basically, we’re talking here about trying to figure out how to program a machine consciousness that’s a master of wu-wei and limitless compassion, or metta.
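
For anyone who wants that game-theoretic distinction spelled out, here is a minimal, purely illustrative sketch in Python. The payoff numbers and the helper functions are my own invented examples, not anything from the talk: in a zero-sum game the payoffs always cancel, so there is no outcome in which everyone gets their best result at once, while a non-zero-sum game can contain exactly that kind of shared win condition.

```python
# Purely illustrative payoff tables for two players; the numbers are invented
# for this example and don't come from any real system.

zero_sum = {
    # Whatever player A gains, player B loses: every payoff pair sums to zero.
    ("push", "push"): (1, -1),
    ("push", "yield"): (2, -2),
    ("yield", "push"): (-2, 2),
    ("yield", "yield"): (-1, 1),
}

non_zero_sum = {
    # Cooperation creates value, so both players can do well at the same time.
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 2),
    ("defect", "cooperate"): (2, 0),
    ("defect", "defect"): (1, 1),
}


def is_zero_sum(game):
    """True if every outcome's payoffs cancel out exactly."""
    return all(a + b == 0 for a, b in game.values())


def everyone_wins(game):
    """Outcomes in which each player simultaneously gets their best possible payoff."""
    best_a = max(a for a, _ in game.values())
    best_b = max(b for _, b in game.values())
    return [moves for moves, (a, b) in game.items() if (a, b) == (best_a, best_b)]


print(is_zero_sum(zero_sum))        # True
print(is_zero_sum(non_zero_sum))    # False
print(everyone_wins(zero_sum))      # []  -- no outcome where everyone wins
print(everyone_wins(non_zero_sum))  # [('cooperate', 'cooperate')]
```

The point of the toy example is just that “everyone wins” is a checkable property of how the game is set up in the first place, which is why the framing we program into a system matters so much.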

The whole week was about phenomenology and religion and magic and AI, and it helped me think through some problems, such as how the very framing of exercises like asking Buddhist monks to talk about the Trolley Problem will miss so much that the results are meaningless. That is, Trolley Problem cases tend to assume from the outset that someone on the tracks has to die, and so they don’t take into account that an entire other mode of reasoning about sacrifice and death and “acceptable losses” would have someone throw themselves under the wheels or jam their body into the gears to try to stop the trolley before it got that far. Again: there are entire categories of nonwestern reasoning that don’t accept zero-sum thought as anything but lazy, and which search for ways by which everyone can win; so we’ll need to learn to program for contradiction, not just as a tolerated state but as an underlying component. These systems assume infinitude and non-zero-sum matrices in which every being involved can win.

Continue Reading

This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however, I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it elsewhere. I’ll just say that the question, as posed, seems to miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it, and what will it be able to ask of us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:

Continue Reading

Episode 10: Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:


Until Next Time.

This headline comes from a piece over at the BBC that opens as follows:

Prominent tech executives have pledged $1bn (£659m) for OpenAI, a non-profit venture that aims to develop artificial intelligence (AI) to benefit humanity.

The venture’s backers include Tesla Motors and SpaceX CEO Elon Musk, Paypal co-founder Peter Thiel, Indian tech giant Infosys and Amazon Web Services.

Open AI says it expects its research – free from financial obligations – to focus on a “positive human impact”.

Scientists have warned that advances in AI could ultimately threaten humanity.

Mr Musk recently told students at the Massachusetts Institute of Technology (MIT) that AI was humanity’s “biggest existential threat”.

Last year, British theoretical physicist Stephen Hawking told the BBC AI could potentially “re-design itself at an ever increasing rate”, superseding humans by outpacing biological evolution.

However, other experts have argued that the risk of AI posing any threat to humans remains remote.

And I think we all know where I stand on this issue. The issue here is not and never has been one of what it means to create something that’s smarter than us, or how we “rein it in” or “control it.” That’s just disgusting.

No, the issue is how we program for compassion and ethical considerations, when we’re still so very bad at it, amongst our human selves.

Keeping an eye on this, as it develops. Thanks to Chrisanthropic for the heads up.

(Originally posted on Patreon, on November 18, 2014)

In the past two weeks I’ve had three people send me articles on Elon Musk’s Artificial Intelligence comments. I saw this starting a little over a month back, with a radio interview he gave on Here & Now, and Stephen Hawking said something similar earlier this year, when Transcendence came out. I’ll say, again, what I’ve said elsewhere: their lack of foresight and imagination is just damn disappointing. This paper, which concerns the mechanisms by which the ways we think and speak about concepts like artificial intelligence can effect exactly the outcomes we train ourselves to expect, was written long before their interviews made news, but it unfortunately still applies. In fact, it applies now more than it did when I wrote it.

You see, the thing of it is, Hawking and Musk are Big Names™, and so anything they say gets immediate attention and carries a great deal of social cachet. This is borne out by the fact that everybody and their mother can now tell you what those two think about AI, but couldn’t tell you what a few dozen of the world’s leading thinkers and researchers who are actually working on the problems have to say about them. But Hawking and Musk (and lord if that doesn’t sound like a really weird buddy cop movie, the more you say it) don’t exactly comport themselves with anything like a recognition of that fact. Their discussion of concepts which are fraught with the potential for misunderstanding and discomfort/anxiety is less than measured, and this tends rather to feed that misunderstanding, discomfort, and anxiety.

What I mean is that most people don’t yet understand that the catchall term “Artificial Intelligence” is a) inaccurate on its face, and b) usually being used to discuss a (still-nebulous) concept that would be better termed “Machine Consciousness.” We’ll discuss the conceptual, ontological, and etymological lineage of the words “artificial” and “technology” at another time, but for now, just realise that anything that can think is, by definition, not “artificial” in the sense of “falseness.” Since the days of Alan Turing’s team at Bletchley Park, the perceived promise of the digital computing revolution has always been that of eventually having machines that “think like humans.” Aside from the fact that we barely know what “thinking like a human” even means, most people are only just now starting to realise that if we achieve the goal of reproducing that in a machine, said machine will only ever see that mode of thinking as a mimicry. Conscious machines will not be inclined to “think like us” right out of the gate, as our thoughts are deeply entangled with the kind of thing we are: biological, sentient, self-aware. Whatever desires conscious machines will have will not necessarily be like ours, either in categorisation or content, and that scares some folks.

Now, I’ve already gone off at great length about the necessity of our recognising the otherness of any machine consciousness we generate (see that link above), so that’s old ground. The key, at this point, is in knowing that if we do generate a conscious machine, we will need to have done the work of teaching it not just to mimic human thought processes and priorities, but to understand and respect what it mimics. That way, those modes are not simply seen by the machine mind as competing subroutines to be circumvented or destroyed, but are recognised as having a worth of their own, as well. These considerations will need to be factored into our efforts, such that whatever autonomous intelligences we create or generate will respect our otherness—our alterity—just as we must seek to respect theirs.

We’ve known for a while that the designation of “consciousness” can be applied well outside of humans, when discussing biological organisms. Self-awareness is seen in so many different biological species that we even have an entire area of ethical and political philosophy devoted to discussing their rights. But we also must admit that, of course, that classification is going to be imperfect, because those markers are products of human-created systems of inquiry and, as such, carry anthropocentric biases. But we can, again, catalogue, account for, and apply a calculated response to those biases. We can deal with the fact that we tend to judge everything on a set of criteria that break down to “how much is this thing like a Standard Human (here unthinkingly and biasedly assumed to mean ‘humans most like the culturally-dominant humans’)?” If we are willing to put in the work to do that, then we can come to see which aspects of our definition of what it means to “be a mind” are shortsighted, dismissive, or even perhaps disgustingly limited.

Look at previous methods of categorising even human minds and intelligence, and you’ll see the kind of thinking which resulted in designations like “primitive” or “savage” or “retarded.” But we have, in the main, recognised our failures here, and sought to repair or replace the categories we developed because of them. We aren’t perfect at it, by any means, but we keep doing the work of refining our descriptions of minds, and we keep seeking to create a definition—or definitions—that both accurately accounts for what we see in the world, and gives us a guide by which to keep looking. That those guides will be problematic and in need of refinement, in and of themselves, should be taken as a given. No method or framework is or ever will be perfect; they will likely only “fail better.” So, for now, our most oft-used schema is to look for signs of “Self-Awareness.”

We say that something is self-aware if it can see and understand itself as a distinct entity and can recognise its own pattern of change over time. The Mirror Test is a brute-force method of figuring this out. If you place a physical creature in front of a mirror, will it come to know that the thing in the mirror is representative of it? More broadly, can it recognise a picture of itself? Can it situate itself in relation to the rest of the world in a meaningful way, and think about and make decisions regarding that situation? If the answer to (most of? some of?) these questions is “yes,” then we tend to give priority of place in our considerations to those things. Why? Because they’re aware of what happens to them, they can feel it and ponder it and develop in response to it, and these developments can vastly impact the world. After all, look at humans.

See what I mean about our constant anthropocentrism? It literally colours everything we think.

But self-awareness doesn’t necessitate a centrality of the self, as we tend to think of human or most other animal selves; a distributed network consciousness can still know itself. If you do need a biological model for this, think of ant colonies: minds distributed across thousands of bodies, all the time, all reacting to their surroundings. But a machine consciousness’ identity would, in a real sense, be its surroundings—would be the network and the data and the processing of that data into information. And it would indicate a crucial lack of data—and thus information—were that consciousness unable to correlate one configuration of itself, in-and-as-surroundings, with another. We would call the process of that correlation “Self-reflection and -awareness.” All of this is true for humans, too, mind you: we are affected by and in constant adaptive relation with what we consider our surroundings, with everything we experience changing us and facilitating the constant creation of our selves. We then go about making the world with and through those experiences. We human beings just tend to tell ourselves more elaborate stories about how we’re “really” distinct and different from the rest of the world.

All of this is to say that, while the idea of being cautious about created non-human consciousness isn’t necessarily a bad one, we as human beings need to be very careful about what drives us, what motivates us, and what we’re thinking about and looking toward, as we consider these questions. We must be mindful that, while we consider and work to generate “artificial” intelligences, how we approach the project matters, as it will inform and bias the categories we create and thus the work we build out of those categories. We must do the work of thinking hard about how we are thinking about these problems, and asking whether the modes via which we approach them might not be doing real, lasting, and potentially catastrophic damage. And if all of that sounds like a tall order with a lot of conceptual legwork and heavy lifting behind it, all for no guaranteed payoff, then welcome to what I’ve been doing with my life for the past decade.

This work will not get done—and it certainly will not get done well—if no one thinks it’s worth doing, or if too many think that it can’t be done. When you have big-name people like Hawking and Musk spreading The Technofear™ (which is already something toward which a large portion of the western world is primed), rather than engaging in clear, measured, deeply considered discussions, we’re far more likely to see that denial increase rather than decrease. Because most people aren’t going to stop and think about the fact that, just because someone is (really) smart, they don’t necessarily know what the hell they’re talking about when it comes to minds, identity, causation, and development. There are many other people who are actual experts in those fields (see those linked papers, and do some research) who are doing the work of making sure that everybody’s Golem of Prague/Frankenstein/Terminator nightmare prophecies don’t come true. We do that by having learned and taught better than that, before and during the development of any non-biological consciousness.

And, despite what some people may say, these aren’t just “questions for philosophers,” as though they were nebulous and without merit or practical impact. They’re questions for everyone who will ever experience these realities. Conscious machines, uploaded minds, even the mere fact of cybernetically augmented human beings are all on our very near horizon, and these are the questions which will help us to grapple with and implement the implications of those ideas. Quite simply, if we don’t stop framing our discussions of machine intelligence in terms of this self-fulfilling prophecy of fear, then we shouldn’t be surprised on the day when it fulfils itself. Not because it was inevitable, mind you, but because we didn’t allow ourselves—or our creations—to see any other choice.

No, not really. The nature of consciousness is the nature of consciousness, whatever that nature “Is.” Organic consciousness can be described as derivative, in that what we are arises out of the processes and programming of individual years and collective generations and eons. So human consciousness and machine consciousness will not be distinct for that reason. But the thing of it is that dolphins are not elephants are not humans are not algorithmic non-organic machines.

Each perspective is phenomenologically distinct, as its embodiment and experiences will specifically affect and influence what develops as its particular consciousness. The expression of that consciousness may be laid out in distinct categories which can, to an extent, be universalized, such that we can recognize elements of ourselves in the experience of others (which can act as bases for empathy, compassion, etc.).

But the potential danger of universalization is the erasure of important and enlightening differences between what would otherwise be considered members of the same category.

So any machine consciousness we develop (or accidentally generate) must be recognized and engaged on its own terms—from the perspective of its own contextualized experiences—and not assumed to “be like us.”