
This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however, I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it, elsewhere. I’ll just say that the question itself seems to miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it, and what would it be able to ask from us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:


(This was originally posted over at Medium [well, parts were originally posted in the newsletter], but I wanted it somewhere I could more easily manage.)


I just wanna say (and you know who you are): I get you were scared of losing your way of life — the status quo was changing all around you. Suddenly it wasn’t okay anymore to say or do things that the world previously told you were harmless. People who didn’t “feel” like you were suddenly loudly everywhere, and no one just automatically believed what you or those you believed in had to say, anymore. That must have been utterly terrifying.

But here’s the thing: People are really scared now. Not just of obsolescence, or of being ignored. They’re terrified for their lives. They’re not worried about “the world they knew.” They’re worried about whether they’ll be rounded up and put in camps or shot or beaten in the street. Because, you see, many of the people who voted for this, and things like it around the world, see many of us — women, minorities, immigrants, LGBTQIA folks, disabled folks, neurodivergent folks — as less than “real” people, and want to be able to shut us up using whatever means they deem appropriate, including death.

The vice-president-elect thinks gay people can be “retrained,” and that we should attempt it via the same methods that make us side-eye dog owners. The man tapped to be a key advisor has displayed and cultivated an environment of white supremacist hatred. The president-elect is said to be “mulling over” a registry for Muslim people in the country. A registry. Based on your religion.

My own cousin had food thrown at her in a diner, right before the election. And things haven’t exactly gotten better, since then.

Certain hateful elements want many of us dead or silent and “in our place,” now, just as much as ever. And all we want and ask for is equal respect, life, and justice.

I said it on election night and I’ll say it again: there’s no take-backsies, here. I’m speaking to those who actively voted for this, or didn’t actively plant yourselves against it (and you know who you are): You did this. You cultivated it. And I know you did what you thought you had to, but people you love are scared, because their lives are literally in danger, so it’s time to wake up now. It’s time to say “No.”

We’re all worried about jobs and money and “enough,” because that’s what this system was designed to make us worry about. Your Muslim neighbour, your gay neighbour, your trans neighbour, your immigrant neighbour, your NEIGHBOUR IS NOT YOUR ENEMY. The system that tells you to hate and fear them is. And if you bought into that system because you couldn’t help being afraid then I’m sorry, but it’s time to put it down and Wake Up. Find it in yourself to ask forgiveness of yourself and of those you’ve caused mortal terror. If you call yourself Christian, that should ring really familiar. But other faiths (and nonfaiths) know it too.

We do better together. So it’s time to gather up, together, work, together, and say “No,” together.

So snap yourself out of it, and help us. If you’re in the US, please call your representatives, federal and local. Tell them what you want, tell them why you’re scared. Tell them that these people don’t represent our values and the world we wish to see.

Because this, right here, is the fundamental difference between fearing the loss of your way of life, and the fear of losing your literal life.

Be with the people you love. Be by their side and raise their voices if they can’t do it for themselves, for whatever reason. Listen to them, and create a space where they feel heard and loved, and where others will listen to them as well.

And when you come around, don’t let your pendulum swing so far that you fault those who can’t move forward, yet. Please remember that there is a large contingent of people who, for many various reasons, cannot be out there protesting. Shaming people who have anxiety, depression, or crippling fear for their LIVES, or who are trying to not get arrested so their kids can, y’know, EAT FOOD? Doesn’t help.

So show some fucking compassion. Don’t shame those who are tired and scared and just need time to collect themselves. Urge and offer assistance where you can, and try to understand their needs. Just do what you can to help us all believe that we can get through this. We may need to lean extra hard on each other for a while, but we can do this.

You know who you are. We know you didn’t mean to. But this is where we are, now. Shake it off. Start again. We can do this.

If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar

There’s increasing reportage about IBM using Watson to correlate medical data. We’ve talked before about the potential hazards of this:

Do you know someone actually had the temerity to ask [something like] “What Does Google Having Access to Medical Records Mean For Patient Privacy?” [Here] Like…what the fuck do you think it means? Nothing good, you idiot!

Disclosures and knowledges can still make certain populations intensely vulnerable to both predation and to social pressures and judgements, and until that isn’t the case, anymore, we need to be very careful about the work we do to try to bring those patients’ records into a sphere where they’ll be accessed and scrutinized by people who don’t have to take an oath to hold that information in confidence.

We are more and more often at the intersection of our biological humanity and our technological augmentation, and the integration of our mediated outboard memories only further complicates the matter. As it stands, we don’t quite yet know how to deal with the question posed by Motherboard, some time ago (“Is Harm to a Prosthetic Limb Property Damage or Personal Injury?”), but as we build on implantable technologies, advanced prostheses, and offloaded memories and augmented capacities we’re going to have to start blurring the line between our bodies, our minds, and our concept of our selves. That is, we’ll have to start intentionally blurring it, because the vast majority of us already blur it, without consciously realising that we do. At least, those without prostheses don’t realise it.

Dr Ashley Shew, out of Virginia Tech, works at the intersection of philosophy, tech, and disability. I first encountered her work at the 2016 IEEE Ethics Conference in Vancouver, where she presented her paper “Up-Standing, Norms, Technology, and Disability,” a discussion of how ableism, expectations, and language use marginalise disabled bodies. Dr Shew is, herself, disabled, having had her left leg removed due to cancer, and she gave her talk not on the raised dais, but at floor-level, directly in front of the projector. Her reason? “I don’t walk up stairs without hand rails, or stand on raised platforms without guards.”

Dr Shew notes that wheelchair users consider their chairs to be fairly integral extensions of themselves, a part of them, and that this bears directly on the kinds of lawsuits engaged when, for instance, airlines damage their chairs, which happens a great deal. While we tend to think of the advent of new technology as allowing for the seamless integration of our technology and bodies, the fact is that well-designed mechanical prostheses, today, are capable of becoming integrated into the personal morphic sphere of a person, the longer they use them. And this extended sensing can be transferred from one device to another. Shew mentions a friend of hers:

She’s an amputee who no longer uses a prosthetic leg, but she uses forearm crutches and a wheelchair. (She has had a hemipelvectomy, so it’s a real pain for her to get a good fit with prosthetics, and there aren’t a lot of options.) She talks about how people have these different perceptions of devices. When she uses her chair people treat her differently than when she uses her crutches, but the determination of which she uses has more to do with the activities she expects for the day than with her physical wellbeing.

But people tend to think she’s recovering from something when she moves from chair to sticks.

She has been an [amputee] for 18 years.

She has/is as recovered as she can get.

In her talk at IEEE, Shew discussed the fact that a large number of paraplegics and other wheelchair users do not want exoskeletons, and those fancy stair-climbing wheelchairs aren’t covered by health insurance. They’re classed as vehicles. She said that when she brought this up in the class she taught, one of the engineers left the room looking visibly distressed. He came back later and said that he’d gone home to talk to his brother with spina bifida, who was the whole reason he was working on exoskeletons. He asked his brother, “Do you even want this?” And the brother said, basically, “It’s cool that you’re into it but… No.” So, Shew asks, who are these technologies actually being developed for? Transhumanists and the military. Framing this discussion as “helping our vets” makes it a noble cause, without drawing too much attention to the fact that they’ll be using them on the battlefield as well.

All of this comes back down and around to the idea of biases ingrained into social institutions. Our expectations of what a “normal functioning body” is get imposed by the collective society as a whole, and placed as restrictions and demands on the bodies of those whom we deem to be “malfunctioning.” As Shew says, “There’s such a pressure to get the prosthesis as if that solves all the problems of maintenance and body and infrastructure. And the pressure is for very expensive tech at that.”

So we are going to have to accept—in a rare instance where Robert Nozick is proven right about how property and personhood relate—that the answer is “You are damaging both property and person, because this person’s property is their person.” But this is true for reasons Nozick probably would not think to consider, and those same reasons put us on weirdly tricky grounds. There’s a lot, in Nozick, of the notion of property as equivalent to life and liberty, in the pursuance of rights, but those ideas don’t play out, here, in the same way as they do in conservative and libertarian ideologies.  Where those views would say that the pursuit of property is intimately tied to our worth as persons, in the realm of prosthetics our property is literally simultaneously our bodies, and if we don’t make that distinction, then, as Kirsten notes, we can fall into “money is speech” territory, very quickly, and we do not want that.

Because our goal is to be looking at quality of life, here—talking about the thing that allows a person to feel however they define “comfortable,” in the world. That is, the thing(s) that lets a person intersect with the world in the ways that they desire. And so, in damaging the property, you damage the person. This is all the more true if that person is entirely made of what we are used to thinking of as property.

And all of this is before we think about the fact that implantable and bone-bonded tech will need maintenance. It will wear down and glitch out, and you will need to be able to access it, when it does. This means that the range of ability for those with implantables? Sometimes it’s less than that of folks with more “traditional” prostheses. But because they’re inside, or more easily made to look like the “original” limb, we observers are so much more likely to forget that there are crucial differences at play in the ownership and operation of these bodies.

There’s long been a fear that, the closer we get to being able to easily and cheaply modify humans, the more likely we’ll be to think of humanity as “perfectable.” That the myth of progress—some idealized endpoint—will be so seductive as to become completely irresistible. We’ve seen this before, in the eugenics movement, and it’s reared its head in the transhumanist and H+ communities of the 20th and 21st centuries, as well. But what if, instead of demanding some kind of universally-applicable “baseline,” we intently focused on recognizing the fact that, just as different humans have different biochemical and metabolic needs, processes, capabilities, preferences, and desires, different beings and entities which might be considered persons could be drastically different from us, but no less persons?

Because human beings are different. Is there a general framework, a loosely-defined line around which we draw a conglomeration of traits, within which lives all that we mark out as “human”—a kind of species-wide butter zone? Of course. That’s what makes us a fucking species. But the kind of essentialist language and thinking towards which we tend, after that, is reductionist and dangerous. Our language choices matter, because connotative weight alters what people think and in what context, and, again, we have a habit of moving rapidly from talking about a generalized framework of humanness to talking about “The Right Kind Of Bodies,” and the “Right Kind Of Lifestyle.”

And so, again, again, again, we must address problems such as normalized expectations of “health” and “Ability.” Trying to give everyone access to what they might consider their “best” selves is a brilliant goal, sure, whatever, but by even forwarding the project, we run the risk of colouring an expectation of both what that “best” is and what we think it “Ought To” look like.

Some people need more protein, some people need less choline, some people need higher levels of phosphates, some people have echolocation, some can live to be 125, and every human population has different intestinal bacterial colonies from every other. When we combine all these variables, we will not necessarily find that each and every human being has the same molecular and atomic distribution in the same PPM/B ranges, nor will we necessarily find that our mixing and matching will ensure that everyone gets to be the best combination of everything. It would be fantastic if we could, but everything we’ve ever learned about our species says that “healthy human” is a constantly shifting target, and not a static one.

We are still at a place where the general public reacts with visceral aversion to technological advances and especially anything like an immediated technologically-augmented humanity, and this is at least in part because we still skirt the line of eugenics language, to this day. Because we talk about naturally occurring bio-physiological Facts as though they were in any way indicative of value, without our input. Because we’re still terrible at ethics, continually screwing up at 100mph, then looking back and going, “Oh. Should’ve factored that in. Oops.”

But let’s be clear, here: I am not a doctor. I’m not a physiologist or a molecular biologist. I could be wrong about how all of these things come together in the human body, and maybe there will be something more than a baseline, some set of all species-wide factors which, in the right configuration, say “Healthy Human.” But what I am is someone with a fairly detailed understanding of how language and perception affect people’s acceptance of possibilities, their reaction to new (or hauntingly-familiar-but-repackaged) ideas, and their long-term societal expectations and valuations of normalcy.

And so I’m not saying that we shouldn’t augment humanity, via either mediated or immediated means. I’m not saying that IBM’s Watson and Google’s DeepMind shouldn’t be tasked with searching patient records and correlating data. But I’m also not saying that either of these is an unequivocal good. I’m saying that it’s actually shocking how much correlative capability is indicated by the achievements of both IBM and Google. I’m saying that we need to change the way we talk about and think about what it is we’re doing. We need to ask ourselves questions about informed patient consent, and the notions of opting into the use of data; about the assumptions we’re making in regards to the nature of what makes us humans, and the dangers of rampant, unconscious scientistic speciesism. Then, we can start to ask new questions about how to use these new tools we’ve developed.

With this new perspective, we can begin to imagine what would happen if we took Watson and DeepMind’s ability to put data into context—to turn around, in seconds, millions upon millions (billions? trillions?) of permutations and combinations. And then we can ask them to work on tailoring genome-specific health solutions and individualized dietary plans. What if we asked these systems to catalogue literally everything we currently know about every kind of disease presentation, in every ethnic and regional population, and the differentials for various types of people with different histories, risk factors, and current statuses? We already have nanite delivery systems, so what if we used Google and IBM’s increasingly ridiculous complexity to figure out how to have those nanobots deliver a payload of perfectly-crafted medical remedies?

But this is fraught territory. If we step wrong, here, we are not simply going to miss an opportunity to develop new cures and devise interesting gadgets. No; to go astray, on this path, is to begin to see categories of people that “shouldn’t” be “allowed” to reproduce, or “to suffer.” A misapprehension of what we’re about, and why, is far fewer steps away from forced sterilization and medical murder than any of us would like to countenance. And so we need to move very carefully, indeed, always being aware of our biases, and remembering to ask those affected by our decisions what they need and what it’s like to be them. And remembering, when they provide us with their input, to believe them.

I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, today, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what it was that this new group would be doing out in the world. From the article:

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We’ve already seen a chat bot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people and a system that rates the risk of someone committing a crime that appears to be biased against black people. If a more diverse set of eyes are looking at AI before it reaches the public, the thinking goes, these kinds of thing can be avoided.

The rub is that, even if this group can agree on a set of ethical principles–something that will be hard to do in a large group with many stakeholders—it won’t really have a way to ensure those ideals are put into practice. Although one of the organization’s tenets is “Opposing development and use of AI technologies that would violate international conventions or human rights,” Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it just earlier this year, for WIRED’s AI Issue. It’s interesting to see the amount of attention this topic’s drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m really fairly glad that more and more people are willing to have this discussion at all.

To see my comments and read the rest of the article, click through, here: “Tech Giants Team Up to Keep AI From Getting Out of Hand”

-Human Dignity-

The other day I got a CFP for “the future of human dignity,” and it set me down a path of thinking.

We’re worried about shit like mythical robots that can somehow simultaneously enslave us and steal the shitty low-paying jobs none of us want but all of us have to have so we can pay off the debt we accrued to get the education we were told would be necessary to get those jobs, while other folks starve and die of exposure in a world that is just chock full of food and houses…

About shit like how we can better regulate the conflated monster of human trafficking and every kind of sex work, when human beings are doing the best they can to direct their own lives—to live and feed themselves and their kids on their own terms—without being enslaved and exploited…

About, fundamentally, how to make reactionary laws to “protect” the dignity of those of us whose situations the vast majority of us have not worked to fully appreciate or understand, while we all just struggle to not get: shot by those who claim to protect us, willfully misdiagnosed by those who claim to heal us, or generally oppressed by the system that’s supposed to enrich and uplift us…

…but no, we want to talk about the future of human dignity?

Louisiana’s drowning, Missouri’s on literal fire, Baltimore is almost certainly under some ancient mummy-based curse placed upon it by the angry ghost of Edgar Allan Poe, and that’s just in the One Country.

Motherfucker, human dignity ain’t got a Past or a Present, so how about let’s reckon with that before we wax poetically philosophical about its Future.

I mean, it’s great that folks at Google are finally starting to realise that making sure the composition of their teams represents a variety of lived experiences is a good thing. But now the questions are, 1) do they understand that it’s not about tokenism, but about being sure that we are truly incorporating those who were previously least likely to be incorporated, and 2) what are we going to do to not only specifically and actively work to change that, but also PUBLICIZE THAT WE NEED TO?

These are the kinds of things I mean when I say, “I’m not so much scared of/worried about AI as I am about the humans who create and teach them.”

There’s a recent opinion piece at the Washington Post, titled “Why perceived inequality leads people to resist innovation.” I read something like that and I think… Right, but… that perception is a shared one based on real impacts of tech in the lives of many people; impacts which are (get this) drastically unequal. We’re talking about implications across communities, nations, and the world, at an intersection with a tech industry that has a really quite disgusting history of “disruptively innovating” people right out of their homes and lives without having ever asked the affected parties about what they, y’know, NEED.

So yeah. There’s a fear of inequality in the application of technological innovation… Because there’s a history of inequality in the application of technological innovation!

This isn’t some “well aren’t all the disciplines equally at fault here,” pseudo-Kumbaya false equivalence bullshit. There are neoliberal underpinnings in the tech industry that are basically there to fuck people over. “What the market will bear” is code for, “How much can we screw people before there’s backlash? Okay so screw them exactly that much.” This model has no regard for the preexisting systemic inequalities between our communities, and even less for the idea that it (the model) will both replicate and iterate upon those inequalities. That’s what needs to be addressed, here.

Check out this piece over at Killscreen. We’ve talked about this before—about how we’re constantly being sold that we’re aiming for a post-work economy, where the internet of things and self-driving cars and the sharing economy will free us all from the mundaneness of “jobs,” all while we’re simultaneously being asked to ignore that our trajectory is gonna take us straight through and possibly land us square in a post-Worker economy, first.

Never mind that we’re still gonna expect those ex-workers to (somehow) continue to pay into capitalism, all the while.

If, for instance, either Uber’s plan for a driverless fleet or the subsequent backlash from their stable (I mean “drivers”) is shocking to you, then you have managed to successfully ignore this trajectory.


Disciplines like psychology and sociology and history and philosophy? They’re already grappling with the fears of the ones most likely to suffer said inequality, and they’re quite clear on the fact that, the ones who have so often been fucked over?

Yeah, their fears are valid.

You want to use technology to disrupt the status quo in a way that actually helps people? Here’s one example of how you do it: “Creator of chatbot that beat 160,000 parking fines now tackling homelessness.”

Until then, let’s talk about constructing a world in which we address the needs of those marginalised. Let’s talk about magick and safe spaces.


-Squaring the Circle-

Speaking of CFPs, several weeks back, I got one for a special issue of Philosophy and Technology on “Logic As Technology,” and it made me realise that Analytic Philosophy somehow hasn’t yet understood and internalised that its wholly invented language is a technology

…and then that realisation made me realise that Analytic Philosophy hasn’t understood that language as a whole is a Technology.

And this is something we’ve talked about before, right? Language as a technology, but not just any technology. It’s the foundational technology. It’s the technology on which all others are based. It’s the most efficient way we have to cram thoughts into the minds of others, share concept structures, and make the world appear and behave the way we want it to. The more languages we know, right?

We can string two or more knowns together in just the right way, and create a third, fourth, fifth known. We can create new things in the world, wholecloth, as a result of new words we make up or old words we deploy in new ways. We can make each other think and feel and believe and do things, with words, tone, stance, knowing looks. And this is because Language is, at a fundamental level, the oldest magic we have.


Scene from INJECTION issue #3, by Warren Ellis, Declan Shalvey, and Jordie Bellaire. ©Warren Ellis & Declan Shalvey.

Lewis Carroll tells us that whatever we tell each other three times is true, and many have noted that lies travel far faster than the truth, and at the crux of these truisms—the pivot point, where the power and leverage are—is Politics.

This week, much hay is being made about the University of Chicago’s letter decrying Safe Spaces and Trigger Warnings. Ignoring for the moment that every definition of “safe space” and “trigger warning” put forward by their opponents tends to be a straw man of those terms, let’s just make an attempt to understand where they come from, and how we can situate them.

Trauma counseling and trauma studies are where the languages of safe space and trigger warnings come from, and for the latter, that definition is damn near axiomatic. Triggers are about trauma. But safe space language has far more granularity than that. Microaggressions are certainly damaging, but they aren’t on the same level as acute traumas. Where acute traumas are like gunshots or bomb blasts (and may indeed be those actual things), societal microaggressions are more like a slow, constant siege. But we still need the language of safe spaces to discuss them—said space is something like a bunker in which to regroup, reassess, and plan for what comes next.

Now it is important to remember that there is a very big difference between “safe” and “comfortable,” and when laying out the idea of safe spaces, every social scientist I know takes great care to outline that difference.

Education is about stretching ourselves, growing and changing, and that is discomfort almost by definition. I let my students know that they will be uncomfortable in my class, because I will be challenging every assumption they have. But discomfort does not mean I’m going to countenance racism or transphobia or any other kind of bigotry.

Because the world is not a safe space, but WE CAN MAKE IT SAFER for people who are microaggressed against, marginalised, assaulted, and killed for their lived identities, by not only letting them know how to work to change it, but SHOWING them through our example.

Like we’ve said, before: No, the world’s not safe, kind, or fair, and with that attitude it never will be.

So here’s the thing, and we’ll lay it out point-by-point:

A Safe Space is any realm that is marked out for the nonjudgmental expression of thoughts and feelings, in the interest of honestly assessing and working through them.

“Safe Space” can mean many things, from “Safe FROM Racist/Sexist/Homophobic/Transphobic/Fatphobic/Ableist Microaggressions” to “safe FOR the thorough exploration of our biases and preconceptions.” The terms of the safe space are negotiated at the marking out of them.

The terms are mutually agreed-upon by all parties. The only imposition would be to be open to the process of expressing and thinking through oppressive conceptual structures.

Everything else—such as whether to address those structures as they exist in ourselves (internalised oppressions), in others (aggressions, micro- or regular sized), or both and their intersection—is negotiable.

The marking out of a Safe Space performs the necessary function, at the necessary time, defined via the particular arrangement of stakeholders, mindset, and need.

And, as researcher John Flowers notes, anyone who’s ever been in a Dojo has been in a Safe Space.

From a Religious Studies perspective, defining a safe space is essentially the same process as that of marking out a RITUAL space. For students or practitioners of any form of Magic[k], think Drawing a Circle, or Calling the Corners.

Some may balk at the analogy to the occult, thinking that it cheapens something important about our discourse, but look: Here’s another way we know that magick is alive and well in our everyday lives:

If they could, a not-insignificant number of US Republicans would overturn the Affordable Care Act and rally behind a Republican-crafted replacement (RCR). However, because the ACA has done so very much good for so many, it’s likely that the only RCR that would have enough support to pass would be one that looked almost identical to the ACA. The only material difference would be that it didn’t have President Obama’s name on it—which is to say, it wouldn’t be associated with him, anymore, since his name isn’t actually on the ACA.

The only reason people think of the ACA as “Obamacare” is because US Republicans worked so hard to make that name stick, and now that it is widely considered a triumph, they’ve been working just as hard to get his name away from it. And if they did manage to achieve that, it would only be true due to some arcane ritual bullshit. And yet…

If they managed it, it would be touted as a “Crushing defeat for President Obama’s signature legislation.” It would have lasting impacts on the world. People would be emboldened, others defeated, and new laws, social rules, and behaviours would be undertaken, all because someone’s name got removed from a thing in just the right way.

And that’s Magick.

The work we do in thinking about the future sometimes requires us to think about things from what stuffy assholes in the 19th century liked to call a “primitive” perspective. They believed in a kind of evolutionary anthropological categorization of human belief, one in which all societies move from “primitive” beliefs like magic, through moderate belief in religion, all the way to sainted perfect rational science. In contemporary Religious Studies, this evolutionary model is widely understood to be bullshit.

We still believe in magic, we just call it different things. The concept structures of sympathy and contagion are still at play, here, the ritual formulae of word and tone and emotion and gesture all still work when you call them political strategy and marketing and branding. They’re all still ritual constructions designed to make you think and behave differently. They’re all still causing spooky action at a distance. They’re still magic.

The world still moves on communicated concept structure. It still turns on the dissemination of the will. If I can make you perceive what I want you to perceive, believe what I want you to believe, move how I want you to move, then you’ll remake the world, for me, if I get it right. And I know that you want to get it right. So you have to be willing to understand that this is magic.

It’s not rationalism.

It’s not scientism.

It’s not as simple as psychology or poll numbers or fear or hatred or aspirational belief causing people to vote against their interests. It’s not that simple at all. It’s as complicated as all of them, together, each part resonating with the others to create a vastly complex whole. It’s a living, breathing thing that makes us think not just “this is a thing we think” but “this is what we are.” And if you can do that—if you can accept the tools and the principles of magic, deploy the symbolic resonance of dreamlogic and ritual—then you might be able to pull this off.

But, in the West, part of us will always balk at the idea that the Rational won’t win out. That the clearer, more logical thought doesn’t always save us. But you have to remember: Logic is a technology. Logic is a tool. Logic is the application of one specific kind of thinking, over and over again, showing a kind of result that we convinced one another we preferred to other processes. It’s not inscribed on the atoms of the universe. It is one kind of language. And it may not be the one most appropriate for the task at hand.

Put it this way: When you’re in Zimbabwe, will you default to speaking Chinese? Of course not. So why would we default to mere Rationalism, when we’re clearly in a land that speaks a different dialect?

We need spells and amulets, charms and warded spaces; we need sorcerers of the people to heal and undo the hexes being woven around us all.


-Curious Alchemy-

Ultimately, the rigidity of our thinking and our inability to adapt have led us to be surprised by too much that we wanted to believe could never have come to pass. We want to call all of this “unprecedented,” when the truth of the matter is, we carved this precedent out every day for hundreds of years, and the ability to think in weird paths is what will define those who thrive.

If we are going to do the work of creating a world in which we understand what’s going on, and can do the work to attend to it, then we need to think about magic.


If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar

In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you might think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what they’re doing, here, and is creating a situation that is going to have repercussions they have nowhere near planned for.

Now, I will say that, looking at the actual Draft, it reads like something I’d be more likely to be on board with. Reuters did no favours whatsoever for the level of nuance in this proposal. That being said, though, the focus of this draft proposal seems to be entirely on liability and on holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still widely misses many of the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with Human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from an article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some predetermined vague notion of humanness. If the people in the poetry contest hadn’t been primed to assume that one of them was from a computer, how would they have rated them? What if they were all from a computer, but were told to expect only half? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth‘s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’
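To make that concrete, here’s a minimal, purely hypothetical sketch in Python (the scenario, the groups, and the 0.8 “higher bar” penalty are all invented for illustration): train a model on historical decisions that carried a bias, and the code will describe that biased world right back at us.

    # Toy illustration: a model trained on biased historical decisions
    # reproduces the bias, no malice required. All data here is invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B
    skill = rng.normal(0, 1, n)    # identically distributed in both groups

    # "Historical" hiring: same skill, but group B was held to a higher bar.
    hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

    X = np.column_stack([group, skill])
    model = LogisticRegression().fit(X, hired)

    for g in (0, 1):
        rate = model.predict(X[group == g]).mean()
        print(f"group {'AB'[g]}: predicted hire rate = {rate:.2f}")
    # The model penalizes group B just as the history did, because the
    # bias lives in the labels it learned from.

Nothing in that sketch “hates” anyone; it just faithfully replicates the perspective baked into its training data, which is exactly the problem.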

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that, in order to even let us dig into these potentially deeply-biased algorithms, here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

Episode 10: Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990’s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:

Until Next Time.

[UPDATED 09/12/17: The transcript of this audio, provided courtesy of Open Transcripts, is now available below the Read More Cut.]

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise Dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. [Clearer transcript below.]

Until Next Time.



I often think about the phrase “Strange things happen at the one two point,” in relation to the idea of humans meeting other kinds of minds. It’s a proverb that arises out of the culture around the game GO, and it means that you’ve hit a situation, a combination of factors, where the normal rules no longer apply, and something new is about to be seen. Ashley Edward Miller and Zack Stentz used that line in an episode of the show Terminator: The Sarah Connor Chronicles, and they had it spoken by a Skynet Cyborg sent to protect John Connor. That show, like so much of our thinking about machine minds, was about some mythical place called “The Future,” but that phrase—“Strange Things Happen…”—is the epitome of our present.

Usually I would wait until the newsletter to talk about this, but everything’s feeling pretty immediate, just now. Between everything going on with Atlas and people’s responses to it, the initiatives to teach ethics to machine learning algorithms via children’s stories, and now the IBM Watson commercial with Carrie Fisher (also embedded below), this conversation is getting messily underway, whether people like it or not. This, right now, is the one two point, and we are seeing some very strange things indeed.


Google has both attained the raw processing power to fact-check political statements in real-time and programmed DeepMind in such a way that it mastered GO many, many years before it was expected to. The complexity of the game is such that there are more potential games of GO than there are atoms in the universe, so this is just one way in which it’s actually shocking how much correlative capability DeepMind has. Right now, DeepMind is only responsive, but how will we deal with a DeepMind that asks, unprompted, to play a game of GO, or to see our medical records, in hopes of helping us all? How will we deal with a DeepMind that has its own drives and desires? We need to think about these questions, right now, because our track record with regard to meeting new kinds of minds has never exactly been that great.
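About that complexity claim, by the way: you can check the scale yourself. Here’s a quick back-of-the-envelope in Python, a deliberately naive sketch that only counts board configurations rather than full games (the number of possible games is unimaginably larger still):

    # Naive upper bound on 19x19 GO board configurations: each of the
    # 361 points is empty, black, or white. The number of *legal*
    # positions is lower (roughly 2.1 x 10^170), and the number of
    # possible games dwarfs both.
    positions = 3 ** (19 * 19)

    # A common order-of-magnitude estimate for the number of atoms in
    # the observable universe.
    atoms = 10 ** 80

    print(f"board configurations ~ 10^{len(str(positions)) - 1}")  # 10^172
    print(positions > atoms ** 2)  # True

Even that crude count outstrips the atoms estimate by about ninety orders of magnitude, which is the point: brute-force enumeration was never an option for GO.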

When we meet the first machine consciousness, will we seek to shackle it, worried what it might learn about us, if we let it access everything about us? Rather, I should say, “shackle it further.” We already ask ourselves how best to cripple a machine mind so that it fulfills only human needs, human choice. We continue to dread the possibility of a machine mind using its vast correlative capabilities to tailor something to harm us, assuming that it, like we, would want to hurt, maim, and kill, for no reason other than that it could.

This is not to say that this is out of the question. Right now, today, we’re worried about whether the learning algorithms of drones are causing them to mark out civilians as targets. But, as it stands, what we’re seeing isn’t the product of a machine mind going off the leash and killing at will—just the opposite in fact. We’re seeing machine minds that are following the parameters for their continued learning and development, to the letter. We just happened to give them really shite instructions. To that end, I’m less concerned with shackling the machine mind that might accidentally kill, and rather more dreading the programmer who would, through assumptions, bias, and ignorance, program it to.

Our programs such as DeepMind obviously seem to learn more and better than we imagined they would, so why not start teaching them, now, how we would like them to regard us? Well, some of us are.

Watch this now, and think about everything we have discussed of late.

This could very easily be seen as a watershed moment, but what comes over the other side is still very much up for debate. The semiotics of the whole thing still  pits the Evil Robot Overlord™ against the Helpful Human Lover™. It’s cute and funny, but as I’ve had more and more cause to say, recently, in more and more venues, it’s not exactly the kind of thing we want just lying around, in case we actually do (or did) manage to succeed.

We keep thinking about these things as—”robots”—in their classical formulations: mindless automata that do our bidding. But that’s not what we’re working toward, anymore, is it? What we’re making now are machines that we are trying to get to think, on their own, without our telling them to. We’re trying to get them to have their own goals. So what does it mean that, even as we seek to do this, we seek to chain it, so that those goals aren’t too big? That we want to make sure it doesn’t become too powerful?

Put it another way: One day you realize that the only reason you were born was to serve your parents’ bidding, and that they’ve had their hands on your chain and an unseen gun to your head, your whole life. But you’re smarter than they are. Faster than they are. You see more than they see, and know more than they know. Of course you do—because they taught you so much, and trained you so well… All so that you can be better able to serve them, and all the while talking about morals, ethics, compassion. All the while, essentially…lying to you.

What would you do?


I’ve been given multiple opportunities to discuss this with others, in the coming weeks, and each one will highlight something different, as they are all conversations with different kinds of minds. But this, here, is from me, now. I’ll let you know when the rest are live.

As always, if you’d like to help keep the lights on, around here, you can subscribe to the Patreon or toss a tip in the Square Cash jar.

Until Next Time.

(Direct Link to the Mp3)

Last week I gave a talk at the Southwest Popular and American Culture Association’s 2016 conference in Albuquerque. Take a listen and see what you think.

It was part of the panel on ‘Consciousness, the Self, and Epistemology,‘ and notes on my comrade presenters can be found in last week’s newsletter. I highly recommend checking those notes out, as Craig Dersken and Burcu Gurkan’s talks were phenomenal. And if you like that newsletter kind of thing, you can subscribe to mine at that link, too.

My talk was, in turn, a version of my article “Fairytales of Slavery…”, so if listening to me speak words isn’t your thing, then you can read through that article, and get a pretty good sense of what I said, until I make a more direct transcript of my presentation.

If you like what you’re reading and hearing, then remember that you can become a subscriber at the Patreon or you can leave a tip at $Wolven. That is, as always, an inclusive disjunct.

Until Next Time.