
[Direct link to MP3]

[09/22/17: This post has been updated with a transcript, courtesy of Open Transcripts]

Back on March 13th, 2017, I gave an invited guest lecture, titled:

TECHNOLOGY, DISABILITY, AND HUMAN AUGMENTATION

‘Please join Dr. Ariel Eisenberg’s seminar, “American Identities: Disability,” and [the] Interdisciplinary Studies Department for an hour-long conversation with Damien Williams on disability and the normalization of technology usage, “means-well” technological innovation, “inspiration porn,” and other topics related to disability and technology.’

It was kind of an extemporaneous riff on my piece “On the Ins and Outs of Human Augmentation,” and it gave me the opportunity to namedrop Ashley Shew, Natalie Kane, and Rose Eveleth.

The outline looked a little like this:

  • Foucault and Normalization
    • Tech and sociological pressures to adapt to the new
      • Starts with Medical tech but applies Everywhere: Facebook, Phones, Etc.
  • Zoltan Istvan: In the Transhumanist Age, We Should Be Repairing Disabilities, Not Sidewalks
  • All Lead To: Ashley Shew’s “Up-Standing Norms”
    • Listening to the Needs and Desires of people with disabilities.
      • See the story Shew tells about her engineering student, as related in the AFWTA Essay
    • Inspiration Porn: What is cast by others as “Triumphing” over “Adversity” is simply adapting to new realities.
      • Placing the burden on the disabled to be an “inspiration” is dehumanizing;
      • means those who struggle “have no excuse;”
      • creates conditions for a “who’s got it worse” competition
  • John Locke’s Empiricism: Primary and Secondary Qualities
    • Primary qualities of biology and physiology lead to secondary qualities of society and culture
      • Gives rise to Racism and Ableism, when it later combines with misapplied Darwinism to be about the “Right Kinds” of bodies and minds.
        • Leads to Eugenics: Forced sterilization, medical murder, operating and experimenting on people without their knowledge or consent.
          • “Fixing” people to make them “normal, again”
  • Natalie Kane’s “Means Well Technology”
    • Design that doesn’t take into account the way that people will actually live with and use new tech.
      • The way tech normalizes is never precisely the way designers want it to
        • William Gibson’s quote “The street finds its own uses for things.”
  • Against Locke: Embrace Phenomenological Ethics and Epistemology (Feminist Epistemology and Ethics)
    • Lived Experience and embodiment as crucial
    • The interplay of Self and Society
  • Ship of Theseus: Identity, mind, extensions, and augmentations change how we think of ourselves and how society thinks of us
    • See the story Shew tells about her friend with the hemipelvectomy, as related in the aforementioned AFWTA Essay

The whole thing went really well (though, thinking back, I’m not super pleased with my deployment of Dennett). Including Q&A, we got about an hour and forty minutes of audio, available at the embed and link above.

Also, I’m apparently the guy who starts off every talk with some variation on “This is a really convoluted interplay of ideas, but bear with me; it all comes together.”

The audio transcript is below the cut. Enjoy.


Last week, Artsy.net’s Izabella Scott wrote this piece about how and why the aesthetic of witchcraft is making a comeback in the art world. The timing is pretty pleasant: not only are we all eagerly awaiting Kim Boekbinder’s NOISEWITCH, but I also just sat down with Rose Eveleth for the Flash Forward Podcast, to talk for her season 2 finale.

You see, Rose did something a little different this time. Instead of writing up a potential future and then talking to a bunch of amazing people about it, like she usually does, this episode’s future was written by an algorithm. Rose trained an algorithm called Torch not only on the text of all of the futures from both Flash Forward seasons, but also on the full scripts of both the War of the Worlds and the 1979 Hitchhiker’s Guide to the Galaxy radio plays. It’s unsurprising, then, that part of what the algorithm wanted to talk about was space travel and Mars. What is genuinely surprising, however, is that it also wanted to talk about Witches.

Because so far as either Rose or I could remember, witches aren’t mentioned anywhere in any of those texts.
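For the curious: the basic recipe behind this sort of experiment is character-level language modeling. The model reads the training text one character at a time, learns to predict the next character, and then generates new text by sampling from its own predictions. Below is a minimal sketch of that idea in PyTorch; Rose’s setup used Torch, and every specific here (the corpus file name, model size, training length) is a hypothetical stand-in, not her actual code.

```python
# Minimal character-level language model sketch (PyTorch).
# All specifics below are invented placeholders, not the Flash Forward setup.
import torch
import torch.nn as nn

# Assumed corpus: the training texts concatenated into one plain-text file.
text = open("corpus.txt", encoding="utf-8").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
data = torch.tensor([stoi[c] for c in text])

seq_len = 128
for step in range(1000):  # toy training loop; real runs train far longer
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)          # input characters
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # same sequence, shifted by one
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generation: feed the model its own output, one character at a time.
with torch.no_grad():
    idx = torch.tensor([[stoi[text[0]]]])
    state, out = None, []
    for _ in range(500):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1)
        idx = nxt.unsqueeze(0)
        out.append(chars[nxt.item()])
    print("".join(out))
```

Because a model like this recombines its training text at the level of character statistics, its output can contain words and themes that never appear verbatim in any source, which is part of why the Witches are so striking.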

ANYWAY, the finale episode is called “The Witch Who Came From Mars,” and the ensuing exegeses of the Bradbury-esque results of this experiment, by several very interesting people and by me, are kind of amazing. No one took exactly the same thing from the text, and the more we heard from each other, the more we started to weave threads together into a meta-narrative.

Episode 20: The Witch Who Came From Mars

It’s really worth your time, and if you subscribe to Rose’s Patreon, then not only will you get immediate access to the full transcript of that show, but also to the full interview she did with PBS Idea Channel’s Mike Rugnetta. They talk a great deal about whether we will ever deign to refer to the aesthetic creations of artificial intelligences as “Art.”

And if you subscribe to my Patreon, then you’ll get access to the full conversation between Rose and me, appended to this week’s newsletter, “Bad Month for Hiveminds.” Rose and I talk about the nature of magick and technology, the overlaps and intersections of intention and control, and what exactly it is we might mean by “behanding,” the term that shows up throughout the AI’s piece.

And just because I don’t give a specific shoutout to Thoth and Raven doesn’t mean I forgot them. Very much didn’t forget about Raven.

Also speaking of Patreon and witches and whatnot, current $1+ patrons have access to the full first round of interview questions I did with Eliza Gauger about Problem Glyphs. So you can get in on that, there, if you so desire. Eliza is getting back to me with their answers to the follow-up questions, and then I’ll go about finishing up the formatting and publishing the full article. But if you subscribe now, you’ll know what all the fuss is about well before anybody else.

And, as always, there are other ways to provide material support, if a long-term subscription isn’t your thing.

Until Next Time.


If you liked this piece, consider dropping something in the A Future Worth Thinking About Tip Jar

In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you might think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what they’re doing, here, and are creating a situation that is going to have repercussions they have nowhere near planned for.

Now, I will say that, looking at the actual Draft, it reads like something with which I’d be more likely to be on board. Reuters did no favours whatsoever for the level of nuance in this proposal. That being said, the focus of this draft proposal seems to be entirely on liability and holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still misses many of the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with Human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from a PopSci.com article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some predetermined vague notion of humanness. If the people in the poetry contest hadn’t been primed to assume that one of them was from a computer, how would they have rated them? What if they were all from a computer, but were told to expect only half? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth’s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’
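To make that concrete, here is a deliberately trivial, invented example (mine, not from any of the pieces above) of how an unexamined assumption becomes an encoded bias the moment it ships:

```python
# An invented, deliberately trivial illustration: nothing here is malicious,
# but each line encodes its author's assumptions about the world.
import re

def validate_name(name: str) -> bool:
    # Quietly assumes names are single words of unaccented ASCII letters,
    # which rejects "José", "O'Brien", "Nguyễn Thị Anh", and much of the world.
    return re.fullmatch(r"[A-Za-z]+", name) is not None

print(validate_name("Smith"))  # True
print(validate_name("José"))   # False: the author's perspective, now enforced in code
```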

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that, in order to even let us dig into these potentially deeply-biased algorithms here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far-right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs. needs, all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

Episode 10: Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990’s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:


Until Next Time.