[This is an in-process pre-print of an as-yet-unpublished paper, a version of which was presented at the Gender, Bodies, and Technology 2019 Conference.]


The history of biotechnological intervention on the human body has always been tied to conceptual frameworks of disability and mental health, but certain biases and assumptions have forcibly altered and erased the public awareness of that understanding. As humans move into a future of climate catastrophe, space travel, and constantly shifting understandings of our place in the world, we will be increasingly confronted with concerns over who will be used as research subjects, concerns over whose stakeholder positions will be acknowledged and preferenced, and concerns over the kinds of changes that human bodies will necessarily undergo as they adapt to their changing environments, be they terrestrial or interstellar. Who will be tested, and how, so that we can better understand what kinds of bodyminds will be “suitable” for our future modes of existence?[1] How will we test the effects of conditions like pregnancy and hormone replacement therapy (HRT) in space, and what will happen to our bodies and minds after extended exposure to low light, zero gravity, high-radiation environments, or the increasing warmth and wetness of our home planet?

During the June 2018 “Decolonizing Mars” event at the Library of Congress in Washington, DC, several attendees discussed the fact that the bodyminds of disabled folx might be better suited to space life, already being accustomed to pushing off of surfaces and orienting themselves to the world in different ways, and that the integration of body and technology wouldn’t be anything new for many people with disabilities. In that context, I submit that cyborgs and space travel are, always have been, and will continue to be about disability and marginalization, but that Western society’s relationship to disabled people has created a situation in which many people do everything they can to conceal that fact from the popular historical narratives about what it means for humans to live and explore. In order to survive and thrive, into the future, humanity will have to carefully and intentionally take this history up, again, and consider the present-day lived experience of those beings—human and otherwise—whose lives are and have been most impacted by the socioethical contexts in which we talk about technology and space.

This paper explores some history and theories about cyborgs—humans with biotechnological interventions which allow them to regulate their own internal bodily processes—and how those compare to the realities of how we treat and consider currently-living people who are physically enmeshed with technology. I’ll explore several ways in which the above-listed considerations have been alternately overlooked and taken up by various theorists, and some of the many different strategies and formulations for integrating these theories into what will likely become everyday concerns in the future. In fact, by exploring responses from disability studies scholars and artists who have interrogated and problematized the popular vision of cyborgs, the future, and life in space, I will demonstrate that our clearest path toward the future of living with biotechnologies is a reengagement with the everyday lives of disabled and other marginalized persons, today.

Continue Reading

2017 SRI Technology and Consciousness Workshop Series Final Report

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops exploring the current state of the field around, the potential future paths toward, and the moral and social implications of the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.

[Image of my name card from the Technology & Consciousness workshop series: a rectangular card with a stylized “Technology & Consciousness” logo at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right, a blurry, wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left, all partially mirrored in a surface at the bottom of the image.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes?” Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.

All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn’t tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.

Continue Reading

[This paper was prepared for the 2019 Towards Conscious AI Systems Symposium co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series.

Much of this work derived from my final presentation at the 2017 SRI Technology and Consciousness Workshop Series: “Science, Ethics, Epistemology, and Society: Gains for All via New Kinds of Minds”.]

Abstract. This paper explores the moral, epistemological, and legal implications of multiple different definitions and formulations of human and nonhuman consciousness. Drawing upon research from race, gender, and disability studies, including the phenomenological basis for knowledge and claims to consciousness, I discuss the history of the struggles for personhood among different groups of humans, as well as nonhuman animals, and systems. In exploring the history of personhood struggles, we have a precedent for how engagements and recognition of conscious machines are likely to progress, and, more importantly, a roadmap of pitfalls to avoid. When dealing with questions of consciousness and personhood, we are ultimately dealing with questions of power and oppression as well as knowledge and ontological status—questions which require a situated and relational understanding of the stakeholders involved. To that end, I conclude with a call and outline for how to place nuance, relationality, and contextualization before and above the systematization of rules or tests, in determining or applying labels of consciousness.

Keywords: Consciousness, Machine Consciousness, Philosophy of Mind, Phenomenology, Bodyminds

[Overlapping images of an Octopus carrying a shell, a Mantis Shrimp on the sea floor, and a Pepper Robot]

Continue Reading

We do a lot of work and have a lot of conversations around here with people working on the social implications of technology, but some folx sometimes still don’t quite get what I mean when I say that our values get embedded in our technological systems, and that the values of most internet companies, right now, are capitalist brand engagement and marketing. To that end, I want to take a minute to talk to you about something that happened this week. Just a heads-up: this conversation is going to mention sexual assault and the sexually predatory behaviour of men toward young girls.
Continue Reading

Hello from the Southern Blue Mountains of These Soon To Be Re-United States.

I realised that, for someone whose work and life and public face is as much about magic as mine is, I haven’t done a lot of intention- and will-setting, here. That is, I haven’t stated and formulated with a clear mind and intention what I want and will work to bring into existence.

Now, there are a lot of magical schools of thought that go in for the power of setting your intention, abstracting it out from yourself, and putting it into the universe as a kind of unconscious signal. Sigilizing, “let go and let god,” that whole kind of thing.

But there’s also something to be said for just straight-up clearly formulating the concepts and words to state what you want, and fixing your mind on what it will take to achieve. So here’s what I want, in 2019.

I want more people to understand, accept, and actualize the truth of Dr Safiya Noble’s statement that “If you are designing technology for society, and you don’t know anything about society, you are deeply unqualified” and Deb Chachra’s corollary that “Whether you realise it or not, the technology you’re designing *is* for society.” So what’s that mean to me? It means that I want technologists, designers, academics, politicians, and public personalities to start taking seriously the notion that we need truly interdisciplinary social-science- and humanities-focused programs at every level of education, community organization, and governance if we want the tools and systems which increasingly influence and shape our lives built, sustained, and maintained in beneficial ways.

And what do I mean by “beneficial,” here? I mean I want these systems to not replicate and iterate on prejudicial, bigoted human biases, and I want them to actively reduce the harm done by those things. I mean I want tools and systems crafted and laws drafted not just by some engineer who took an ethics class once or some politician who reads WIRED, every so often, but by collaborative teams of people with interoperable kinds of knowledge and lived experience. I mean I want politicians recognizing that the vast majority of people do not in fact understand Google’s or Facebook’s or Amazon’s algorithms or intentions, and that that is, in large part, because the people in charge of those entities do not want us to understand them.

I want people to heed those who try to trace a line from our history of using new technologies and new scientific models and stances to marginalize wide swathes of people, and I want those people who understand and research that to come together and work on something different, to build systems and carve paths that allow us to push back against the entrenched, deep-cut, least-resisting, lowest-common-denominator shit which constitutes the way that power and prejudice and assumptions thereof (racism, [dis-]ableism, sexism, homophobia, misogyny, fatphobia, xenophobia, transphobia, colourism, etc.) act on and shape and are the world in which we live.

I want us all to deconstruct and resist and dismantle the oppressive bullshit in our lives and the lives of those we love, and I want to build techno-socio-ethico-political systems built on compassion and care and justice and an engaged, intentional way of living. I want to be able to clearly communicate the good of that. I want people to easily understand the good in that, and understand the ease of that good, if we take the strengths of our alterities, our differing phenomenologies, our experiential othernesses, and respect them and weave them together into a mutually supportive interoperable whole.

I want to publish papers about these things, and I want people to read and appreciate them. I want to clearly demonstrate the linkages I see and make them into a compelling lens through which to act in the world.

I want to create beauty and joy and refuge and shelter for those who need it and I want to create a deep, rending, claws-and-teeth-filled defense against those who would threaten it, and I want those billion-toothed maws and gullets and the pressure of those thousand-thousand-thousand eyes to act as a catalyst for those who would be willing to transform themselves. I want to build a community of people who are safe and cared-for and who understand the value of compassion, and understand that sometimes compassion means you burn a motherfucker to the ground.

I want to strengthen the bonds that need strengthening, and I want the strength to sever any that need severing. I want, as much as possible, for those to be the same, for me, as they are for the other people involved.

I want to push back as meaningfully as we still can against the worst of what’s coming from what humans have done to this planet’s climate, and I want to do that from the position of safeguarding the most vulnerable among us, and I want to do so with an understanding that whatever we do, Right Now, is just a small interim step to buy us enough time to do the really hard systemic shit, as we move forward.

I want people to realize their stock options won’t stop them from suffocating to death in the 140°F/60°C heat and I want people to realize that there’s no money in Heaven and that even if there was, from all I read, that Joshua guy and his Dad don’t take too kindly to people who hurt the poor and marginalized or who wreck the place they were told to steward.

I want people to realise that the people who need to realize those things are the same sets of people.

I want a clear mind and a full heart and the will to take care of myself enough to keep trying to help make these things happen.

I want you happy and healthy and whole, however you define that for you.

I want Alexandria Ocasio-Cortez to begin what will be a five-year process of digging deep into both the communities she represents and the DC machinery (It has to be both). (And then I want her to get a place in the cabinet of whatever left-leaning progressive is in the White House in 2020, and help them win again in 2024. And then I want her to win in 2028.)

I want every person in a position of power to realize that they need to consult and heed the people and systems over whom they have power, to truly understand their needs.

I want everyone to have their individual and collective basic survival needs met so they can experience what that does for the scope of their desires and what they believe is possible.

I want the criminal indictment of the (at time of this writing) Current Resident of the Oval Office and every high level politician who enabled his position. I want the people least likely to understand and accept why this is necessary to quickly and fully understand and accept that this is necessary.

I want a just and kind and compassionate world and I want to be active within it.

I want to know what you want.

So what do you want?

As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the retreat, then you’re in luck, because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.

I am deeply grateful to Ben Byford for asking me to sit down and talk about this with him. I talk a great deal, and am surprisingly able to (cogently?) get on almost all of my bullshit—technology and magic and the occult, nonhuman personhood, the sham of gender and race and other social constructions of expected lived categories, the invisible architecture of bias, neurodiversity, and philosophy of mind—in a rather short window of time. So that’s definitely something…

Continue Reading

Kirsten and I spent the week between the 17th and the 21st of September with 18 other utterly amazing people having Chatham House Rule-governed conversations about the Future of Artificial Intelligence.

We were in Norway, at the Juvet Landscape Hotel, which is where they filmed a lot of the movie Ex Machina, and it is even more gorgeous in person. (None of the rooms shown in the film share a single building space.) It’s astounding as a place of both striking architectural sensibility and natural integration: they built every structure in the winter, letting the dormancy cycles of the plants and animals dictate when and where they could build, rather than cutting anything down.

And on our first full day here, Two Ravens flew directly over my and Kirsten’s heads.


[Image of a rainbow rising over a bend in a river across a patchy overcast sky, with the river going between two outcropping boulders, trees in the foreground and on either bank and stretching off into the distance, and absolutely enormous mountains in the background]

I am extraordinarily grateful to Andy Budd and the other members of the Clearleft team for organizing this, and to Cennydd Bowles for opening the space for me to be able to attend, and for being so forcefully enthused about the prospect of my attending that he came to me with a full set of strategies in hand to get me to this place. Having someone in your corner like that means the world for a whole host of personal reasons, but also for more general psychological and social ones, as well.

I am a fortunate person. I am a person who has friends and resources and a bloody-minded stubbornness that means that when I determine to do something, it will more likely than not get fucking done, for good or ill.

I am a person who has been given opportunities to be in places many people will never get to see, and have conversations with people who are often considered legends in their fields, and start projects that could very well alter the shape of the world on a massive scale.

Yeah, that’s a bit of a grandiose statement, but you’re here reading this, and so you know where I’ve been and what I’ve done.

I am a person who tries to pay forward what I have been given and to create as many spaces for people to have the opportunities that I have been able to have.

I am not a monetarily wealthy person, measured against my society, but my wealth and fortune are things that strike me still and make me take stock of it all and what it can mean and do, all over again, at least once a week, if not once a day, as I sit in tension with who I am, how the world perceives me, and what amazing and ridiculous things I have had, been given, and created the space to do, because and in violent spite of it all.

So when I and others come together and say we’re going to have to talk about how intersectional oppression and the lived experiences of marginalized peoples affect, effect, and are affected and effected BY the wider technoscientific/sociotechnical/sociopolitical/socioeconomic world and what that means for how we design, build, train, rear, and regard machine minds, then we are going to have to talk about how intersectional oppression and the lived experiences of marginalized peoples affect, effect, and are affected and effected by the wider technoscientific/sociotechnical/sociopolitical/socioeconomic world and what that means for how we design, build, train, rear, and regard machine minds.

So let’s talk about what that means.

Continue Reading

So the U.S. Transhumanist Party recently released some demographic info on their first 1,000 members, and while they seem to be missing some rather crucial demographic markers, here, such as age and ethnicity, the gender breakdown is about what you’d expect.

I mention this because back at the end of June I attended the Decolonizing Mars Unconference, at the Library of Congress in D.C. It was the first time I had been in those buildings since I was a small child, and it was for such an amazing reason.

We discussed many topics, all in the interest of considering what it would really mean to travel through space to another planet, and to put humans and human interests there, long-term. Fundamentally, our concern was: is it even possible to do all of this without reproducing the worst elements of the colonialist projects we’ve seen on Earth, thus far, and if so, how do we do that?

[Image of Mars as seen from space, via JPL]

Continue Reading

Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.

The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as getting better at dealing with this in the university and public policy sectors, in America, rather than continuing to get seemingly worse, as we have so far.

Here’s the schedule. Full notes, below the cut.

Friday, June 8, 2018

  • Josh Brown on “the distinction between passive and active AI.”
  • Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
  • Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
  • Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
  • Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
  • Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and how this likely influences the ‘education’ of AI systems.”
  • Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.

Saturday, June 9, 2018

  • Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
  • Ras Michael Brown on using the religions technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
  • Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
  • Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
  • Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
  • Emma Stamm on the datafication of the self, and what about us might be uncomputable.
  • Joshua Earle on “Morphological Freedom.”

Continue Reading

I talked with Hewlett Packard Enterprise’s Curt Hopkins, for their article “4 obstacles to ethical AI (and how to address them).” We spoke about the kinds of specific tools and techniques by which people who populate or manage artificial intelligence design teams can incorporate expertise from the humanities and social sciences. We also talked about compelling reasons why they should do this, other than the fact that they’re just, y’know, very good ideas.

From the Article:

To “bracket out” bias, Williams says, “I have to recognize how I create systems and code my understanding of the world.” That means making an effort early on to pay attention to the data entered. The more diverse the group, the less likely an AI system is to reinforce shared bias. Those issues go beyond gender and race; they also encompass what you studied, the economic group you come from, your religious background, all of your experiences.

That becomes another reason to diversify the technical staff, says Williams. This is not merely an ethical act. The business strategy may produce more profit because the end result may be a more effective AI. “The best system is the one that best reflects the wide range of lived experiences and knowledge in the world,” he says.
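To make that point from the article a bit more concrete, here is a deliberately tiny, hypothetical sketch of my own (not from the article, and not any real hiring system): a naive “model” that just memorizes the most common historical outcome for each group. Trained on skewed historical data, it faithfully reproduces the skew, which is exactly the kind of thing a homogeneous team is least likely to notice.

```python
from collections import Counter, defaultdict

def train_lookup_model(examples):
    """Toy 'model': for each group, memorize the most common
    historical outcome. No learning library needed; the point is
    that the 'learning' is just replaying the training data."""
    by_group = defaultdict(Counter)
    for features, label in examples:
        by_group[features["group"]][label] += 1
    return lambda features: by_group[features["group"]].most_common(1)[0][0]

# Hypothetical historical hiring data encoding a past prejudice:
# group "A" was usually hired, group "B" was usually rejected.
history = (
    [({"group": "A"}, "hire")] * 80 + [({"group": "A"}, "reject")] * 20 +
    [({"group": "B"}, "hire")] * 20 + [({"group": "B"}, "reject")] * 80
)

model = train_lookup_model(history)
print(model({"group": "A"}))  # "hire"
print(model({"group": "B"}))  # "reject" — the old bias, now automated
```

The fix here isn’t just “more data”; it’s having people in the room who recognize that the historical labels themselves encode the prejudice — which is the diverse, interdisciplinary expertise the article argues for.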

[Image of two blank, white, eyeless faces, partially overlapping each other.]

To be clear, this is an instance in which I tried to find capitalist reasons that would convince capitalist people to do the right thing. To that end, you should imagine that all of my sentences start with “Well if we’re going to continue to be stuck with global capitalism until we work to dismantle it…” Because they basically all did.

I get how folx might think that framing would be a bit of a buzzkill for a tech industry audience, but I do want to highlight and stress something: Many of the ethical problems we’re concerned with mitigating or ameliorating are direct products of the capitalist system in which we are making these choices and building these technologies.

All of that being said, I’m not the only person there with something interesting to say, and you should go check out the rest of my and other people’s comments.

Until Next Time.