Late last month, I was at Theorizing the Web, in NYC, to moderate Panel B3, "Bot Phenomenology," and I was very grateful and lucky to be able to bring this group of people together. Johnathan Flowers, Emma Stamm, and Robin Zebrowski were my interlocutors in a discussion about the potential nature of nonbiological phenomenology. Machine consciousness. What robots might feel.
I led them through with questions like "What do you take phenomenology to mean?" and "What do you think of the possibility of a machine having a phenomenology of its own?" We discussed differing definitions of "language," "communication," and "body," though we unfortunately didn't get to discuss how, under certain definitions of those terms, what would count as language between two cats would be mere signalling when a cat communicates with humans.
It was a really great conversation, and the livestream video is here, and embedded below. (For now, at least; it may eventually be replaced by a static YouTube link, and when that happens, I will update the links and embeds here.)
Though “What It’s Like To Be a Bot” starts from the specific proposition of bots, it is actually a more general investigation of the proposition of nonhuman and nonbiological consciousness and experience, and the question of what it means to think of someone as a person.
From the Article:
We are minds in bodies, and bodyminds in the world in which we live, and consciousnesses in the world and relationships we create. Any proposed set of physiological and neurological bases for consciousness will not be able to adequately describe what we are or what we observe in all cases. Just as some humans are born with completely different structures of brains than others and still have what we think of as consciousness, so too must we be prepared for nonbiological components to act as potential substrates of consciousness. There may not be any particular thing that makes humans uniquely conscious, or any single organizational structure that is universally necessary for consciousness in any sort of being.
If there is no one configuration of physical form and experiential knowledge that gives rise to consciousness, there cannot be any single test for consciousness either: The Turing Test itself fails. A statistically significant number of humans fail such tests for "normal" personhood. The claim that there must be one and only one "right" way to exist opens the door to eugenics and other forms of bigotry. Throughout history, many have been excluded from definitions of personhood based on who is accepted by the local or wider community or who enjoys legal rights and protections. We've seen this happen with African Americans, indigenous peoples, women, disabled people, neurodivergent folks, and LGBTQIA people. Some are still denied personhood to this day.
There’s much more like that, at the link, and, again, this will be a part of a much longer article, if not dissertation chapter, in the very near future, so stay tuned.
In addition to all of that, I have another quote about the philosophical and sociopolitical implications of machine intelligence in this extremely well-written piece by K.G. Orphanides at WIRED UK. From the Article:
Williams, a specialist in the ethics and philosophy of nonhuman consciousness, argues that such systems need to be built differently to avoid a corporate race for the best threat analysis and response algorithms which [will be] likely to [see the world as] a "zero-sum game" where only one side wins. "This is not a perspective suited to devise, for instance, a thriving flourishing life for everything on this planet, or a minimisation of violence and warfare," he adds.
And there's much more about this, from many others, at that link. Well worth your time, if you haven't already read it.
Until Next Time.