This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however, I can tell you what we talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.
I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it elsewhere. I’ll just say that the question seems to miss a possibly central point: something like consciousness is, and what it is will be different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we can communicate with it, what will we owe it, and what would it be able to ask of us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?
So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions: