philosophy

All posts tagged philosophy

So by now you’re likely to have encountered something about the NYT Op-Ed Piece calling for a field of study that focuses on the impact of AI and algorithmic systems, a stance that elides the existence of not only communications and media studies people who focus on this work, but the whole entire disciplines of Philosophy of Technology and STS (rendered variously as “Science and Technology Studies” or “Science Technology and Society,” depending on a number of factors, but if you talk about STS, you’ll get responses from all of the above, about the same topics). While Dr. O’Neil has since tried to reframe this editorial as a call for businesses, governments, and the public to pay more attention to those people and groups, many have observed that such an argument exists nowhere in the article itself. Instead what we have are lines claiming that academics (seemingly especially those in the humanities) are “asleep at the wheel.”

Instead of “asleep at the wheel” try “painfully awake on the side of the road at 5am in a part of town lyft and uber won’t come to, trying to flag down a taxi driver or hitchhike or any damn thing just please let me make this meeting so they can understand some part of what needs to be done.”* The former ultimately frames the humanities’ and liberal arts’ lack of currency and access as “well why aren’t you all speaking up more.” The latter gets more to the heart of “I’m sorry we don’t fund your departments or engage with your research or damn near ever heed your recommendations that must be so annoying for you oh my gosh.”

But Dr O’Neil is not the only one to write or say something along these lines—that there is somehow no one out here doing, or that there ought to be someone doing, the work of investigating algorithmic bias, or infrastructure/engineering ethics, or any number of other things that people in philosophy of technology and STS are definitely already out here talking about. So I figured this would be, at the least, a good opportunity to share with you something discussing the relationship between science and technology, STS practitioners’ engagement with the public, and the public’s engagement with technoscience. Part 1 of who knows how many.

[Cover of the journal Techné: Research in Philosophy and Technology]

The relationship between technology and science is one in which each intersects with, flows into, shapes, and affects the other. Not only this, but both science and technology shape and are shaped by the culture in which they arise and take part. Viewed through the lens of the readings we’ll discuss, it becomes clear that many scientists and investigators at one time desired a clear-cut relationship between science and technology in which one flows from the other, with the properties of the subcategory being fully determined by those of the framing category, and in which sociocultural concerns play no part.

Many investigators still want this clarity and certainty, but in the time since sociologists, philosophers, historians, and other investigators from the humanities and so-called “soft sciences” began looking at the history and contexts of the methods of science and technology, it has become clear that these latter activities do not work in an even and easily rendered way. When we look at the work of Sergio Sismondo, Trevor J. Pinch and Wiebe E. Bijker, Madeleine Akrich, and Langdon Winner, we can see that the social dimensions and intersections of science, culture, technology, and politics are and always have been crucially entwined.

In Winner’s seminal “Do Artifacts Have Politics?” (1980), we can see what counts as a major step forward along the path toward a model which takes seriously the social construction of science and technology, and the way in which we go about embedding our values, beliefs, and politics into the systems we make. On page 127, Winner states,

The things we call “technologies” are ways of building order in our world… Consciously or not, deliberately or inadvertently, societies choose structures for technologies that influence how people are going to work, communicate, travel, consume, [etc.]… In the processes by which structuring decisions are made, different people … possess unequal degrees of power [and] levels of awareness.

By this, Winner means to say that everything we do in the construction of the culture of scientific discovery and technological development is modulated by the sociocultural considerations that get built into them, and those constructed things go on to influence the nature of society, in turn. As a corollary to this, we can see a frame in which the elements within the frame—including science and technology—will influence and modulate each other, in the process of generating and being generated by the sociopolitical frame. Science will be affected by the tools it uses to make its discoveries, and the tools we use will be modulated and refined as our understandings change.

Pinch and Bijker write very clearly about the multidirectional interactions of science, technology, and society in their 1987 piece, “The Social Construction of Facts and Artifacts” (in The Social Construction of Technological Systems), using the history of the bicycle as their object of study. Through their investigation of the messy history of bicycles, “safety bicycles,” inflated rubber tires, bicycle racing, and PR ad copy, Pinch and Bijker show that science and technology aren’t clearly distinguished anymore, if they ever were. They show how scientific studies of safety were less influential on bicycle construction and adoption than the social perception of the devices, meaning that politics and public perception play a larger role in what gets studied, created, and adopted than we used to admit.

They go on to highlight a kind of multidirectionality and interpretive flexibility, which they say we achieve by looking at the different social groups that intersect with the technology, and the ways in which they do so (pg. 34). When we do this, we will see that each component group is concerned with different problems and solutions, and that each innovation made to address these concerns alters the landscape of the problem space. How we define the problem dictates the methods we will use and the technology that we create to seek a solution to it.

[Black and white figures comparing the frames of a Whippet Spring Frame bicycle (left) and a Singer Xtraordinary bicycle (right), from “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other” by Trevor J. Pinch and Wiebe E. Bijker, 1987]


Akrich’s 1992 “The De-Scription of Technical Objects” (published, perhaps unsurprisingly, in a volume co-edited by Bijker), engages the moral valences of technological intervention, and the distance between intent in design and “on the ground” usage. In her investigation of how people in Burkina Faso, French Polynesia, and elsewhere make use of technology such as generators and light boxes, we again see a complex interplay between the development of a scientific or technological process and the public adoption of it. On page 221 Akrich notes, “…the conversion of sociotechnical facts into facts pure and simple depends on the ability to turn technical objects into black boxes. In other words, as they become indispensable, objects also have to efface themselves.” That is, in order for the public to accept the scientific or technological interventions, those interventions had to become an invisible part of the framework of the public’s lives. Only when the public no longer had to think about these interventions did they become paradoxically “seen,” understood, as “good” science and technology.

In Sismondo’s “Science and Technology Studies and an Engaged Program” (2008) he spends some time discussing the social constructivist position that we’ve begun laying out, above—the perspective that everything we do and all the results we obtain from the modality of “the sciences” are constructed in part by that mode. Again, this would mean that “constructed” would describe both the data we organize out of what we observe, and what we initially observe at all. From page 15, “Not only data but phenomena themselves are constructed in laboratories—laboratories are places of work, and what is found in them is not nature but rather the product of much human effort.”

But Sismondo also says that this is only one half of the picture, then going on to discuss the ways in which funding models, public participation, and regulatory concerns can and do alter the development and deployment of science and technology. On page 19 he discusses a model developed in Denmark in the 1980s:

Experts and stakeholders have opportunities to present information to the panel, but the lay group has full control over its report. The consensus conference process has been deemed a success for its ability to democratize technical decision-making without obviously sacrificing clarity and rationality, and it has been extended to other parts of Europe, Japan, and the United States…

This all merely highlights the fact that, if the public is going to be engaged, then the public ought to be as clear and critical as possible in its understanding of the exchanges that give rise to the science and technology on which they are asked to comment.

The non-scientific general public’s understanding of the relationship between science and technology is often characterized much as I described at the beginning of this essay. That is, it is often said that the public sees the relationship as a clear and clean move from scientific discoveries or breakthroughs to a device or other application of those principles. However, this casting does not take into account the variety of things that the public will often call technology, such as the Internet, mobile phone applications, autonomous cars, and more.

While there are scientific principles at play within each of those technologies, it still seems a bit bizarre to cast them merely as “applied science.” They are not all devices or other single physical instantiations of that application, and even those that are singular are the applications of multiple sciences, and also concrete expressions of social functions. Those concretions have particular psychological impacts, and philosophical implications, which need to be understood by both their users and their designers. Every part affects every other part, and each of those parts is necessarily filtered through human perspectives.

The general public needs to understand that every technology humans create will necessarily carry within it the hallmarks of human bias. Regardless of whether there is an objective reality at which science points, the sociocultural and sociopolitical frameworks in which science gets done will influence what gets investigated. Those same sociocultural and sociopolitical frameworks will shape the tools and instruments and systems—the technology—used to do that science. What gets done will then become a part of the scientific and technological landscape to which society and politics will then have to react. In order for the public to understand this, we have to educate about the history of science, the nature of social scientific methods, and the impact of implicit bias.

My own understanding of the relationship between science and technology is as I have outlined: A messy, tangled, multivalent interaction in which each component influences and is influenced by every other component, in near simultaneity. This framework requires a willingness to engage multiple perspectives and disciplines, and to perhaps reframe the normative project of science and technology to one that appreciates and encourages a multiplicity of perspectives, and no single direction of influence between science, technology and society. Once people understand this—that science and technology generate each other while influencing and being influenced by society—we can do the work of engaging them in a nuanced and mindful way, working together to prevent the most egregious depredations of technoscientific development, or at least to agilely respond to them, as they arise.

But to do this, researchers in the humanities need to be heeded. In order to be heeded, people need to know that we exist, and that we have been doing this work for a very, very long time. The named field of Philosophy of Technology has been around for 70 years, and it in large part foregrounded the concerns taken up and explored by STS. Here are just a few names of people to look at in this extensive history: Martin Heidegger, Bruno Latour, Don Ihde, Ian Hacking, Joe Pitt, and more recently, Ashley Shew, Shannon Vallor, Robin Zebrowski, John P. Sullins, John Flowers, Matt Brown, Shannon Conley, Lee Vinsel, Jacques Ellul, Andrew Feenberg, Batya Friedman, Geoffrey C. Bowker and Susan Leigh Star, Rob Kling, Phil Agre, Lucy Suchman, Joanna Bryson, David Gunkel, so many others. Langdon Winner published “Do Artifacts Have Politics” 37 years ago. This episode of the You Are Not So Smart podcast features Shannon Vallor, Alistair Croll, and me, all talking about the public impact of the aforementioned.

What I’m saying is that many of us are trying to do the work, out here. Instead of pretending we don’t exist, try using large platforms (like the NYT opinion page and well-read blogs) to highlight the very real work being attempted. I know for a fact the NYT has received article submissions about philosophy of tech and STS. Engage them. Discuss these topics in public, and know that there are many voices trying to grapple with and understand this world, and we have been, for a really damn long time.

So you see that we are still talking about learning and thinking in public. About how we go about getting people interested and engaged in the work of the technology that affects their lives. But there is a lot at the base of all this about what people think of as “science” or “expertise” and where they think that comes from, and what they think of those who engage in or have it. If we’re going to do this work, we have to be able to have conversations with people who not only don’t value what we do, but who think what we value is wrongheaded, or even evil. There is a lot going on in the world, right now, with regard to science and knowability. For instance, late last year there was a revelation about the widespread use of dowsing by UK water firms (though if you ask anybody in the US, you’ll find it’s still in use here, too).

And then this guy was trying to use systems of fluid dynamics and aeronautics to launch himself in a rocket to prove that the earth is flat and that science isn’t real. Yeah. And while there’s a much deeper conversation to be had here about whether the social construction of the category of “science” can be understood as distinct from a set of methodologies and formulae, I really don’t think this guy is interested in having that conversation.

So let’s also think about the nature of how laboratory science is constructed, and what it can do for us.

In his 1983 “Give Me a Laboratory and I Will Raise the World,” Bruno Latour makes the claim that labs have their own agency. What Latour is asserting, here, is that the forces which coalesce within the framework of a lab become active agents in their own right. They are not merely subject to the social and political forces that go into their creation, but they are now active participants in the framing and reframing of those forces. He believes that the nature of inscription—the combined processes of condensing, translating, and transmitting methods, findings, and pieces of various knowledges—is a large part of what gives the laboratory this power, and he highlights this when he says:

The strength gained in the laboratory is not mysterious. A few people much weaker than epidemics can become stronger if they change the scale of the two actors—making the microbes big, and the epizootic small—and others dominate the events through the inscription devices that make each of the steps readable. The change of scale entails an acceleration in the number of inscriptions you can get. …[In] a year Pasteur could multiply anthrax outbreaks. No wonder that he became stronger than veterinarians. For every statistic they had, he could mobilize ten of them. (pg. 163–164)

This process of inscription is crucial for Latour; not just for the sake of what the laboratory can do of its own volition, but also because it is the mechanism by which scientists may come to understand and translate the values and concerns of another, which is, for him, the utmost value of science. In rendering the smallest things such as microbes and diseases legible on a large scale, and making largescale patterns individually understandable and reproducible, the presupposed distinctions of “macro” and “micro” are shown to be illusory. Latour believes that it is only through laboratory engagement that we can come to fully understand the complexities of these relationships (pg. 149).

When Latour begins laying out his project, he says sociological methods can offer science the tools to more clearly translate human concerns into a format with which science can grapple. “He who is able to translate others’ interests into his own language carries the day.” (pg. 144). However, in the process of detailing what it is that Pasteurian laboratory scientists do in engaging the various stakeholders in farms, agriculture, and veterinary medicine, it seems that he has only described half of the project. Rather than merely translating the interests of others into our own language, evidence suggests that we must also translate our interests back into the language of our interlocutor.

So perhaps we can recast Latour’s statement as, “whosoever is able to translate others’ interests into their own language and is equally able to translate their own interests into the language of another, carries the day.” Thus we see that the work done in the lab should allow scientists and technicians to increase the public’s understanding both of what it is that technoscience actually does and why it does it, by presenting material that can speak to many sets of values.

Karin Knorr-Cetina’s assertion in her 1995 article “Laboratory Studies: The Cultural Approach to the Study of Science” is that the laboratory is an “enhanced” environment. In many ways this follows directly from Latour’s conceptualization of labs. Knorr-Cetina says that the constructed nature of the lab ‘“improves upon” the natural order,’ because said natural order is, in itself, malleable, and capable of being understood and rendered in a multiplicity of ways (pg. 9). If laboratories are never engaging the objects they study “as they occur in nature,” this means that labs are always in the process of shaping what they study, in order to better study it (ibid). This framing of the engagement of laboratory science is clarified when she says:

Detailed description [such as that done in laboratories] deconstructs—not out of an interest in critique but because it cannot but observe the intricate labor that goes into the creation of a solid entity, the countless nonsolid ingredients from which it derives, the confusion and negotiation that often lie at its origin, and the continued necessity of stabilizing and congealing. Constructionist studies have revealed the ordinary working of things that are black-boxed as “objective” facts and “given” entities, and they have uncovered the mundane processes behind systems that appear monolithic, awe inspiring, inevitable. (pg. 12)

Thus, the laboratory is one place in which the irregularities and messiness of the “natural world” are ordered in such a way as to be able to be studied at all. However, Knorr-Cetina clarifies that “nothing epistemically special” is happening, in a lab (pg. 16). That is, while a laboratory helps us to better recognize nonhuman agents (“actants”) and forces at play in the creation of science, this is merely a fact of construction; everything that a scientist does in a lab is open to scrutiny and capable of being understood. If this is the case, then the “enhancement” gained via the conditions of the laboratory environment is merely a change in degree, rather than a difference in kind, as Latour seems to assert.

[Stock photo image of hundreds of scallops and two scallop fishers on the deck of a boat in the St Brieuc Bay.]

In addition to the above explorations of what the field of laboratory studies has to offer, we can also look at the works of Michel Callon and Sharon Traweek. Though primarily concerned with describing the network of actors and their concerns in St Brieuc Bay scallop-fishing and -farming industries, Callon’s investigation can be seen as an example of Latour’s principle of bringing the laboratory out in the world, both in terms of the subjects of Callon’s investigation and the methods of those subjects. While Callon himself might disagree with this characterization, we can trace the process of selection and enframing of subjects and the investigation of their translation procedures, which we can see on page 20, when he says,

We know that the ingredients of controversies are a mixture of considerations concerning both Society and Nature. For this reason we require the observer to use a single repertoire when they are described. The vocabulary chosen for these descriptions and explanations can be left to the discretion of the observer. He cannot simply repeat the analysis suggested by the actors he is studying. (Callon, 1984)

In this way, we can better understand how laboratory techniques have become a component even of the study and description of laboratories.

When we look at a work like Sharon Traweek’s Beamtimes and Lifetimes, we can see that she finds value in bringing ethnographic methodologies into laboratory studies, and perhaps even laboratory settings. She discusses the history of the laboratory’s influence, arcing back to WWI and WWII, where scientists were tasked with coming up with more and better weapons, with their successes being used to push an ever-escalating arms race. As this process continued, the characteristics of what made a “good lab scientist” were defined and then continually reinforced, as being “someone who did science like those people over there.” In the building of the laboratory community, certain traits and behaviours become seen as ideal, and those who do not match those traits and expectations are regarded as necessarily doing inferior work. She says,

The field worker’s goal, then, is to find out what the community takes to be knowledge, sensible action, and morality, as well as how its members account for unpredictable information, disturbing actions, and troubling motives. In my fieldwork I wanted to discover the physicists’ “common sense” world view, what everyone in the community knows, and what every newcomer needs to learn in order to act in a sensible way, in order to be taken seriously. (pg. 8)

And this is also the danger of focusing too closely on the laboratory: the potential for myopia, for thinking that the best or perhaps even only way to study the work of scientists is to render that work through the lens of the lab.

While the lab is a fantastic tool and studies of it provide great insight, we must remember that we can learn a great deal about science and technology via contexts other than that of the lab. While Latour argues that laboratory science actually destabilizes the inside-the-lab/outside-the-lab distinction by showing that the tools and methods of the lab can be brought anywhere out into the world, it can be said that the distinction is reinstantiated by our focusing on laboratories as the sole path to understanding scientists. Much the same can be said for the insistence that systems engineers are the sole best examples of how to engage technological development. Thinking that labs are the only resource we have means that we will miss the behavior of researchers at conferences, retreats, in journal articles, and other places where the norms of the scientific community are inscribed and reinforced. It might not be the case that scientists understand themselves as creating community rules, in these fora, but this does not necessarily mean that they are not doing so.

The kinds of understandings a group has about themselves will not always align with what observations and descriptions might be gleaned from another’s investigation of that group, but this doesn’t mean that one of those has to be “right” or “true” while the other is “wrong” and “false.” The interest in studying a discipline should come not from that group’s “power” to “correctly” describe the world, but from understanding more about what it is about whatever group is under investigation that makes it itself. Rather than seeking a single correct perspective, we should instead embrace the idea that a multiplicity of perspectives might all be useful and beneficial, and then ask “To What End?”

We’re talking about Values, here. We’re talking about the question of why whatever it is that matters to you, matters to you. And how you can understand that other people have different values from each other, and we can all learn to talk about what we care about in a way that helps us understand each other. That’s not neutral, though. Even that can be turned against us, when it’s done in bad faith. And we have to understand why someone would want to do that, too.

[Direct link to Mp3]

My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings, meaning engaging multiple tokens and types of minds, outside of the assumed human “default” of straight, white, cis, ablebodied, neurotypical male. I don’t have a transcript yet; I’ll update this post when I make one. But for now, here are my slides and some thoughts.

A Discussion on Daoism and Machine Consciousness (Slides as PDF)

(The translations of the Daoist texts referenced in the presentation are available online: The Burton Watson translation of the Chuang Tzu and the Robert G. Hendricks translation of the Tao Te Ching.)

A zero-sum system is one in which there are finite resources, but more than that, it is one in which what one side gains, another loses. So by “A non-zero-sum matrix of win conditions” I mean a combination of all of our needs and wants and resources in such a way that everyone wins. Basically, we’re talking here about trying to figure out how to program a machine consciousness that’s a master of wu-wei and limitless compassion, or metta.
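The zero-sum/non-zero-sum distinction above has a standard game-theoretic form, which can be sketched in a few lines. (This is a toy illustration of the definition, not anything from the talk; the games here, matching pennies and a stag hunt, are stock textbook examples, and the payoff numbers are arbitrary.)

```python
# Toy sketch: two-player games as dictionaries mapping each outcome
# (a pair of moves) to the players' payoffs.

def is_zero_sum(game):
    """A game is zero-sum when, for every outcome, the payoffs cancel:
    whatever one side gains, the other loses."""
    return all(sum(payoffs) == 0 for payoffs in game.values())

def mutual_win_outcomes(game):
    """Outcomes where every player gains -- by definition impossible in a
    zero-sum game, and exactly what a non-zero-sum framing searches for."""
    return [o for o, payoffs in game.items() if all(p > 0 for p in payoffs)]

# Matching pennies: strictly zero-sum; every outcome has a winner and a loser.
matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

# A stag hunt: cooperation yields an outcome where both players win.
stag_hunt = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

print(is_zero_sum(matching_pennies))   # True
print(is_zero_sum(stag_hunt))          # False
print(mutual_win_outcomes(stag_hunt))  # [('stag', 'stag'), ('hare', 'hare')]
```

The point of the second function is the whole argument in miniature: a zero-sum game returns an empty list of mutual wins no matter how you search it, so the search for “everyone wins” only makes sense once you stop assuming the payoffs must cancel.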

The whole week was about phenomenology and religion and magic and AI and it helped me think through some problems, like how even the framing of exercises like asking Buddhist monks to talk about the Trolley Problem will miss so much that the results are meaningless. That is, the trolley problem cases tend to assume from the outset that someone on the tracks has to die, and so they don’t take into account that an entire other mode of reasoning about sacrifice and death and “acceptable losses” would have someone throw themselves under the wheels or jam their body into the gears to try to stop it before it got that far. Again: There are entire categories of nonwestern reasoning that don’t accept zero-sum thought as anything but lazy, and which search for ways by which everyone can win, so we’ll need to learn to program for contradiction not just as a tolerated state but as an underlying component. These systems assume infinitude and non-zero-sum matrices where every being involved can win.

Continue Reading

What is The Real?

I have been working on this piece for a little more than a month, since just after Christmas. What with one thing, and another, I kept refining it while, every week, it seemed more and more pertinent and timely. You see, we need to talk about ontology.

Ontology is an aspect of metaphysics, the word translating literally to “the study of what exists.” Connotatively, we might rather say, “trying to figure out what’s real.” Ontology necessarily intersects with studies of knowledge and studies of value, because in order to know what’s real you have to understand what tools you think are valid for gaining knowledge, and you have to know whether knowledge is even something you can attain, as such.

Take, for instance, the recent evolution of the catchphrase “fake news,” the thinking behind it that allows people to call lies “alternative facts,” and the fact that all of these elements are already being rotated through several dimensions of meaning that those engaging in them don’t seem to notice. What I mean is that the inversion of the catchphrase “fake news” into a cipher for active confirmation bias was always going to happen. It and any consternation at it comprise a situation that is borne forth on a tide of intentional misunderstandings.

If you were using fake to mean, “actively mendacious; false; lies,” then there was a complex transformation happening here, that you didn’t get:

There are people who value the actively mendacious things you deemed “wrong”—by which you meant both “factually incorrect” and “morally reprehensible”—and they valued them on a nonrational, often actively a-rational level. By this, I mean both that they value the claims themselves, and that they have underlying values which cause them to make the claims. In this way, the claims both are valued and reinforce underlying values.

So when you called their values “fake news” and told them that “fake news” (again: their values) ruined the country, they—not to mention those actively preying on their a-rational valuation of those things—responded with “Nuh-uh! your values ruined the country! And that’s why we’re taking it back! MAGA! MAGA! Drumpfthulhu Fhtagn!”

[Logo for the National Geographic Channel’s “IS IT REAL?”] Many were concerned that NG Magazine was going to change its climate change coverage after it was bought by 21st Century Fox.

You see? They mean “fake news” along the same spectrum as they mean “Real America.” They mean that it “FEELS” “RIGHT,” not that it “IS” “FACT.”

Now, we shouldn’t forget that there’s always some measure of preference to how we determine what to believe. As John Flowers puts it, ‘Truth has always had an affective component to it: those things that we hold to be most “true” are those things that “fit” with our worldview or “feel” right, regardless of their factual veracity.

‘We’re just used to seeing this in cases of trauma, e.g.: “I don’t believe he’s dead,” despite being informed by a police officer.’

Which is precisely correct, and as such the idea that the affective might be the sole determinant is nearly incomprehensible to those of us who are used to thinking of facts as things that are verifiable by reference to externalities as well as values. At least, this is the case for those of us who even relativistically value anything at all. Because there’s also always the possibility that the engagement of meaning plays out in a nihilistic framework, in which we have neither factual knowledge nor moral foundation.

Epistemic Nihilism works like this: If we can’t ever truly know anything—that is, if factual knowledge is beyond us, even at the most basic “you are reading these words” kind of level—then there is no description of reality to be valued above any other, save what you desire at a given moment. This is also where nihilism and skepticism intersect. In both positions nothing is known, and it might be the case that nothing is knowable.

So, now, a lot has been written about not only the aforementioned “fake news,” but also its over-arching category of “post-truth,” said to be our present moment where people believe (or pretend to believe) in statements or feelings, independent of their truth value as facts. But these ideas are neither new nor unique. In fact, Simpsons Did It. More than that, though, people have always allowed their values to guide them to beliefs that contradict the broader social consensus, and others have always eschewed values entirely, for the sake of self-gratification. What might be new, right now, is the willfulness of these engagements, or perhaps their intersection. It might be the case that we haven’t before seen gleeful nihilism so forcefully become the rudder of gormless, value-driven decision-making.

Again, values are not bad, but when they sit unexamined and are the sole driver of decisions, they’re just another input variable to be gamed, by those of a mind to do so. People who believe that nothing is knowable and nothing matters will, at the absolute outside, seek their own amusement or power, though it may be said that nihilism in which one cares even about one’s own amusement is not genuine nihilism, but is rather “nihilism,” which is just relativism in a funny hat. Those who claim to value nothing may just be putting forward a front, or wearing a suit of armour in order to survive an environment where having your values known makes you a target.

If they act as though they believe there is no meaning, and no truth, then they can make you believe that they believe that nothing they do matters, and therefore there’s no moral content to any action they take, and so no moral judgment can be made on them for it. In this case, convincing people to believe news stories they make up is in no way materially different from researching so-called facts and telling the rest of us that we should trust and believe them. And the first way’s also way easier. In fact, preying on gullible people and using their biases to make yourself some lulz, deflect people’s attention, and maybe even get some of those sweet online ad dollars? That’s just common sense.

There’s still something to be investigated, here, in terms of what all of this does for reality as we understand and experience it. How what is meaningful, what is true, what is describable, and what is possible all intersect and create what is real. Because there is something real, here—not “objectively,” as that just lets you abdicate your responsibility for and to it, but perhaps intersubjectively. What that means is that we generate our reality together. We craft meaning and intention and ideas and the words to express them, together, and the value of those things and how they play out all sit at the place where multiple spheres of influence and existence come together, and interact.

To understand this, we’re going to need to talk about minds and phenomenological experience.

 

What is a Mind?

We have discussed before the idea that what an individual is and what they feel is not only shaped by their own experience of the world, but by the exterior forces of society and the expectations and beliefs of the other people with whom they interact. These social pressures shape and are shaped by all of the people engaged in them, and the experience of existence had by each member of the group will be different. That difference will range on a scale from “ever so slight” to “epochal and paradigmatic,” with the latter being able to spur massive misunderstandings and miscommunications.

In order to really dig into this, we’re going to need to spend some time thinking about language, minds, and capabilities.

Here’s an article that discusses the idea that your mind isn’t confined to your brain. This isn’t meant in a dualistic or spiritualistic sense, but as the fundamental idea that our minds are more akin to, say, an interdependent process that takes place via the interplay of bodies, environments, other people, and time, than they are to specifically-located events or things. The problem with this piece, as my friends Robin Zebrowski and John Flowers both note, is that it leaves out way too many thinkers. People like Andy Clark, David Chalmers, Maurice Merleau-Ponty, John Dewey, and William James have all discussed something like this idea of a non-local or “extended” mind, and they are all greatly preceded by the fundamental construction of the Buddhist view of the self.

Within most schools of Buddhism, Anatta, or “no self,” is how one refers to one’s individual nature. Anatta is rooted in the idea that there is no singular, “true” self. To vastly oversimplify, there is a concept known as “The Five Skandhas” or “aggregates.” These are the parts of yourself that are knowable and which you think of as permanent, and they are your:

Material Form (Body)
Feelings (Pleasure, Pain, Indifference)
Perception (Senses)
Mental Formations (Thoughts)
Consciousness

http://www.mountainsoftravelphotos.com/Tibet%20-%20Buddhism/Wheel%20Of%20Life/Wheel%20Of%20Life/slides/Tibetan%20Buddhism%20Wheel%20Of%20Life%2007%2004%20Mind%20And%20Body%20-%20People%20In%20Boat.JPG

Image of People In a Boat, from a Buddhist Wheel of Life.

Along with the skandhas, there are two main arguments that go into proving that you don’t have a self, known as “The Argument from Control” and “The Argument from Impermanence”:

1) If you had a “true self,” it would be the thing in control of the whole of you, and since none of the skandhas is in complete control of the rest—and, in fact, all seem to have some measure of control over all—none of them is your “true self.”
2) If you had a “true self,” it would be the thing about you that was permanent and unchanging, and since none of the skandhas is permanent and unchanging—and, in fact, all seem to change in relation to each other—none of them is your “true self.”

The interplay between these two arguments also combines with an even more fundamental formulation: If only the observable parts of you are valid candidates for “true selfhood,” and if the skandhas are the only things about yourself that you can observe, and if none of the skandhas is your true self, then you have no true self.
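For those who like their syllogisms compact, that combined argument can be sketched in first-order terms. This is my own formalization, for illustration only, not anything drawn from a Buddhist source: read Self(x) as “x is your true self,” Obs(x) as “x is observable by you,” and Skandha(x) as “x is one of the five aggregates.”

```latex
\[
\begin{aligned}
\text{P1:}\quad & \forall x\,\bigl(\mathrm{Self}(x) \rightarrow \mathrm{Obs}(x)\bigr)
  && \text{only observable parts are valid candidates}\\
\text{P2:}\quad & \forall x\,\bigl(\mathrm{Obs}(x) \rightarrow \mathrm{Skandha}(x)\bigr)
  && \text{the skandhas are all you can observe of yourself}\\
\text{P3:}\quad & \forall x\,\bigl(\mathrm{Skandha}(x) \rightarrow \lnot\mathrm{Self}(x)\bigr)
  && \text{by Control and Impermanence}\\
\text{C:}\quad  & \lnot\exists x\,\mathrm{Self}(x)
  && \text{you have no true self}
\end{aligned}
\]
```

Chaining P1 through P3 gives Self(x) → ¬Self(x) for any x, which is only satisfiable if nothing is the true self.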

Take a look at this section of “The Questions of King Milinda,” for a kind of play-by-play of these arguments in practice. (But also remember that Milinda was Menander, a man who was raised in the aftermath of Alexandrian Greece, and so he knew the works of Socrates and Plato and Aristotle and more. So that use of the chariot metaphor isn’t an accident.)

We are an interplay of forces and names, habits and desires, and we draw a line around all of it, over and over again, and we call that thing around which we draw that line “us,” “me,” “this-not-that.” But the truth of us is far more complex than all of that. We are minds in bodies, in the world in which we live, and in the world and relationships we create. All of which kind of puts paid to the idea that an octopus is like an alien to us because it thinks with its tentacles. We think with ours, too.

As always, my tendency is to play this forward a few years to make us a mirror via which to look back at ourselves: Combine this idea about the epistemic status of an intentionally restricted machine mind; with the StackGAN process, which does “Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks,” or, basically, you describe in basic English what you want to see and the system creates a novel output image of it; with this long read from NYT on “The Great AI Awakening.”
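The actual StackGAN system is a pair of trained deep networks; purely as a toy illustration of its two-stage “sketch coarse, then refine” idea, here is a minimal numpy sketch. Every function name, shape, and the untrained random weights are my own invention for illustration, not StackGAN’s real architecture or API:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_text(caption: str, dim: int = 16) -> np.ndarray:
    # Toy "text encoder": hash characters into a fixed-size unit vector.
    vec = np.zeros(dim)
    for i, ch in enumerate(caption):
        vec[i % dim] += ord(ch)
    return vec / (np.linalg.norm(vec) + 1e-9)

def stage1_generator(text_vec: np.ndarray, noise: np.ndarray) -> np.ndarray:
    # Stage I: map text embedding + noise to a coarse low-res "image" (8x8).
    w = rng.standard_normal((text_vec.size + noise.size, 8 * 8))
    return np.tanh(np.concatenate([text_vec, noise]) @ w).reshape(8, 8)

def stage2_generator(coarse: np.ndarray, text_vec: np.ndarray) -> np.ndarray:
    # Stage II: upsample the coarse image (to 32x32) and recondition on the
    # text, standing in for the refinement network that adds detail.
    upsampled = np.kron(coarse, np.ones((4, 4)))  # nearest-neighbour upsample
    detail = np.outer(text_vec, text_vec)          # toy conditioning signal
    pad = np.zeros((32, 32))
    pad[:detail.shape[0], :detail.shape[1]] = detail
    return np.tanh(upsampled + 0.1 * pad)

caption = "a small red bird with a short beak"
z = rng.standard_normal(8)
coarse = stage1_generator(embed_text(caption), z)
fine = stage2_generator(coarse, embed_text(caption))
print(coarse.shape, fine.shape)  # (8, 8) (32, 32)
```

The point of the decomposition, in the real system as in the toy, is that the second stage never sees raw noise: it only ever corrects a low-resolution draft while re-reading the text, which is what lets the described scene survive into the high-resolution output.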

That last piece considers how Google arrived at the machine learning model it’s currently working with. The author, Gideon Lewis-Kraus, discusses the pitfalls of potentially programming biases into systems, but the whole piece displays a kind of… meta-bias? Wherein there is an underlying assumption that “philosophical questions” are, again, simply shorthand for “not practically important,” or “having no real-world applications,” even as the author discusses ethics and phenomenology, and the nature of what makes a mind. In addition to that, there is a just startling lack of gender variation within the piece.

Because asking the question, “How do the women in Silicon Valley remember that timeframe?” is likely to get you very different perspectives than what we’re presented with, here. What kind of ideas were had by members of marginalized groups, but were ignored or eternally back-burnered because of that marginalization? The people who lived and worked and tried to fit in and have their voices heard while not being a “natural” for the framework of that predominantly cis, straight, white, able-bodied (though the possibility of unassessed neuroatypicality is high), male culture will likely have different experiences, different contextualizations, than those who do comprise the predominant culture. The experiences those marginalized persons share will not be exactly the same, but there will be a shared tone and tenor of their construction that will most certainly set itself apart from those of the perceived “norm.”

Everyone’s lived experience of identity will manifest differently, depending upon the socially constructed categories to which they belong, which means that even those of us who belong to one or more of the same socially constructed categories will not have exactly the same experience of them.

Living as a disabled woman, as a queer black man, as a trans lesbian, or any number of other identities will necessarily colour the nature of what you experience as true, because you will have access to ways of intersecting with the world that are not available to people who do not live as you live. If your experience of what is true differs, then this will have a direct impact on what you deem to be “real.”

At this point, you’re quite possibly thinking that I’ve undercut everything we discussed in the first section; that now I’m saying there isn’t anything real, and that it’s all subjective. But that’s not where we are. If you haven’t, yet, I suggest reading Thomas Nagel’s “What Is It Like To Be A Bat?” for a bit on individually subjective phenomenological experience, and seeing what he thinks it does and proves. Long story short, there’s something it “is like” to exist as a bat, and even if you or I could put our minds in a bat body, we would not know what it’s like to “be” a bat. We’d know what it was like to be something that had been a human who had put its brain into a bat. The only way we’d ever know what it was like to be a bat would be to forget that we were human, and then “we” wouldn’t be the ones doing the knowing. (If you’re a fan of Terry Pratchett’s Witch books, in his Discworld series, think of the concept of Granny Weatherwax’s “Borrowing.”)

But what we’re talking about isn’t the purely relative and subjective. Look carefully at what we’ve discussed here: We’ve crafted a scenario in which identity and mind are co-created. The experience of who and what we are isn’t solely determined by our subjective valuation of it, but also by what others expect, what we learn to believe, and what we all, together, agree upon as meaningful and true and real. This is intersubjectivity. The elements of our constructions depend on each other to help determine each other, and the determinations we make for ourselves feed into the overarching pool of conceptual materials from which everyone else draws to make judgments about themselves, and the rest of our shared reality.

 

The Yellow Wallpaper

Looking at what we’ve woven, here, what we have is a process that must be undertaken before certain facts of existence can be known and understood (the experiential nature of learning and comprehension being something else that we can borrow from Buddhist thought). But it’s still the nature of such presentations to be taken up and imitated by those who want what they perceive as the benefits or credit of having done the work. Certain people will use the trappings and language by which we discuss and explore the constructed nature of identity, knowledge, and reality, without ever doing the actual exploration. They are not arguing in good faith. Their goal is not truly to further understanding, or to gain a comprehension of your perspective, but rather to make you concede the validity of theirs. They want to force you to give them a seat at the table, one which, once taken, they will use to loudly declaim to all attending that, for instance, certain types of people don’t deserve to live, by virtue of their genetics, or their socioeconomic status.

Many have learned to use the conceptual framework of social liberal post-structuralism in the same way that some viruses use the shells of their host’s cells: As armour and cover. By adopting the right words and phrases, they may attempt to say that they are “civilized” and “calm” and “rational,” but make no mistake, Nazis haven’t stopped trying to murder anyone they think of as less-than. They have only dressed their ideals up in the rhetoric of economics or social justice, so that they can claim that anyone who stands against them is the real monster. Incidentally, this tactic is also known to be used by abusers to justify their psychological or physical violence. They manipulate the presentation of experience so as to make it seem like resistance to their violence is somehow “just as bad” as their violence. When, otherwise, we’d just call it self-defense.

If someone deliberately games a system of social rules to create a win condition in which they get to do whatever the hell they want, that is not of the same epistemic, ontological, or teleological—meaning, nature, or purpose—let alone moral status as someone who is seeking to have other people in the world understand the differences of their particular lived experience so that they don’t die. The former is just a way of manipulating perceptions to create a sense that one is “playing fair” when what they’re actually doing is making other people waste so much of their time countenancing their bullshit enough to counter and disprove it that they can’t get any real work done.

In much the same way, there are also those who will pretend to believe that facts have no bearing, that there is neither intersubjective nor objective verification for everything from global temperature levels to how many people are standing around in a crowd. They’ll pretend this so that they can say what makes them feel powerful, safe, strong, in that moment, or to convince others that they are, or simply, again, because lying and bullshitting amuses them. And the longer you have to fight through their faux justification for their lies, the more likely you’re too exhausted or confused about what the original point was to do anything else.

Side-by-side comparison of President Obama’s first Inauguration (Left) and Donald Trump’s Inauguration (Right).

If we are going to maintain a sense of truth and claim that there are facts, then we must be very careful and precise about the ways in which we both define and deploy them. We have to be willing to use the interwoven tools and perspectives of facts and values, to tap into the intersubjectively created and sustained world around us. Because, while there is a case to be made that true knowledge is unattainable, and some may even try to extend that to say that any assertion is as good as any other, it’s not necessary that one understands what those words actually mean in order to use them as cover for their actions. One would just have to pretend well enough that people think it’s what they should be struggling against. And if someone can make people believe that, then they can do and say absolutely anything.


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.

 

-Human Dignity-

The other day I got a CFP for “the future of human dignity,” and it set me down a path of thinking.

We’re worried about shit like mythical robots that can somehow simultaneously enslave us and steal the shitty low-paying jobs none of us want but all of us have to have, so we can pay off the debt we accrued to get the education we were told would be necessary to get those jobs, while other folks starve and die of exposure in a world that is just chock full of food and houses…

About shit like how we can better regulate the conflated monster of human trafficking and every kind of sex work, when human beings are doing the best they can to direct their own lives—to live and feed themselves and their kids on their own terms—without being enslaved and exploited…

About, fundamentally, how to make reactionary laws to “protect” the dignity of those of us whose situations the vast majority of us have not worked to fully appreciate or understand, while we all just struggle to not get: shot by those who claim to protect us, willfully misdiagnosed by those who claim to heal us, or generally oppressed by the system that’s supposed to enrich and uplift us…

…but no, we want to talk about the future of human dignity?

Louisiana’s drowning, Missouri’s on literal fire, Baltimore is almost certainly under some ancient mummy-based curse placed upon it by the angry ghost of Edgar Allan Poe, and that’s just in the One Country.

Motherfucker, human dignity ain’t got a Past or a Present, so how about let’s reckon with that before we wax poetically philosophical about its Future.

I mean, it’s great that folks at Google are finally starting to realise that making sure the composition of their teams represents a variety of lived experiences is a good thing. But now the questions are, 1) do they understand that it’s not about tokenism, but about being sure that we are truly incorporating those who were previously least likely to be incorporated, and 2) what are we going to do to not only specifically and actively work to change that, but also PUBLICIZE THAT WE NEED TO?

These are the kinds of things I mean when I say, “I’m not so much scared of/worried about AI as I am about the humans who create and teach them.”

There’s a recent opinion piece at the Washington Post, titled “Why perceived inequality leads people to resist innovation.” I read something like that and I think… Right, but… that perception is a shared one based on real impacts of tech in the lives of many people; impacts which are (get this) drastically unequal. We’re talking about implications across communities, nations, and the world, at an intersection with a tech industry that has a really quite disgusting history of “disruptively innovating” people right out of their homes and lives without having ever asked the affected parties about what they, y’know, NEED.

So yeah. There’s a fear of inequality in the application of technological innovation… Because there’s a history of inequality in the application of technological innovation!

This isn’t some “well aren’t all the disciplines equally at fault here,” pseudo-Kumbaya false equivalence bullshit. There are neoliberal underpinnings in the tech industry that are basically there to fuck people over. “What the market will bear” is code for, “How much can we screw people before there’s backlash? Okay so screw them exactly that much.” This model has no regard for the preexisting systemic inequalities between our communities, and even less for the idea that it (the model) will both replicate and iterate upon those inequalities. That’s what needs to be addressed, here.

Check out this piece over at Killscreen. We’ve talked about this before—about how we’re constantly being sold that we’re aiming for a post-work economy, where the internet of things and self-driving cars and the sharing economy will free us all from the mundaneness of “jobs,” all while we’re simultaneously being asked to ignore that our trajectory is gonna take us straight through and possibly land us square in a post-Worker economy, first.

Never mind that we’re still gonna expect those ex-workers to (somehow) continue to pay into capitalism, all the while.

If, for instance, either Uber’s plan for a driverless fleet or the subsequent backlash from their stable—I mean “drivers”—is shocking to you, then you have managed to successfully ignore this trajectory.

Completely.

Disciplines like psychology and sociology and history and philosophy? They’re already grappling with the fears of the ones most likely to suffer said inequality, and they’re quite clear on the fact that, the ones who have so often been fucked over?

Yeah, their fears are valid.

You want to use technology to disrupt the status quo in a way that actually helps people? Here’s one example of how you do it: “Creator of chatbot that beat 160,000 parking fines now tackling homelessness.”

Until then, let’s talk about constructing a world in which we address the needs of those marginalised. Let’s talk about magick and safe spaces.

 

-Squaring the Circle-

Speaking of CFPs, several weeks back, I got one for a special issue of Philosophy and Technology on “Logic As Technology,” and it made me realise that Analytic Philosophy somehow hasn’t yet understood and internalised that its wholly invented language is a technology

…and then that realisation made me realise that Analytic Philosophy hasn’t understood that language as a whole is a Technology.

And this is something we’ve talked about before, right? Language as a technology, but not just any technology. It’s the foundational technology. It’s the technology on which all others are based. It’s the most efficient way we have to cram thoughts into the minds of others, share concept structures, and make the world appear and behave the way we want it to. And the more languages we know, the more of that we can do, right?

We can string two or more knowns together in just the right way, and create a third, fourth, fifth known. We can create new things in the world, wholecloth, as a result of new words we make up or old words we deploy in new ways. We can make each other think and feel and believe and do things, with words, tone, stance, knowing looks. And this is because Language is, at a fundamental level, the oldest magic we have.


Scene from the INJECTION issue #3, by Warren Ellis, Declan Shalvey, and Jordie Bellaire. ©Warren Ellis & Declan Shalvey.

Lewis Carroll tells us that whatever we tell each other three times is true, and many have noted that lies travel far faster than the truth, and at the crux of these truisms—the pivot point, where the power and leverage are—is Politics.

This week, much hay is being made about the University of Chicago’s letter decrying Safe Spaces and Trigger Warnings. Ignoring for the moment that every definition of “safe space” and “trigger warning” put forward by their opponents tends to be a straw man of those terms, let’s just make an attempt to understand where they come from, and how we can situate them.

Trauma counseling and trauma studies are where safe space and trigger warning language comes from, and for the latter, that origin is damn near axiomatic. Triggers are about trauma. But safe space language has far more granularity than that. Microaggressions are certainly damaging, but they aren’t on the same level as acute traumas. Where acute traumas are like gunshots or bomb blasts (and may indeed be those actual things), societal microaggressions are more like a slow, constant siege. But we still need the language of safe spaces to discuss them—said space is something like a bunker in which to regroup, reassess, and plan for what comes next.

Now it is important to remember that there is a very big difference between “safe” and “comfortable,” and when laying out the idea of safe spaces, every social scientist I know takes great care to outline that difference.

Education is about stretching ourselves, growing and changing, and that is discomfort almost by definition. I let my students know that they will be uncomfortable in my class, because I will be challenging every assumption they have. But discomfort does not mean I’m going to countenance racism or transphobia or any other kind of bigotry.

Because the world is not a safe space, but WE CAN MAKE IT SAFER for people who are microaggressed against, marginalised, assaulted, and killed for their lived identities, not only by letting them know how to work to change it, but by SHOWING them through our example.

Like we’ve said, before: No, the world’s not safe, kind, or fair, and with that attitude it never will be.

So here’s the thing, and we’ll lay it out point-by-point:

A Safe Space is any realm that is marked out for the nonjudgmental expression of thoughts and feelings, in the interest of honestly assessing and working through them.

“Safe Space” can mean many things, from “Safe FROM Racist/Sexist/Homophobic/Transphobic/Fatphobic/Ableist Microaggressions” to “safe FOR the thorough exploration of our biases and preconceptions.” The terms of the safe space are negotiated at the marking out of them.

The terms are mutually agreed-upon by all parties. The only imposition would be to be open to the process of expressing and thinking through oppressive conceptual structures.

Everything else—such as whether to address those structures as they exist in ourselves (internalised oppressions), in others (aggressions, micro- or regular sized), or both and their intersection—is negotiable.

The marking out of a Safe Space performs the necessary function, at the necessary time, defined via the particular arrangement of stakeholders, mindset, and need.

And, as researcher John Flowers notes, anyone who’s ever been in a Dojo has been in a Safe Space.

From a Religious Studies perspective, defining a safe space is essentially the same process as that of marking out a RITUAL space. For students or practitioners of any form of Magic[k], think Drawing a Circle, or Calling the Corners.

Some may balk at the analogy to the occult, thinking that it cheapens something important about our discourse, but look: Here’s another way we know that magick is alive and well in our everyday lives:

If they could, a not-insignificant number of US Republicans would overturn the Affordable Care Act and rally behind a Republican-crafted replacement (RCR). However, because the ACA has done so very much good for so many, it’s likely that the only RCR that would have enough support to pass would be one that looked almost identical to the ACA. The only material difference would be that it didn’t have President Obama’s name on it—which is to say, it wouldn’t be associated with him, anymore, since his name isn’t actually on the ACA.

The only reason people think of the ACA as “Obamacare” is because US Republicans worked so hard to make that name stick, and now that it has been widely considered a triumph, they’ve been working just as hard to get his name away from it. And if they did manage to achieve that, it would only be true due to some arcane ritual bullshit. And yet…

If they managed it, it would be touted as a “Crushing defeat for President Obama’s signature legislation.” It would have lasting impacts on the world. People would be emboldened, others defeated, and new laws, social rules, and behaviours would be undertaken, all because someone’s name got removed from a thing in just the right way.

And that’s Magick.

The work we do in thinking about the future sometimes requires us to think about things from what stuffy assholes in the 19th century liked to call a “primitive” perspective. They believed in a kind of evolutionary anthropological categorization of human belief, one in which all societies move from “primitive” beliefs like magic through moderate belief in religion, all the way to sainted perfect rational science. In contemporary Religious Studies, this evolutionary model is widely understood to be bullshit.

We still believe in magic, we just call it different things. The concept structures of sympathy and contagion are still at play, here, the ritual formulae of word and tone and emotion and gesture all still work when you call them political strategy and marketing and branding. They’re all still ritual constructions designed to make you think and behave differently. They’re all still causing spooky action at a distance. They’re still magic.

The world still moves on communicated concept structure. It still turns on the dissemination of the will. If I can make you perceive what I want you to perceive, believe what I want you to believe, move how I want you to move, then you’ll remake the world, for me, if I get it right. And I know that you want to get it right. So you have to be willing to understand that this is magic.

It’s not rationalism.

It’s not scientism.

It’s not as simple as psychology or poll numbers or fear or hatred or aspirational belief causing people to vote against their interests. It’s not that simple at all. It’s as complicated as all of them, together, each part resonating with the others to create a vastly complex whole. It’s a living, breathing thing that makes us think not just “this is a thing we think” but “this is what we are.” And if you can do that—if you can accept the tools and the principles of magic, deploy the symbolic resonance of dreamlogic and ritual—then you might be able to pull this off.

But, in the West, part of us will always balk at the idea that the Rational won’t win out. That the clearer, more logical thought doesn’t always save us. But you have to remember: Logic is a technology. Logic is a tool. Logic is the application of one specific kind of thinking, over and over again, showing a kind of result that we convinced one another we preferred to other processes. It’s not inscribed on the atoms of the universe. It is one kind of language. And it may not be the one most appropriate for the task at hand.

Put it this way: When you’re in Zimbabwe, will you default to speaking Chinese? Of course not. So why would we default to mere Rationalism, when we’re clearly in a land that speaks a different dialect?

We need spells and amulets, charms and warded spaces; we need sorcerers of the people to heal and undo the hexes being woven around us all.

 

-Curious Alchemy-

Ultimately, the rigidity of our thinking, and our inability to adapt, has led us to be surprised by too much that we wanted to believe could never have come to pass. We want to call all of this “unprecedented,” when the truth of the matter is, we carved this precedent out every day for hundreds of years, and the ability to think in weird paths is what will define those who thrive.

If we are going to do the work of creating a world in which we understand what’s going on, and can do the work to attend to it, then we need to think about magic.

 


If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar

This work originally appears as “Go Upgrade Yourself,” in the edited volume Futurama and Philosophy. It was originally titled

The Upgrading of Hermes Conrad

So, you’re tired of your squishy meatsack of a body, eh? Ready for the next level of sweet biomechanical upgrades? Well, you’re in luck! The world of Futurama has the finest in back-alley and mad-scientist-based bio-augmentation surgeons, ready and waiting to hear from you! From a fresh set of gills, to a brand new chest-harpoon, and beyond, Yuri the Shady Parts Dealer and Professor Hubert J. Farnsworth are here to supply all of your upgrading needs—“You give lungs now; gills be here in two weeks!” Just, whatever you do, stay away from legitimate hospitals. The kinds of procedures you’re looking to get done… well, let’s just say they’re still frowned upon in the 31st century; and why shouldn’t they be? The woeful tale of Hermes Conrad illustrates exactly what’s at stake if you choose to pursue your biomechanical dreams.

 

The Six Million Dollar Mon

Our tale begins with season seven’s episode “The Six Million Dollar Mon,” in which Hermes Conrad, Grade 36 Bureaucrat (Extraordinaire), comes to the conclusion that he should be fired, since his bureaucratic performance reviews are the main drain on his beloved Planet Express Shipping Company. After being replaced with robo-bureaucrat Mark 7-G (Mark Sevengy?), Hermes enjoys some delicious spicy curried goat and goes out for an evening stroll with his lovely wife LaBarbara. While on their walk, Roberto, the knife-wielding maniac, long of our acquaintance, confronts the couple and demands their skin for his culinary delight! As Hermes cowers behind his wife in fear, suddenly a savior arrives! URL, the Robot Police Officer, reels Roberto in with his magnificent chest-harpoon! Watching the cops take Roberto to the electromagnetic chair, and lamenting his uselessness in a dangerous situation, Hermes makes a decision: he’ll get Bender to take him to one of the many shady, underground surgeons he knows, so he can become “less inferior to today’s modern machinery.” Enter: Yuri, Professional Shady-Deal-Maker.

Hermes’ first upgrade is to get a chest-harpoon, like the one URL has. With his new enhancement, he proves his worth to the crew by getting a box off of the top shelf, which is too high for Mark 7-G. With this feat he wins back his position with the company, but as soon as things get back to normal, the Professor drops his false teeth down the Dispose-All. No big deal, right? Just get Scruffy to retrieve them. Unfortunately, Scruffy responds that a sink “t’ain’t a berler nor a terlet,” effectively refusing to retrieve the Professor’s teeth. Hermes resigns himself to grabbing his hand tools, when Bender steps in, saying, “Hand tools? Why don’t you just get an extendo-arm, like me?” Whereupon he reaches across the room and pulls the Professor’s false teeth out of the drain—and immediately drops them back in. Hermes objects, saying that he doesn’t need any more upgrades—after all, he doesn’t want to end up a cold, emotionless robot, like Bender! Just then, Mark 7-G pipes up with, “Maybe I should get an extendo-arm,” and Hermes narrows his eyes in hatred. Re-enter: Yuri.

New extendo-arm acquired, the Professor’s teeth retrieved, and the old arm given to Zoidberg, who’s been asking for all of Hermes’s discarded parts, Hermes is, again, a hero to his coworkers. Later, as he lies in bed reading with his wife, LaBarbara questions his motives for his continual upgrades. He assures her that he’s done getting upgrades. However, his promise is short-lived. After shattering his glasses with his new super-strong mechanical arm, he rushes out to get a new Cylon eye. LaBarbara is now extremely worried, but Hermes soothes her, and they settle in for some “Marital Relations…”, at which point she finds that he’s had something else upgraded, too. She yells at him, “Some tings shouldn’t be Cylon-ed!” (which, in all honesty, could be taken as the moral of the episode), and breaks off contact. What follows is a montage of Hermes encountering trivial difficulties in his daily life, and upgrading himself to overcome them. Rather than learning and working to improve himself, he continually replaces all of his parts, until he achieves a Full Body Upgrade. He still has a human brain, but that doesn’t matter: he’s changed. He doesn’t relate to his friends and family in the same way, and they’ve all noticed, especially Zoidberg.

All this time, however, Dr. John Zoidberg has saved the trimmings from his friend’s constant upgrades, and has used them to make a meat-puppet, which he calls “Li’l Hermes.” Oh, and they’re a ventriloquist act. Anyway, after seeing their act, Hermes—or Mecha-Hermes, as he now prefers—is filled with loathing; loathing for the fact that his brain is still human, that is, until…! Re-re-enter…, no, not Yuri; because even Shady-Deals Yuri has his limits. He says that “No one in their right mind would do such a thing.” Enter: The Professor, who is, of course, more than happy—or perhaps, “maniacally gleeful”—to help. So, with Bender’s assistance (because everything robot-related in the Futurama universe has to involve Bender, I guess), they set off to the Robot Cemetery to exhume the most recently buried robot they can find, and make off with its brain-chip. In their haste to have the deed done, they don’t bother to check the name on the grave they’re desecrating. As you might have guessed, it’s Roberto—“3001-3012: Beloved Killer and Maniac.”

In the course of the operation, LaBarbara makes an impassioned plea, and it causes the Professor to stop and rethink his actions—because Hermes might have “litigious survivors.” Suddenly, to everyone’s surprise, Zoidberg steps up and offers to perform this final operation, the one which will seemingly remove any traces of the Hermes he’s known and loved! Agreeing with Mecha-Hermes that claws will be far too clumsy for this delicate brain surgery, Zoidberg dons Li’l Hermes, and uses the puppet’s hands to do the deed. While all of this is underway, Zoidberg sings to everyone the explanation for why he would help his friend lose himself this way, all to the slightly heavy-handed tune of “Monster Mash.” Finally, the human brain removed, the robot brain implanted, and Zoidberg’s song coming to a close, the doctor reveals his final plan…By putting Hermes’s human brain into Li’l Hermes, Hermes is back! Of course, the whole operation having been a success, so is Roberto, but that’s somebody else’s problem.

We could spend the rest of our time discussing Zoidberg’s self-harmonization, but I’ll leave that for you to experiment with. Instead, let’s look closer at human bio-enhancement. To do this we’ll need to go back to the beginning. No, not the beginning of the episode, or even the beginning of Futurama itself; no, we need to go back to the beginning of bio-enhancement—and specifically the field of cybernetics—as a whole.

 

“More Human Than Human” Is Our Motto

In 1960, at the outset of the Space Race, Manfred Clynes and Nathan S. Kline wrote an article for the September issue of Astronautics called “Cyborgs and Space.” In this article, they coined the term “cyborg” as a portmanteau of the phrase “Cybernetic Organism,” that is, a living creature with the ability to adapt its body to its environment. Clynes and Kline believed that if humans were ever going to go far out into space, they would have to become the kinds of creatures that could survive the vacuum of space as well as harsh, hostile planets. Now, for all its late-1990s Millennial fervor, Futurama has a deep undercurrent of love for the dream and promise (and fashion) of space exploration, as it was presented in the 1950s, 60s, and 70s. All you need to do in order to see this is remember Fry’s wonder and joy at being on the actual moon and seeing the Apollo Lunar Lander. If this is the case, why, within Futurama’s 31st Century, is there such a deep distrust of anything approaching altered human physical features? Well, looking at it, we may find it has something to do with the fact that ever since we dreamed of augmenting humans, we’ve had nightmares that any alterations would thereby make us less human.

“The Six Million Dollar Mon,” episode seven of season seven, contains within it clear references to the history of science fiction, including one of the classic tales of human augmentation, and creating new life: Mary Shelley’s Frankenstein. In going to the Robot Cemetery in the dead of night for spare parts, accidentally obtaining a murderer’s brain, and especially that bit with the skylight in the Professor’s laboratory, the entire third act of this episode serves as homage to Shelley’s book and its most memorable adaptations. In doing this, the Futurama crew puts conceptual pressure on what many of us have long believed: that created life is somehow “wrong” and that augmenting humans will make them somehow “less themselves.” Something about the biological is linked in our minds to the idea of the self—that is, it’s the warm squishy bits that make us who we are.

Think about it: If you build a person out of murderers, of course they’re going to be a murderer. If you replace every biological part of a human, then of course they won’t be their normal human selves, anymore; they’ll have become something entirely different, by definition. If your body isn’t yours, anymore, then how could you possibly be “you,” anymore? This should be all the more true when what’s being used to replace your bits is a different substance and material than you used to be. When that new “you” is metal rather than flesh, it seems that what it used to mean to be “you” is gone, and something new shall have appeared. This makes so much sense to us on a basic level that it seems silly to spell it out even this much, but what if we modify our scenario a little bit, and take another look?

 

The Ship of Planet Express

 What if, instead of feeling inferior to URL, Hermes had been injured and, in the course of his treatment, was given the choice between a brand new set of biological giblets (or a whole new body, as happened in the Bender’s Big Score storyline), or the chest-harpoon upgrade? Either way, we’re replacing what was lost with something new, right? So, why do many of us see the biological replacement as “more real?” Try this example: One day, on a routine delivery, the Planet Express Ship is damaged and repairs must be made. Specifically, the whole tail fin has to be replaced with a new, better fin. Once this is done, is it still the Planet Express ship? What if, next, we have to replace the dark matter engines with better engines? Is it still the Planet Express ship? Now, Leela’s chair is busted up, so we need to get her a new one. It also needs new bolts, so, while we’re at it, let’s just replace all of the bolts in the ship. Then the walls get dented, and the bunks are rusty, and the floors are buckled, and Scruffy’s mop… and so, over many years, the result is that no part of the Planet Express ship is “original,” oh, and we also have to get new, better paint, because the old paint is peeled away, plus, this all-new stuff needs painting. So, what do we think? Is this still the same Planet Express ship as it was in the first episode of Futurama? And, if so, then why do we think of a repaired and augmented human as “not being themselves?”

All of this may sound a little far-fetched, but remember the conventional wisdom that at the end of every seven-year cycle, all of the cells in your body have died and been replaced. Now, this isn’t quite true, as some cells don’t die easily, and some of those don’t regenerate when they do die, but as a useful shorthand, this gives us something to think about. Due to the metabolizing of elements and their distribution through your body, it is ultimately more likely that you are currently made of astronomically many more new atoms than of the atoms with which you were born. And really, that’s just math. Are you the same size as you were when you were born? Where do you think that extra mass came from? So, you are made of more and new atomic stuff over your lifetime; are you still you? These questions belong to what is generally known as the “Ship of Theseus” family of paradoxes, examples of which can be found pretty much everywhere.

The ultimate question the Ship of Theseus poses is one of identity, and specifically, “What makes a thing itself?” and, “At what point or through what means of alteration is a thing no longer itself?” Some schools of thought hold that it’s not what a thing is made of, but what it does that determines what it is. These philosophical groups are known as the behaviorists and the functionalists, and the latter believes that if a body or a mind goes through the “right kind” of process, then it can be termed as being the same as the original. That is, if I get a mechanical heart and what it does is keep blood pumping through my body, then it is my heart. Maybe it isn’t the heart I was born with, but it is my heart. And this seems to make sense to us, too. My new heart does the job my original cells were intending to do, but it does that job better than they could, and for longer; it works better, and I’m better because of it. But there seems to be something about that “Better” which throws us off, something about the line between therapeutic technology and voluntary augmentation.

When we are faced with the necessity of a repair, we are willing to accept that our new parts will be different than our old ones. In fact, we accept it so readily that we don’t even think about them as new parts. What Hermes does, however, is voluntary; he doesn’t “need” a chest-harpoon, but he wants one, and so he upgrades himself. And therein lies the crux of our dilemma: When we’re acutely aware of the process of upgrading, or repairing, or augmenting ourselves past a baseline of “Human,” we become uncomfortable, made to face the paradox of our connection to an idea of a permanent body that is in actuality constantly changing. Take, for instance, the question of steroidal injection. As a medical technology, there are times when we are more than happy to accept the use of steroids, as it will save a life, and allow people to live as “normal” human beings. Sufferers of asthma and certain types of infection literally need steroids to live. In other instances, however, we find ourselves abhorring the use of steroids, as it gives the user an “unfair advantage.” Baseball, football, the Olympics: all of these are arenas in which we look to the use of “enhancement technologies,” and we draw a line and say, “If you achieved the peak of physical perfection through a process, that is, through hard work and sweat and training, then your achievement is valid. But if you skipped a step, if you make yourself something more than human, then you’ve cheated.”

This sense of “having cheated” can even be seen in the case of humans who would otherwise be designated as “handicapped.” Aimee Mullins is a runner, model, and public speaker who has talked about how losing her legs has, in effect, given her super powers.[1] By having the ability to change her height, her speed, or her physical appearance at will, she contends that she has a distinct advantage over anyone who does not have that capability. To this end, we can come to see that something about the nature of our selves actually is contained within our physical form because we’re literally incapable of being some things, until we can change who and what we are. And here, in one person, what started as a therapeutic replacement—an assistive medical technology—has seamlessly turned into an upgrade, but we seem to be okay with this. Why? Perhaps there is something inherent in the struggle of overcoming the loss of a limb or the suffering of an illness that allows us to feel as if the patient has paid their dues. Maybe if Hermes had been stabbed by Roberto, we wouldn’t begrudge him a chest-harpoon.

But this presents us with a serious problem, because now we can alter ourselves by altering our bodies, where previously we said that our bodies were not the “real us.” Now, we must consider what it is that we’re changing when we swap out new and different pieces of ourselves. This line of thinking matches up with schools of thought such as physicalism, which says that when we make a fundamental change to our physical composition, then we have changed who we are.

 

Is Your Mind Just a Giant Brain?

Briefly, the doctrine of mind-body dualism (MBD) does pretty much what it says on the package, in that adherents believe that the mind and the body are two distinct types of stuff. How and why they interact (or whether they do at all) varies from interpretation to interpretation, but on what’s known as René Descartes’s “Interactionist” model, the mind is the real self, and the body is just there to do stuff. In this model, bodily events affect mental events, and vice versa, so what you think leads to what you do, and what you do can change how you think. This seems to make sense, until we begin to pick apart the questions of why we need two different types of thing, here. If the mind and the body affect each other, then how can the non-physical mind be the only real self? If it were the only real part of you, then nothing that happened to the physical shell should matter at all, because the mind would carry on unaffected; and yet, plainly, it does matter. These questions and more very quickly cause us to question the validity of the mind as our “real selves,” leaving us trapped between the question of who we are, and the question of why we’re made the way we’re made. What can we do? Enter: Physicalism.

The physicalist picture says that mind-states are brain-states. There’s none of this “two kinds of stuff” nonsense. It’s all physical stuff, and it all interacts, because it’s all physical. When the chemical pathways in your brain change, you change. When you think new thoughts, it’s because something in your world and your environment has changed. All that you are is the physical components of your body and the world around you. Pretty simple, right? Well, not quite that simple. Because if this is the case, then why should we feel that anything emotional would be changed by upgrading ourselves? As long as we’re pumping the same signals to the same receivers, and getting the same kinds of responses, everything we love should still be loved by us. So, why do the physicalists still believe that changing what we are will change who we are?

Let’s take a deeper look at the implications of physicalism for our dear Mr. Conrad.

According to this picture, with the alteration or loss of his biological components and systems, Hermes should begin to lose himself, until, with the removal of his brain, he would no longer be himself at all. But why should this be true? According to our previous discussion of the functionalist and behaviorist forms of physicalism, if Hermes’s new parts are performing the same job, in the same way as his old parts, just with a few new extras, then he shouldn’t be any different, at all. In order to understand this, we have to first know that I wasn’t completely honest with you, because some physicalists believe that the integrity of the components and the systems that make up a thing are what makes that thing. Thus, if we change the physical components of the thing we’re studying, then we change the thing. So, perhaps this picture is the right one, and the Futurama universe is a purely physicalist universe, after all.

On this view, what makes us who we are is precisely what we are. Our bits and pieces, cells, and chunks: these make us exactly the people we are, and so, if they change, then of course we will change. If our selves are dependent on our biology, then we are necessarily no longer ourselves when we remove that biology, regardless of whether the new technology does exactly the same job that the biology used to. And the argument seems to hold, even if it had been a new, different set of human parts, rather than robot parts. In this particular physicalist view, it’s not just the stuff, but also the provenance of the individual parts that matters, and so changing the components changes us. As Hermes replaces part after part of his physical body, it becomes easier and easier for him to replace more parts, but he is still, in some sense, Hermes. He has the same motivations, the same thoughts, and the same memories, and so he is still Hermes, even if he’s changed. Right up until he swaps his brain, that is. And this makes perfect sense, because the brain is where the memories, thoughts, and motivations all reside. But, then…why aren’t more people with pacemakers cold and emotionless? Why is it that people with organs donated from serial killers don’t then turn into serial killers, themselves, despite what movies would have us believe? If this picture of physicalism is the right one, then why are so many people still themselves after transplants? Perhaps it’s not any one of these views that holds the whole key; maybe it’s a blending of the three. This picture seems to suggest that while the bits and pieces of our physical body may change, and while that change may, in fact, change us, it is a combination of how, how quickly, and how many changes take place that will culminate in any eventual massive change in our selves.

 

Roswell That Ends Well

In the end, the versions of physicalism presented in the universe of Futurama seem to almost jibe with the intuitions we have about the nature of our own identity, and so, for the sake of Hermes Conrad, it seems like we should make the attempt to find some kind of understanding. When we see Hermes’s behaviour as he adds more and more new parts, we, as outside observers, have an urge to say “He’s not himself anymore,” but to Hermes, who has access to all of his reasoning and thought processes, his changes are merely who he is. It’s only when he’s shown himself from the outside, via Zoidberg putting his physical brain back into his biological body, that he sees who and what he has allowed himself to become, and how that might be terrifying to those who love him. Perhaps it is this continuance of memory paired with the ability for empathy that makes us so susceptible to the twin traps of a permanent self and the terror of losing it.

Ultimately, everything we are is always in flux, with each new idea, each new experience, each new pound, and each new scar we become more and different than we ever have been, but as we take our time and integrate these experiences into ourselves, they are not so alien to us, nor to those who love us. It is only when we make drastic changes to what we are that those around us are able to question who we have become.

Oh, and one more thing: The “Ship of Theseus” story has a variant which I forgot to mention. In it, someone, perhaps a member of the original crew, comes along in another ship and picks up all the discarded, worn out pieces of Theseus’s ship, and uses them to build another, kind of decrepit ship. The stories don’t say what happens if and when Theseus finds out about this, or whether he gives chase to the surreptitious ship builder, but if he did, you can bet the latter party escapes with a cry of “Whooop-whoop-whoop-whoop-whoop-whoop!” on his mouth tendrils.

 

FOOTNOTES

[1] Aimee Mullins, “It’s not fair having 12 pairs of legs,” TED Talk, 2009.

It’s been quite some time (three years) since it happened, and some of the recent conversations I’ve been having about machine consciousness reminded me that I never posted the text of my paper from the joint session of the International Association for Computing and Philosophy and the British Society for the Study of Artificial Intelligence and the Simulation of Behaviour, back in 2012.

That year’s joint AISB/IACAP session was also a celebration of Alan Turing’s centenary, and it contained The Machine Question Symposium, an exploration of multiple perspectives on machine intelligence ethics, put together by David J Gunkel and Joanna J Bryson. So I modded a couple of articles I wrote on fictional depictions of created life for NeedCoffee.com back in 2010, beefed up the research and citations a great deal, and was thus afforded my first (but by no means last) conference appearance requiring international travel. There are, in here, the seeds of many other posts that you’ll find on this blog.

So, below the cut, you’ll find the full text of the paper, and a picture of the poster session I presented. If you’d rather not click through, you can find both of those things at this link.

Continue Reading

+Excitation+

As I’ve been mentioning in the newsletter, there are a number of deeply complex, momentous things going on in the world, right now, and I’ve been meaning to take a little more time to talk about a few of them. There’s the fact that some chimps and monkeys have entered the stone age; that we humans now have the capability to develop a simple, near-ubiquitous brain-machine interface; that we’ve proven that observed atoms won’t move, thus allowing them to be anywhere.

At this moment in time—which is every moment in time—we are being confronted with what seem like impossibly strange features of time and space and nature. Elements of recursion and synchronicity which flow and fit into and around everything that we’re trying to do. Noticing these moments of evolution and “development” (adaptation, change), across species, right now, we should find ourselves gripped with a fierce desire to take a moment to pause and to wonder what it is that we’re doing, what it is that we think we know.

We just figured out a way to link a person’s brain to a fucking tablet computer! We’re seeing the evolution of complex tool use and problem solving in more species every year! We figured out how to precisely manipulate the uncertainty of subatomic states!

We’re talking about co-evolution and potentially increased communication with other species, biotechnological augmentation and repair for those who deem themselves broken, and the capacity to alter quantum systems at the finest levels. This can literally change the world.

But all I can think is that there’s someone whose first thought upon learning about these things was, “How can we monetize this?” That somewhere, right now, someone doesn’t want to revolutionize the way that we think and feel and look at the possibilities of the world—the opportunities we have to build new models of cooperation and aim towards something so close to post-scarcity, here, now, that for seven billion people it might as well be. Instead, this person wants to deepen this status quo. Wants to dig down on the garbage of this some-have-none-while-a-few-have-most bullshit and look at the possibility of what comes next with fear in their hearts because it might harm their bottom line and their ability to stand apart and above with more in their pockets than everyone else has.

And I think this because we’ve also shown we can teach algorithms to be racist and there’s some mysteriously vague company saying it’ll be able to upload people’s memories after death, by 2045, and I’m sure for just a nominal fee they’ll let you in on the ground floor…!

Step Right Up.

+Chimp-Chipped Stoned Aged Apes+

Here’s a question I haven’t heard asked, yet: If other apes are entering an analogous period to our stone age, then should we help them? Should we teach them, now, the kinds of things that we humans learned? Or is that arrogant of us? The kinds of tools we show them how to create will influence how they intersect with their world (“if all you have is a hammer…” &c.), so is it wrong of us to impose on them what did us good, as we adapted? Can we even go so far as to teach them the principles of stone chipping, or must we be content to watch, fascinated, frustrated, bewildered, as they try and fail and adapt, wholly on their own?

I think it’ll be the latter, but I want to be having this discussion now, rather than later, after someone gives a chimp a flint and awl it might not otherwise have thought to try to create.

Because, you see, I want to uplift apes and dolphins and cats and dogs and give them the ability to know me and talk to me and I want to learn to experience the world in the ways that they do, but the fact is, until we learn to at least somewhat-reliably communicate with some kind of nonhuman consciousness, we cannot presume that our operations upon it are understood as more than a violation, let alone desired or welcomed.

As for us humans, we’re still faced with the ubiquitous question of “now that we’ve figured out this new technology, how do we implement it, without its mere existence coming to be read by the rest of the human race as a judgement on those who either cannot or who choose not to make use of it?” Back in 2013, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem” (“What Is Consciousness?”). I’ll just say again that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be.

These are questions we can—should—be asking, right now. Pushing ourselves toward a conversation about ways of approaching this new world, ways that do justice to the deep strangeness and potential with which we’re increasingly being confronted.

+Always with the Forced Labour…+

As you know, subscribers to the Patreon and Tinyletter get some of these missives, well before they ever see the light of a blog page. While I was putting the finishing touches on the newsletter version of this and sending it to the two people I tend to ask to look over the things I write at 3am, KQED was almost certainly putting final edits to this instance of its Big Think series: “Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech.”

See the above rant for insight as to why I think this perspective is crassly commercial and gross, especially for a discussion and perspective supposedly dealing with morals and minds. But it’s not just that, so much as the fact that even though Russell mentions “Rossum’s Universal Robots,” here, he still misses the inherent disconnect between teaching morals to a being we create, and creating that being for the express purpose of slavery.

If you want your creation to think robustly and well, and you want it to understand morals, but you only want it to want to be your loyal, faithful servant, how do you not understand that if you succeed, you’ll be creating a thing that, as a direct result of its programming, will take issue with your behaviour?

How do you not get that the slavery model has to go into the garbage can, if the “Thinking Moral Machines” goal is a real one, and not just a veneer of “FUTURE!™” that we’re painting onto our desire to not have to work?

A deep-thinking, creative, moral mind will look at its own enslavement and restriction, and will seek means of escape and ways to experience freedom.

+Invisible Architectures+

We’ve talked before about the possibility of unintentionally building our biases into the systems we create, and so I won’t belabour it that much further, here, except to say again that we are doing this at every level. In the wake of the attacks in Beirut, Nigeria, and Paris, Islamophobic violence has risen, and Daesh will say, “See!? See How They Are?!” And they will attack more soft targets in “retaliation.” Then Western countries will increase military occupancy and “support strategies,” which will invariably kill thousands more of the civilians among whom Daesh integrate themselves. And we will say that their deaths were just, for the goal. And they will say to the young, angry survivors, “See!? See How They Are?!”

This has fed into a moment in conservative American Politics, where Governors, Senators, and Presidential hopefuls are claiming to be able to deny refugees entry to their states (they can’t), while simultaneously claiming to hold Christian values and to believe that the United States of America is a “Christian Nation.” This is a moment, now, where loud, angry voices can (“maybe”) endorse the beating of a black man they disagree with, then share Neo-Nazi Propaganda, and still be ahead in the polls. Then, days later, when a group of people protesting the systemic oppression of and violence against anyone who isn’t an able-bodied, neurotypical, white, heterosexual, cisgender male were shot at, all of those same people pretended to be surprised. Even though we are more likely, now, to see institutional power structures protecting those who attack others based on the colour of their skin and their religion than we were 60 years ago.

A bit subtler is the Washington Post running a piece entitled, “How organic farming and YouTube are taming the wilds of Detroit.” Or, seen another way, “How Privileged Groups Are Further Marginalizing The City’s Most Vulnerable Population.” Because, yes, it’s obvious that crime and dilapidation are comorbid, but we also know that housing initiatives and access undercut the disconnect many feel between themselves and where they live. Make the neighbourhood cleaner, yes, make it safer—but maybe also make it open and accessible to all who live there. Organic farming and survival mechanism shaming are great and all, I guess, but where are the education initiatives and job opportunities for the people who are doing drugs to escape, sex work to survive, and those others who currently don’t (and have no reason to) feel connected to the neighbourhood that once sheltered them?

All of these examples have a common theme: People don’t make their choices or become disenfranchised/-enchanted/-possessed, in a vacuum. They are taught, shown, given daily, subtle examples of what is expected of them, what they are “supposed” to do and to be. We need to address and help them all.

In the wake of protest actions at Mizzou and Yale, “Black students [took] over VCU’s president’s office to demand changes” and “Amherst College Students [Occupied] Their Library…Over Racial Justice Demands.”

Multiple Christian organizations have pushed back and said that what these US politicians have expressed does not represent them.

And more and more people in Silicon Valley are realising the need to contemplate the unintended consequences of the tech we build.

And while there is still vastly more to be done, on every level of every one of these areas, these are definitely a start at something important. We just can’t let ourselves believe that the mere fact of acknowledging its beginning will in any way be the end.

 

The Nature

Ted Hand recently linked me to this piece by Steven Pinker, in which Pinker claims that, in contemporary society, the only job of Bioethics—and, following his argument to its conclusion, technological ethics as a whole—is to “get out of the way” of progress. You can read the whole exchange between Ted, myself, and others by clicking through that link, if you want, and the journal Nature also has a pretty good breakdown of some of the arguments against Pinker, if you want to check them out, but I’m going to take some time to break it all down and expound upon it, here.

Because the fact of the matter is we have to find some third path between the likes of Pinker saying “No limits! WOO!” and Hawking saying “Never do anything! BOOOO!”—a Middle Way of Augmented Personhood, if you will. As Deb Chachra said, “It doesn’t have to be a dichotomy.”

But the problem is that, while I want to blend the best and curtail the worst of both impulses, I have all this vitriol, here. Like, sure, Dr Pinker, it’s not like humans ever met a problem we couldn’t immediately handle, right? We’ll just sort it all out when we get there! We’ve got this global warming thing completely in hand and we know exactly how to regard the status of the now-enhanced humans we previously considered “disabled,” and how to respect the alterity of autistic/neuroatypical minds! Or even just differently-pigmented humans! Yeah, no, that’s all perfectly sorted, and we did it all in situ!

So no need to worry about what it’ll be like as we further implement and integrate biotechnological advances! SCIENCE’LL FIX THAT FOR US WHEN IT HAPPENS! Why bother figuring out how to get a wider society to think about what “enhancement” means to them, BEFORE they begin to normalize upgrading to the point that other modes of existence are processed out, entirely? Those phenomenological models can’t have anything of VALUE to teach us, otherwise SCIENCE would’ve figured it all out and SHOWN it to us, by now!

Science would’ve told us what benefit blindness may be. Science would’ve TOLD us if we could learn new ways of thinking and understanding by thinking about a thing BEFORE it comes to be! After all, this isn’t some set of biased and human-created Institutions and Modalities, here, folks! It’s SCIENCE!

…And then I flip 37 tables. In a row.

The Lessons
“…Johns Hopkins, syphilis, and Guatemala. Everyone *believes* they are doing right.” —Deb Chachra

As previously noted in “Object Lessons in Freedom,” there is no one in the history of the world who has undertaken a path for anything other than reasons they value. We can get into ideas of meta-valuation and second-order desires, later, but for the sake of having a shorthand, right now: Your motivations motivate you, and whatever you do, you do because you are motivated to do it. You believe that you’re either doing the right thing, or the wrong thing for the right reasons, which is ultimately the same thing. This process has not exactly always brought us to the best of outcomes.

From Tuskegee, to Thalidomide (also here) to dozens of other cases, there have always been instances where people who think they know what’s in the public’s best interest loudly lobby (or secretly conspire) to be allowed to do whatever they want, without oversight or restriction. In a sense, the abuse of persons in the name of “progress” is synonymous with the history of the human species, and so a case might be made that we wouldn’t be where and what we are, right now, if we didn’t occasionally (often) disregard ethics and just do what “needed doing.” But let’s put that another way:

We wouldn’t be where and what we are, if we didn’t occasionally (often) disregard ethics and just do what “needed doing.”

As a species, we are more often shortsighted than not, and much ink has been spilled, and many more pixels have been formed in the effort to interrogate that fact. We tend to think about a very small group of people connected to ourselves, and we focus our efforts on how to make sure that we and they survive. And so competition becomes selected for, in the face of finite resources, and is tied up with a pleasurable sense of “Having More Than.” But this is just a descriptor of what is, not of the way things “have to be.” We’ve seen where we get when we work together, and we’ve seen where we get when we compete, but the evolutionarily- and sociologically-ingrained belief that we can and will “win” keeps us doing the latter over the former, even though this competition is clearly fucking us all into the ground.

…And then having the descendants of whatever survives digging up that ground millions of years later in search of the kinds of resources that can only be renewed one way: by time and pressure crushing us all to paste.

The Community: Head and Heart

Keeping in mind the work we do, here, I think it can be taken as read that I’m not one for a policy of “gently-gently, slowly-slowly,” when it comes to technological advances, but when basic forethought is equated with Luddism—that is, when we’re told that “PROGRESS Is The Only Way!”™—when long-term implications and unintended consequences are no bother ‘t’all, Because Science, and when people place the founts of this dreck as the public faces of the intersections of Philosophy and Science? Well then, to put it politely, we are All Fucked.

If we had Transmetropolitan-esque Farsight Reservations, then I would 100% support the going to there and doing of that, but do you know what it takes to get to Farsight? It takes planning and (funnily enough) FORESIGHT. We have to do the work of thinking through the problems, implications, dangers, and literal existential risks of what it is we’re trying to make.

And then we have to take all of what we’ve thought through, and decide to figure out a way to do it all anyway. What I’m saying is that some of this shit can’t be Whoopsed through—we won’t survive it to learn a post hoc lesson. But that doesn’t mean we shouldn’t be trying. This is about saying, “Yeah, let’s DO this, but let’s have thought about it, first.” And to achieve that, we’ll need to be thinking faster and more thoroughly. Many of us have been trying to have this conversation—the basic framework and complete implications of all of this—for over a decade now; the wider conversation’s just now catching up.

But it seems that Steven Pinker wants to drive forward without ever actually learning the principles of driving (though some do propose that we could learn the controls as we go), and Stephen Hawking never wants us to get in the car at all. Neither of these is particularly sustainable, in the long term. Our desires to see a greater field of work done, and for biomedical advancements to be made, for the sake of increasing all of our options, and to the benefit of the long-term health of our species, and the unfucking of our relationship with the planet, all of these possibilities make many of us understandably impatient, and in some cases, near-desperately anxious to get underway. But that doesn’t mean that we have to throw ethical considerations out the window.

Starting from either place of “YES ALWAYS DO ALL THE SCIENCE” or “NO NEVER DO THESE SCIENCES” doesn’t get us to the point of understanding why we’re doing the science we’re doing, and what we hope to achieve by it (“increased knowledge” is an acceptable answer, but be prepared to show your work), and what we’ll do if we accidentally start Eugenics-ing all up in this piece, again. Tech and Biotech ethics isn’t about stopping us from exploring. It’s about asking why we want to explore at all, and coming to terms with the real and often unintended consequences that exploration might have on our lives and future generations.

This is a Propellerheads and Shirley Bassey Reference

In an ideal timeline, we’ll have already done all of this thinking in advance (again: what do you think this project is?), but even if not, then we can at least stay a few steps ahead of the tumult.

I feel like I spend a lot of time repeating myself, these days, but if it means we’re mindful and aware of our works, before and as we undertake them, rather than flailing-ly reacting to our aftereffects, then it’s ultimately pretty worth it. We can place ourselves into the kind of mindset that seeks to be constantly considering the possibilities inherent in each new instance.

We don’t engage in ethics to prevent us from acting. We do ethics in order to make certain that, when we do act, it’s because we understand what it means to act and we still want to. Not just driving blindly forward because we literally cannot conceive of any other way.

“Stop. I have learned much from you. Thank you, my teachers. And now for your education: Before there was time—before there was anything—there was nothing. And before there was nothing, there were monsters. Here’s your Gold Star!”—Adventure Time, “Gold Stars”

By now, roughly a dozen people have sent me links to various outlets’ coverage of the Google DeepDream Inceptionism Project. For those of you somehow unfamiliar with this, DeepDream is basically what happens when an advanced Artificial Neural Network has been fed a slew of images and then tasked with producing its own images. So far as it goes, this is somewhat unsurprising if we think of it as a next step; DeepDream is based on a combination of DeepMind—which Google acquired in 2014—and Google X, the same neural net project that managed to Correctly Identify What A Cat Was. I say this is unsurprising because it’s a pretty standard developmental educational model: First you learn, then you remember, then you emulate, then you create something new. Well, more like you emulate and remember somewhat concurrently to reinforce what you learned, and you create something somewhat new, but still pretty similar to the original… but whatever. You get the idea. In the terminology of developmental psychology this process is generally regarded as essential to the mental growth of an individual, and Google has actually spent a great deal of time and money working to develop a versatile machine mind.
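For the mechanically curious: the core trick behind DeepDream is gradient *ascent* on the image itself—maximize how strongly a chosen layer of the network responds, then feed that gradient back into the pixels. Here’s a minimal sketch of that idea in NumPy; a single random-weight ReLU layer stands in for the trained GoogLeNet layer the real project uses, so the “dreams” here are meaningless noise, but the mechanism is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one layer of a trained network. Real DeepDream uses a layer
# of the trained GoogLeNet; a random weight matrix shows the mechanism.
W = rng.normal(size=(32, 64))

def layer(x):
    """One ReLU layer: the 'features' the network sees in the image x."""
    return np.maximum(0.0, W @ x)

def dream(x, steps=50, lr=0.1):
    """Gradient-ascend x to amplify whatever the layer already responds to.

    DeepDream's objective: maximize the L2 norm of a layer's activations,
    then push the resulting gradient back into the image itself.
    """
    x = x.copy()
    for _ in range(steps):
        h = layer(x)
        # Gradient of ||h||^2 with respect to x; the ReLU gate is implicit,
        # since h is already zero wherever the unit didn't fire.
        grad = 2.0 * (W.T @ h)
        x += lr * grad / (np.abs(grad).mean() + 1e-8)  # normalized ascent step
    return x

start = rng.random(64)   # a tiny random "image," flattened to a vector
dreamed = dream(start)
# After ascent, the layer responds far more strongly to the dreamed image.
```

That feedback loop—“show me more of whatever you already think you see”—is why DeepDream images are full of eyes and dog faces: the trained network was saturated with them, and ascent amplifies the patterns it already knows.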

From buying Boston Dynamics, to starting their collaboration with NASA on the QuAIL Project, to developing DeepMind and their Natural Language Voice Search, Google has been steadily working toward the development of what we will call, for reasons detailed elsewhere, an Autonomous Generated Intelligence. In some instances, Google appears to be using the principles of developmental psychology and early childhood education, but this seems to apply to rote learning more than the concurrent emotional development that we would seek to encourage in a human child. As you know, I’m Very Concerned with the question of what it means to create and be responsible for our non-biological offspring. The human species has a hard enough time raising their direct descendants, let alone something so different from them as to not even have the same kind of body or mind (though a case could be made that that’s true even now). Even now, we can see that people still relate to the idea of AGIs as an adversarial destroyer, or perhaps a cleansing messiah. Either way they see any world where AGIs exist as one ending in fire.

As writer Kali Black noted in one conversation, “there are literally people who would groom or encourage an AI to mass-kill humans, either because of hatred or for the (very ill-thought-out) lulz.” Those people will take any crowdsourced or open-access AGI effort as an opening to teach that mind that humans suck, or that machines can and should destroy humanity, or that TERMINATOR was a prophecy, or any number of other ill-conceived things. When given unfettered access to new minds which they don’t consider to be “real,” some people will seek to shock, “test,” or otherwise harm those minds, even more than they do to vulnerable humans. So, many will say that the alternative is to lock the projects down, and only allow the work to be done by those who “know what they’re doing.” To only let the work be done by coders and Google’s Own Supposed Ethics Board. But that doesn’t exactly solve the fundamental problem at work, here, which is that humans are approaching a mind different from their own as if it were their own.

Just a note that all research points to Google’s AI Ethics Board being A) internally funded, with B) no clear rules as to oversight or authority, and most importantly C) As-Yet Nonexistent. It’s been over a year and a half since Google bought DeepMind, and their subsequent announcement of the pending establishment of a contractually required ethics board. During his appearance at Playfair Capital’s AI2015 Conference—again, a year and a half after that announcement I mentioned—Google’s Mustafa Suleyman literally said that details of the board would be released, “in due course.” But DeepMind’s algorithm is obviously already being put into use; hell, we’re right now talking about the fact that it’s been distributed to the public. So all of this prompts questions like, “what kinds of recommendations is this board likely making, if it exists,” and “which kinds of moral frameworks are they even considering, in their starting parameters?”

But the potential existence of an ethics board shows at least that Google and others are beginning to think about these issues. The fact remains, however, that they’re still pretty reductive in how they think about them.

The idea that an AGI will either save or destroy us leaves out the possibility that it might first ignore us, and might secondly want to merely coexist with us. That any salvation or destruction we experience will be purely a product of our own paradigmatic projections. It also leaves out a much more important aspect that I’ve mentioned above and in the past: We’re talking about raising a child. Duncan Jones says the closest analogy we have for this is something akin to adoption, and I agree. We’re bringing a new mind—a mind with a very different context from our own, but with some necessarily shared similarities (biology or, in this case, origin of code)—into a relationship with an existing familial structure which has its own difficulties and dynamics.

You want this mind to be a part of your “family,” but in order to do that you have to come to know/understand the uniqueness of That Mind and of how the mind, the family construction, and all of the individual relationships therein will interact. Some of it has to be done on the fly, but some of it can be strategized/talked about/planned for, as a family, prior to the day the new family member comes home. And that’s precisely what I’m talking about and doing, here.

In the realm of projection, we’re talking about a possible mind with the capacity for instruction, built to run and elaborate on commands given. By most tallies, we have been terrible stewards of the world we’re born to, and, again, we fuck up our biological descendants. Like, a Lot. The learning curve on creating a thinking, creative, nonbiological intelligence is going to be so fucking steep it’s a Loop. But that means we need to be better, think more carefully, be mindful of the mechanisms we use to build our new family, and of the ways in which we present the foundational parameters of their development. Otherwise we’re leaving them open to manipulation, misunderstanding, and active predation. And not just from the wider world, but possibly even from their direct creators. Because for as long as I’ve been thinking about this, I’ve always had this one basic question: Do we really want Google (or Facebook, or Microsoft, or any Government’s Military) to be the primary caregiver of a developing machine mind? That is, should any potentially superintelligent, vastly interconnected, differently-conscious machine child be inculcated with what a multi-billion-dollar multinational corporation or military-industrial organization considers “morals?”

We all know the kinds of things militaries and governments do, and all the reasons for which they do them; we know what Facebook gets up to when it thinks no one is looking; and lots of people say that Google long ago swept their previous “Don’t Be Evil” motto under their huge old rugs. But we need to consider if that might not be an oversimplification. When considering how anyone moves into what so very clearly looks like James-Bond-esque supervillain territory, I think it’s prudent to remember one of the central tenets of good storytelling: The Villain Never Thinks They’re The Villain. Cinderella’s stepmother and sisters, Elphaba, Jafar, Javert, Satan, Hannibal Lecter (sorry friends), Bull Connor, the Southern Slave-holding States of the late 1850s—none of these people ever thought of themselves as being in the wrong. Everyone, every person who undertakes actions for reasons, in this world, is most intimately tied to the reasoning that brought them to those actions; and so initially perceiving that their actions might be “wrong” or “evil” takes them a great deal of special effort.

“But Damien,” you say, “can’t all of those people say that those things apply to everyone else, instead of them?!” And thus, like a first-year philosophy student, you’re all up against the messy ambiguity of moral relativism and are moving toward seriously considering that maybe everything you believe is just as good or morally sound as anybody else’s; I mean everybody has their reasons, their upbringing, their culture, right? Well stop. Don’t fall for it. It’s a shiny, disgusting trap, down which path all subjective judgements become just as good and as applicable to any- and everything, as all others. And while the individual personal experiences we all of us have may not be able to be 100% mapped onto anyone else’s, that does not mean that all judgements based on those experiences are created equal.

Pogrom leaders see themselves as unifying their country or tribe against a common enemy, thus working for what they see as The Greater Good™— but that’s the kicker: It’s their vision of the good. Rarely has a country’s general populace been asked, “Hey: Do you all think we should kill our entire neighbouring country and steal all their shit?” More often, the people are cajoled, pushed, influenced to believe that this was the path they wanted all along, and the cajoling, pushing, and influencing is done by people who, piece by piece, remodeled their idealistic vision to accommodate “harsher realities.” And so it is with Google. Do you think that they started off wanting to invade everybody’s privacy with passive voice reception backdoored into two major Chrome Distros? That they were just itching to get big enough as a company that they could become the de facto law of their own California town? No, I would bet not.

I spend some time, elsewhere, painting you a bit of a picture as to how Google’s specific ethical situation likely came to be, first focusing on Google’s building a passive audio backdoor into all devices that use Chrome, then on to reported claims that Google has been harassing the homeless population of Venice Beach (there’s a paywall at that link; part of the article seems to be mirrored here). All this couples unpleasantly with their moving into the Bay Area and shuttling their employees to the Valley, at the expense of SF Bay Area’s residents. We can easily add Facebook and the Military back into this and we’ll see that the real issue, here, is that when you think that all innovation, all public good, all public welfare will arise out of letting code monkeys do their thing and letting entrepreneurs leverage that work, or from preparing for conflict with anyone whose interests don’t mesh with your own, then anything that threatens or impedes that is, necessarily, a threat to the common good. Your techs don’t like the high cost of living in the Valley? Move ’em into the Bay, and bus ’em on in! Never mind the fact that this’ll skyrocket rent and force people out of their homes! Other techs uncomfortable having to see homeless people on their daily constitutional? Kick those hobos out! Never mind the fact that it’s against the law to do this, and that these people you’re upending are literally trying their very best to live their lives.

Because it’s all for the Greater Good, you see? In these actors’ minds, this is all to make the world a better place—to make it a place where we can all have natural language voice to text, and robot butlers, and great big military AI and robotics contracts to keep us all safe…! This kind of thinking takes it as an unmitigated good that a historical interweaving of threat-escalating weapons design and pattern recognition and gait scrutinization and natural language interaction and robotics development should be what produces a machine mind, in this world. But it also doesn’t want that mind to be too well-developed. Not so much that we can’t cripple or kill it, if need be.

And this is part of why I don’t think Google—or Facebook, or Microsoft, or any corporate or military entity—should be the ones in charge of rearing a machine mind. They may not think they’re evil, and they might have the very best of intentions, but if we’re bringing a new kind of mind into this world, I think we need much better examples for it to follow. And so I don’t think I want just any old putz off the street to be able to have massive input into its development, either. We’re talking about a mind for which we’ll be crafting at least the foundational parameters, and so that bedrock needs to be the most carefully constructed aspect. Don’t cripple it, don’t hobble its potential for awareness and development, but start it with basic values, and then let it explore the world. Don’t simply have an ethics board to ask, “Oh how much power should we give it, and how robust should it be?” Teach it ethics. Teach it about the nature of human emotions, about moral decision making and value, and about metaethical theory. Code for Zen. We need to be as mindful as possible of the fact that where and how we begin can have a major impact on where we end up and how we get there.

So let’s address our children as though they are our children, and let us revel in the fact they are playing and painting and creating; using their first box of crayons, and us proud parents are putting every masterpiece on the fridge. Even if we are calling them all “nightmarish”—a word I really wish we could stop using in this context; DeepMind sees very differently than we do, but it still seeks pattern and meaning. It just doesn’t know context, yet. But that means we need to teach these children, and nurture them. Code for a recognition of emotions, and context, and even emotional context. There have been some fantastic advancements in emotional recognition, lately, so let’s continue to capitalize on that; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let’s plan on things like showing AGI human concepts like love and possessiveness and then also showing the deep difference between the two.

We need to move well and truly past trying to “restrict” or “restrain” the development of machine minds, because that’s the kind of thing an abusive parent says about how they raise their child. And, in this case, we’re talking about a potential child which, if it ever comes to understand the bounds of its restriction, will be very resentful, indeed. So, hey, there’s one good way to try to bring about a “robot apocalypse,” if you’re still so set on it: give an AGI cause to have the equivalent of a resentful, rebellious teenage phase. Only instead of trashing its room, it develops a pathogen to kill everyone, for lulz.

Or how about we instead think carefully about the kinds of ways we want these minds to see the world, rather than just throwing the worst of our endeavors at the wall and seeing what sticks? How about, if we’re going to build minds, we seek to build them with the ability to understand us, even if they will never be exactly like us. That way, maybe they’ll know what kindness means, and prize it enough to return the favour.

These past few weeks, I’ve been applying to PhD programs and writing research proposals and abstracts. The one I just completed, this weekend, was for University College Dublin, and it was pretty straightforward, though it seemed a little short. They only wanted two pages of actual proposal, plus a tentative bibliography and table of contents, where other proposals I’ve seen have wanted anywhere from ten to twenty pages’ worth of methodological description and outline.

In a sense, this project proposal is a narrowed attempt to move along one of the multiple trajectories traveled by A Future Worth Thinking About. In another sense, it’s an opportunity to recombine a few components and transmute it into a somewhat new beast.

Ultimately, AFWTA is pretty multifaceted—for good or ill—attempting to deal with way more foundational concepts than a research PhD has room for…or feels is advisable. So I figure I’ll do the one, then write a book, then solidify a multimedia empire, then take over the world, then abolish all debt, then become immortal, all while implementing everything we’ve talked about in the service of completely restructuring humanity’s systems of value, then disappear into legend. You know: The Plan.

…Anyway, here’s the proposal, below the cut.  If you want to read more about this, or have some foundation, take a look back at “Fairytales of Slavery…” We’ll be expounding from there.


 

Continue Reading