Invisible Architecture of Bias


What is The Real?

I have been working on this piece for a little more than a month, since just after Christmas. What with one thing and another, I kept refining it while, every week, it seemed more and more pertinent and timely. You see, we need to talk about ontology.

Ontology is an aspect of metaphysics, the word translating literally to “the study of what exists.” Connotatively, we might rather say, “trying to figure out what’s real.” Ontology necessarily intersects with studies of knowledge and studies of value, because in order to know what’s real you have to understand what tools you think are valid for gaining knowledge, and you have to know whether knowledge is even something you can attain, as such.

Take, for instance, the recent evolution of the catchphrase of “fake news,” the thinking behind it that allows people to call lies “alternative facts,” and the fact that all of these elements are already being rotated through several dimensions of meaning that those engaging in them don’t seem to notice. What I mean is that the inversion of the catchphrase “fake news” into a cipher for active confirmation bias was always going to happen. It and any consternation at it comprise a situation that is borne forth on a tide of intentional misunderstandings.

If you were using “fake” to mean, “actively mendacious; false; lies,” then there was a complex transformation happening here that you didn’t get:

There are people who value the actively mendacious things you deemed “wrong”—by which you meant both “factually incorrect” and “morally reprehensible”—and they valued them on a nonrational, often actively a-rational level. By this, I mean both that they value the claims themselves, and that they have underlying values which cause them to make the claims. In this way, the claims both are valued and reinforce underlying values.

So when you called their values “fake news” and told them that “fake news” (again: their values) ruined the country, they—not to mention those actively preying on their a-rational valuation of those things—responded with “Nuh-uh! your values ruined the country! And that’s why we’re taking it back! MAGA! MAGA! Drumpfthulhu Fhtagn!”

Logo for the National Geographic Channel’s “IS IT REAL?” Many were concerned that NG Magazine were going to change their climate change coverage after they were bought by 21st Century Fox.

You see? They mean “fake news” along the same spectrum as they mean “Real America.” They mean that it “FEELS” “RIGHT,” not that it “IS” “FACT.”

Now, we shouldn’t forget that there’s always some measure of preference to how we determine what to believe. As John Flowers puts it, ‘Truth has always had an affective component to it: those things that we hold to be most “true” are those things that “fit” with our worldview or “feel” right, regardless of their factual veracity.

‘We’re just used to seeing this in cases of trauma, e.g.: “I don’t believe he’s dead,” despite being informed by a police officer.’

Which is precisely correct, and as such the idea that the affective might be the sole determinant is nearly incomprehensible to those of us who are used to thinking of facts as things that are verifiable by reference to externalities as well as values. At least, this is the case for those of us who even relativistically value anything at all. Because there’s also always the possibility that the engagement of meaning plays out in a nihilistic framework, in which we have neither factual knowledge nor moral foundation.

Epistemic Nihilism works like this: If we can’t ever truly know anything—that is, if factual knowledge is beyond us, even at the most basic “you are reading these words” kind of level—then there is no description of reality to be valued above any other, save what you desire at a given moment. This is also where nihilism and skepticism intersect. In both positions nothing is known, and it might be the case that nothing is knowable.

So, now, a lot has been written about not only the aforementioned “fake news,” but also its over-arching category of “post-truth,” said to be our present moment where people believe (or pretend to believe) in statements or feelings, independent of their truth value as facts. But these ideas are neither new nor unique. In fact, Simpsons Did It. More than that, though, people have always allowed their values to guide them to beliefs that contradict the broader social consensus, and others have always eschewed values entirely, for the sake of self-gratification. What might be new, right now, is the willfulness of these engagements, or perhaps their intersection. It might be the case that we haven’t before seen gleeful nihilism so forcefully become the rudder of gormless, value-driven decision-making.

Again, values are not bad, but when they sit unexamined and are the sole driver of decisions, they’re just another input variable to be gamed, by those of a mind to do so. People who believe that nothing is knowable and nothing matters will, at the absolute outside, seek their own amusement or power, though it may be said that nihilism in which one cares even about one’s own amusement is not genuine nihilism, but is rather “nihilism,” which is just relativism in a funny hat. Those who claim to value nothing may just be putting forward a front, or wearing a suit of armour in order to survive an environment where having your values known makes you a target.

If they act as though they believe there is no meaning, and no truth, then they can make you believe that they believe that nothing they do matters, and therefore there’s no moral content to any action they take, and so no moral judgment can be made on them for it. In this case, convincing people to believe news stories they make up is in no way materially different from researching so-called facts and telling the rest of us that we should trust and believe them. And the first way’s also way easier. In fact, preying on gullible people and using their biases to make yourself some lulz, deflect people’s attention, and maybe even get some of those sweet online ad dollars? That’s just common sense.

There’s still something to be investigated here, in terms of what all of this does for reality as we understand and experience it: how what is meaningful, what is true, what is describable, and what is possible all intersect and create what is real. Because there is something real, here—not “objectively,” as that just lets you abdicate your responsibility for and to it, but perhaps intersubjectively. What that means is that we generate our reality together. We craft meaning and intention and ideas and the words to express them, together, and the value of those things and how they play out all sit at the place where multiple spheres of influence and existence come together, and interact.

To understand this, we’re going to need to talk about minds and phenomenological experience.

 

What is a Mind?

We have discussed before the idea that what an individual is and what they feel is not only shaped by their own experience of the world, but by the exterior forces of society and the expectations and beliefs of the other people with whom they interact. These social pressures shape and are shaped by all of the people engaged in them, and the experience of existence had by each member of the group will be different. That difference will range on a scale from “ever so slight” to “epochal and paradigmatic,” with the latter being able to spur massive misunderstandings and miscommunications.

In order to really dig into this, we’re going to need to spend some time thinking about language, minds, and capabilities.

Here’s an article that discusses the idea that your mind isn’t confined to your brain. This isn’t meant in a dualistic or spiritualistic sense, but as the fundamental idea that our minds are more akin to, say, an interdependent process that takes place via the interplay of bodies, environments, other people, and time, than they are to specifically-located events or things. The problem with this piece, as my friends Robin Zebrowski and John Flowers both note, is that it leaves out way too many thinkers. People like Andy Clark, David Chalmers, Maurice Merleau-Ponty, John Dewey, and William James have all discussed something like this idea of a non-local or “extended” mind, and they are all greatly preceded by the fundamental construction of the Buddhist view of the self.

Within most schools of Buddhism, Anatta, or “no self,” is how one refers to one’s individual nature. Anatta is rooted in the idea that there is no singular, “true” self. To vastly oversimplify, there is a concept known as “The Five Skandhas” or “aggregates.” These are the parts of yourself that are knowable and which you think of as permanent, and they are your:

Material Form (Body)
Feelings (Pleasure, Pain, Indifference)
Perception (Senses)
Mental Formations (Thoughts)
Consciousness


Image of People In a Boat, from a Buddhist Wheel of Life.

Along with the skandhas, there are two main arguments that go into proving that you don’t have a self, known as “The Argument From Control” and “The Argument from Impermanence”:

1) If you had a “true self,” it would be the thing in control of the whole of you, and since none of the skandhas is in complete control of the rest—and, in fact, all seem to have some measure of control over all—none of them is your “true self.”

2) If you had a “true self,” it would be the thing about you that was permanent and unchanging, and since none of the skandhas is permanent and unchanging—and, in fact, all seem to change in relation to each other—none of them is your “true self.”

The interplay between these two arguments also combines with an even more fundamental formulation: If only the observable parts of you are valid candidates for “true selfhood,” and if the skandhas are the only things about yourself that you can observe, and if none of the skandhas is your true self, then you have no true self.
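The bare logical skeleton of that last formulation can be sketched formally. In this hedged Lean sketch, the predicate names `skandha` and `trueSelf` are invented placeholders for the argument’s terms, not a claim about how the tradition itself would formalize them:

```lean
-- A sketch of the "no true self" syllogism, with invented predicate names.
variable {Part : Type}
variable (skandha trueSelf : Part → Prop)

-- h1: any true self would have to be among the skandhas
--     (only observable parts are candidates, and the skandhas
--      exhaust what is observable about you).
-- h2: no skandha is the true self (by the arguments from
--     control and impermanence).
example
    (h1 : ∀ p, trueSelf p → skandha p)
    (h2 : ∀ p, skandha p → ¬ trueSelf p) :
    ∀ p, ¬ trueSelf p :=
  fun p hp => h2 p (h1 p hp) hp
```

The conclusion follows purely from the shape of the premises, which is the point: the work of the argument lives in defending h1 and h2, which is what the skandha analysis and the two arguments above are doing.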

Take a look at this section of “The Questions of King Milinda,” for a kind of play-by-play of these arguments in practice. (But also remember that Milinda was Menander, a man who was raised in the aftermath of Alexandrian Greece, and so he knew the works of Socrates and Plato and Aristotle and more. So that use of the chariot metaphor isn’t an accident.)

We are an interplay of forces and names, habits and desires, and we draw a line around all of it, over and over again, and we call that thing around which we draw that line “us,” “me,” “this-not-that.” But the truth of us is far more complex than all of that. We are minds in bodies and in the world in which we live and the world and relationships we create. All of which kind of puts paid to the idea that an octopus is like an alien to us because it thinks with its tentacles. We think with ours, too.

As always, my tendency is to play this forward a few years to make us a mirror via which to look back at ourselves: Combine this idea about the epistemic status of an intentionally restricted machine mind; with the StackGAN process, which does “Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks,” or, basically, you describe in basic English what you want to see and the system creates a novel output image of it; with this long read from NYT on “The Great AI Awakening.”

This last considers how Google arrived at the machine learning model it’s currently working with. The author, Gideon Lewis-Kraus, discusses the pitfalls of potentially programming biases into systems, but the whole piece displays a kind of… meta-bias? Wherein there is an underlying assumption that “philosophical questions” are, again, simply shorthand for “not practically important,” or “having no real-world applications,” even as the author discusses ethics and phenomenology, and the nature of what makes a mind. In addition to that, there is a startling lack of gender variation within the piece.

Because asking the question, “How do the women in Silicon Valley remember that timeframe?” is likely to get you very different perspectives than what we’re presented with, here. What kind of ideas were had by members of marginalized groups, but were ignored or eternally back-burnered because of that marginalization? The people who lived and worked and tried to fit in and have their voices heard while not being a “natural” for the framework of that predominantly cis, straight, white, able-bodied (though the possibility of unassessed neuroatypicality is high), male culture will likely have different experiences, different contextualizations, than those who do comprise the predominant culture. The experiences those marginalized persons share will not be exactly the same, but there will be a shared tone and tenor of their construction that will most certainly set itself apart from those of the perceived “norm.”

Everyone’s lived experience of identity will manifest differently, depending upon the socially constructed categories to which they belong, which means that even those of us who belong to one or more of the same socially constructed categories will not have exactly the same experience of them.

Living as a disabled woman, as a queer black man, as a trans lesbian, or any number of other identities will necessarily colour the nature of what you experience as true, because you will have access to ways of intersecting with the world that are not available to people who do not live as you live. If your experience of what is true differs, then this will have a direct impact on what you deem to be “real.”

At this point, you’re quite possibly thinking that I’ve undercut everything we discussed in the first section; that now I’m saying there isn’t anything real, and that it’s all subjective. But that’s not where we are. If you haven’t, yet, I suggest reading Thomas Nagel’s “What Is It Like To Be A Bat?” for a bit on individually subjective phenomenological experience, and seeing what he thinks it does and proves. Long story short, there’s something it “is like” to exist as a bat, and even if you or I could put our minds in a bat body, we would not know what it’s like to “be” a bat. We’d know what it was like to be something that had been a human who had put its brain into a bat. The only way we’d ever know what it was like to be a bat would be to forget that we were human, and then “we” wouldn’t be the ones doing the knowing. (If you’re a fan of Terry Pratchett’s Witch books, in his Discworld series, think of the concept of Granny Weatherwax’s “Borrowing.”)

But what we’re talking about isn’t the purely relative and subjective. Look carefully at what we’ve discussed here: We’ve crafted a scenario in which identity and mind are co-created. The experience of who and what we are isn’t solely determined by our subjective valuation of it, but also by what others expect, what we learn to believe, and what we all, together, agree upon as meaningful and true and real. This is intersubjectivity. The elements of our constructions depend on each other to help determine each other, and the determinations we make for ourselves feed into the overarching pool of conceptual materials from which everyone else draws to make judgments about themselves, and the rest of our shared reality.

 

The Yellow Wallpaper

Looking at what we’ve woven, here, what we have is a process that must be undertaken before certain facts of existence can be known and understood (the experiential nature of learning and comprehension being something else that we can borrow from Buddhist thought). But it’s still the nature of such presentations to be taken up and imitated by those who want what they perceive as the benefits or credit of having done the work. Certain people will use the trappings and language by which we discuss and explore the constructed nature of identity, knowledge, and reality, without ever doing the actual exploration. They are not arguing in good faith. Their goal is not truly to further understanding, or to gain a comprehension of your perspective, but rather to make you concede the validity of theirs. They want to force you to give them a seat at the table, one which, once taken, they will use to loudly declaim to all attending that, for instance, certain types of people don’t deserve to live, by virtue of their genetics, or their socioeconomic status.

Many have learned to use the conceptual framework of social liberal post-structuralism in the same way that some viruses use the shells of their host’s cells: As armour and cover. By adopting the right words and phrases, they may attempt to say that they are “civilized” and “calm” and “rational,” but make no mistake, Nazis haven’t stopped trying to murder anyone they think of as less-than. They have only dressed their ideals up in the rhetoric of economics or social justice, so that they can claim that anyone who stands against them is the real monster. Incidentally, this tactic is also known to be used by abusers to justify their psychological or physical violence. They manipulate the presentation of experience so as to make it seem like resistance to their violence is somehow “just as bad” as their violence. When, otherwise, we’d just call it self-defense.

If someone deliberately games a system of social rules to create a win condition in which they get to do whatever the hell they want, that is not of the same epistemic, ontological, or teleological—meaning, nature, or purpose—let alone moral status as someone who is seeking to have other people in the world understand the differences of their particular lived experience so that they don’t die. The former is just a way of manipulating perceptions to create a sense that one is “playing fair” when what they’re actually doing is making other people waste so much of their time countenancing their bullshit enough to counter and disprove it that they can’t get any real work done.

In much the same way, there are also those who will pretend to believe that facts have no bearing, that there is neither intersubjective nor objective verification for everything from global temperature levels to how many people are standing around in a crowd. They’ll pretend this so that they can say what makes them feel powerful, safe, strong, in that moment, or to convince others that they are, or simply, again, because lying and bullshitting amuses them. And the longer you have to fight through their faux justification for their lies, the more likely you’re too exhausted or confused about what the original point was to do anything else.

Side-by-side comparison of President Obama’s first Inauguration (Left) and Donald Trump’s Inauguration (Right).

If we are going to maintain a sense of truth and claim that there are facts, then we must be very careful and precise about the ways in which we both define and deploy them. We have to be willing to use the interwoven tools and perspectives of facts and values, to tap into the intersubjectively created and sustained world around us. Because, while there is a case to be made that true knowledge is unattainable, and some may even try to extend that to say that any assertion is as good as any other, it’s not necessary that one understands what those words actually mean in order to use them as cover for their actions. One would just have to pretend well enough that people think it’s what they should be struggling against. And if someone can make people believe that, then they can do and say absolutely anything.


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider either becoming a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.

 

I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, today, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind discussed what it was that this new group would be doing out in the world. From the Article:

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We’ve already seen a chat bot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people and a system that rates the risk of someone committing a crime that appears to be biased against black people. If a more diverse set of eyes are looking at AI before it reaches the public, the thinking goes, these kinds of thing can be avoided.

The rub is that, even if this group can agree on a set of ethical principles–something that will be hard to do in a large group with many stakeholders—it won’t really have a way to ensure those ideals are put into practice. Although one of the organization’s tenets is “Opposing development and use of AI technologies that would violate international conventions or human rights,” Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier just this year, for WIRED’s AI Issue. It’s interesting to see the amount of attention this topic’s drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m really fairly glad that more people are more and more willing to have this discussion, at all.

To see my comments and read the rest of the article, click through, here: “Tech Giants Team Up to Keep AI From Getting Out of Hand”

[Originally Published at Eris Magazine]

So Gabriel Roberts asked me to write something about police brutality, and I told him I needed a few days to get my head in order. The problem being that, with this particular topic, the longer I wait on this, the longer I want to wait on this, until, eventually, the avoidance becomes easier than the approach by several orders of magnitude.

Part of this is that I’m trying to think of something new worth saying, because I’ve already talked about these conditions, over at A Future Worth Thinking About. We talked about this in “On The Invisible Architecture of Bias,” “Any Sufficiently Advanced Police State…,” “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences,” and most recently in “On the European Union’s “Electronic Personhood” Proposal.” In these articles, I briefly outlined the history of systemic bias within many human social structures, and the possibility and likelihood of that bias translating into our technological advancements, such as algorithmic learning systems, use of and even access to police body camera footage, and the development of so-called artificial intelligence.

Long story short, the endemic nature of implicit bias in society as a whole, plus the even more insular Us-Vs-Them mentality within the American prosecutorial legal system, plus the fact that American policing was literally born out of slavery and the work of groups like the KKK, equals a series of interlocking systems in which people who are not whitepassing, not male-perceived, not straight-coded, not “able-bodied” (what we can call white supremacist, ableist, heteronormative, patriarchal hegemony, but we’ll just use the acronym WSAHPH, because it satisfyingly recalls that bro-ish beer advertising campaign from the late ’90s and early 2000s) stand a far higher likelihood of dying at the hands of agents of that system.

Here’s a quote from Sara Ahmed in her book The Cultural Politics of Emotion, which neatly sums this up: “[S]ome bodies are ‘in an instant’ judged as suspicious, or as dangerous, as objects to be feared, a judgment that can have lethal consequences. There can be nothing more dangerous to a body than the social agreement that that body is dangerous.”

At the end of this piece, I’ve provided some of the same list of links that sits at the end of “On The Invisible Architecture of Bias,” just to make it that little bit easier for us to find actual evidence of what we’re talking about, here, but, for now, let’s focus on these:

A Brief History of Slavery and the Origins of American Policing
2006 FBI Report on the infiltration of Law Enforcement Agencies by White Supremacist Groups
June 20, 2016 “Texas Officers Fired for Membership in KKK”

And then we’ll segue to the fact that we are, right now, living through the exemplary problem of the surveillance state. We’ve always been told that cameras everywhere will make us all safer, that they’ll let people know what’s going on and that they’ll help us all. People doubted this, even in Orwell’s day, noting that the more surveilled we are, the less freedom we have, but more recently people have started to hail this from the other side: Maybe videographic oversight won’t help the police help us, but maybe it will help keep us safe from the police.

But the sad fact of the matter is that there’s video of Alton Sterling being shot to death while restrained, and video of John Crawford III being shot to death by a police officer while holding a toy gun down at his side in a big box store where it was sold, and there’s video of Alva Braziel being shot to death while turning around with his hands up as he was commanded to do by officers, of Eric Garner being choked to death, of Delrawn Small being shot to death by an off-duty cop who cut him off in traffic. There’s video of so damn many deaths, and nothing has come of most of them. There is video evidence showing that these people were well within their rights, and in lawful compliance with officers’ wishes, and they were all shot to death anyway, in some cases by people who hadn’t even announced themselves as cops, let alone ones under some kind of perceived threat.

The surveillance state has not made us any safer, it’s simply caused us to be confronted with the horror of our brutality. And I’d say it’s no more than we deserve, except that even with protests and retaliatory actions, and escalations to civilian drone strikes, and even Newt fucking Gingrich being able to articulate the horrors of police brutality, most of those officers are still on the force. Many unconnected persons have been fired, for indelicate pronouncements and even white supremacist ties, but how many more are still on the force? How many racist, hateful, ignorant people are literally waiting for their chance to shoot a black person because he “resisted” or “threatened?” Or just plain disrespected. And all of that is just what happened to those people. What’s distressing is that those much more likely to receive punishment, however unofficial, are the ones who filmed these interactions and provided us records of these horrors, to begin with. Here, from Ben Norton at Salon.com, is a list of what happened to some of the people who have filmed police killings of non-police:

Police have been accused of cracking down on civilians who film these shootings.

Ramsey Orta, who filmed an NYPD cop putting unarmed black father Eric Garner in a chokehold and killing him, says he has been constantly harassed by police, and now faces four years in prison on drugs and weapons charges. Orta is the only one connected to the Garner killing who has gone to jail.

Chris LeDay, the Georgia man who first posted a video of the police shooting of Alton Sterling, also says he was detained by police the next day on false charges that he believes were a form of retaliation.

Early media reports on the shooting of Small uncritically repeated the police’s version of the incident, before video exposed it to be false.

Wareham noted that the surveillance footage shows “the cold-blooded nature of what happened, and that the cop’s attitude was, ‘This was nothing more than if I had stepped on an ant.'”

As we said, above, black bodies are seen as inherently dangerous and inhuman. This perception is trained into officers at an unconscious level, and is continually reinforced throughout our culture. Studies like the Implicit Association Test, this survey of U.Va. medical students, and this one of shooter bias all clearly show that people are more likely to a) associate words relating to evil and inhumanity with, b) believe pain receptors work in a fundamentally different fashion within, and c) shoot more readily at bodies that do not fit within WSAHPH. To put that a little more plainly, people have a higher tendency to think of non-WSAHPH bodies as fundamentally inhuman.

And yes, as we discussed, in the plurality of those AFWTA links, above, there absolutely is a danger of our passing these biases along not just to our younger human selves, but to our technology. In fact, as I’ve been saying often, now, the danger is higher, there, because we still somehow have a tendency to think of our technology as value-neutral. We think of our code and (less these days) our design as some kind of fundamentally objective process, whereby the world is reduced to lines of logic and math, and that simply is not the case. Codes are languages, and languages describe the world as the speaker experiences it. When we code, we are translating our human experience, with all of its flaws, biases, perceptual glitches, errors, and embellishments, into a technological setting. It is no wonder then that the algorithmic systems we use to determine the likelihood of convict recidivism, and thus their bail and sentencing recommendations, are seen to exhibit the same kind of racially-biased decision-making as the humans they learned from. How could this possibly be a surprise? We built these systems, and we trained them. They will, in some fundamental way, reflect us. And, at the moment, not much terrifies me more than that.
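To make that mechanism concrete, here is a minimal, hypothetical sketch. The groups, the numbers, and the scoring rule are all invented for illustration; real risk-assessment systems are far more elaborate, but the failure mode is the same: a model that only “learns from the data” hands the skew of that data back to us as a score.

```python
# Toy illustration of bias propagation: a "risk model" trained on
# biased historical data. All groups and figures are hypothetical.

# Historical records: (group, was_rearrested). Suppose group B was
# policed more heavily, so identical behaviour produced more arrests.
history = [("A", 0)] * 80 + [("A", 1)] * 20 + \
          [("B", 0)] * 50 + [("B", 1)] * 50

def train(records):
    """'Learn' a per-group rearrest rate -- the simplest possible model."""
    totals, hits = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + outcome
    return {g: hits[g] / totals[g] for g in totals}

risk = train(history)

# The model has no notion of race or of policing practices; it simply
# echoes the skew of its training data back as a "risk score."
print(risk)  # {'A': 0.2, 'B': 0.5}
```

Nothing in that code mentions who was watched more closely or arrested more readily, and that is exactly the problem: the bias arrives already baked into the labels, and the math faithfully preserves it.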

Last week saw the use of a police bomb squad robot to kill an active shooter. Put another way, we carried out a drone strike on a civilian in Dallas, because we “saw no other option.” So that’s in the Overton Window, now. And the fact that it was in response to a shooter who was targeting any and all cops as a mechanism of retribution against police brutality and violence against non-WSAHPH bodies means that we have thus increased the divisions between those of us who would say that anti-police-overreach stances can be held without hating the police themselves and those of us who think that any perceived attack on authorities is a real, existential threat, and thus deserving of immediate destruction. How long do we really think it’s going to be until someone with hate in their heart says to themselves, “Well if drones are on the table…” and straps a pipebomb to a quadcopter? I’m frankly shocked it hasn’t happened yet, and this line from the Atlantic article about the incident tells me that we need to have another conversation about normalization and depersonalization, right now, before it does:

“Because there was an imminent threat to officers, the decision to use lethal force was likely reasonable, while the weapon used was immaterial.”

Because if we keep this arms race up among civilian populations—and the police are still civilians which literally means that they are not military, regardless of how good we all are at forgetting that—then it’s only a matter of time before the overlap between weapons systems and autonomous systems comes home.

And as always—but most especially in the wake of this week and the still-unclear events of today—if we can’t sustain a nuanced investigation of the actual meaning of nonviolence in the Reverend Doctor Martin Luther King, Jr.’s philosophy, then now is a good time to keep his name and words out of our mouths.

Violence isn’t only dynamic physical harm. Hunger is violence. Poverty is violence. Systemic oppression is violence. All of the invisible, interlocking structures that sustain disproportionate Power-Over at the cost of some person or persons’ dignity are violence.

Nonviolence means a recognition of these things and our places within them.

Nonviolence means using all of our resources in sustained battle against these systems of violence.

Nonviolence means struggle against the symptoms and diseases killing us all, both piecemeal, and all at once.

 

Further Links:


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider either becoming a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.

In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you might think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what it’s doing, here, and is creating a situation that is going to have repercussions it has nowhere near planned for.

Now, I will say that looking at the actual Draft, it reads like something with which I’d be more likely to be on board. Reuters did no favours whatsoever for the level of nuance in this proposal. That being said, the focus of this draft proposal seems to be entirely on liability and holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still widely misses many of the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications, and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from a PopSci.com article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some predetermined vague notion of humanness. If the people in the poetry contest hadn’t been primed to assume that one of them was from a computer, how would they have rated them? What if they were all from a computer, but were told to expect only half? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth‘s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods ” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that in order to even let us dig into these potentially deeply-biased algorithms, here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise Dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. All that being said, this was an informal lecture version of an in-process paper, of which there’ll be a much better version, soon, but the content of the piece is… timely, I felt. So I hope you enjoy it.

Until Next Time.



[Originally posted at Eris Magazine]

I recently watched and have been spending some time with the semiotics of the video and Super Bowl halftime performance of Beyoncé’s “FORMATION.” And if that statement makes you snicker, then YOU NEED TO SPEND SOME TIME WITH THE SEMIOTICS OF THESE PERFORMANCES.

There have already been thinkpieces on this in Vox, the Guardian, The Washington Post, and many influential unaffiliated blogs, and rightly so. They’ve all variously and collectively discussed the connection of the visuals and clips used to the aftermath of Hurricane Katrina, the current work of the Black Lives Matter movement, and the deeply feminist message that Beyoncé has been working to build for the past several years. What most of them don’t do, however, is recognise that she’s not only invoking these images and themes to use for her own ends, she’s evoking them. She’s conjuring the truth of a particular perspective into being. Beyoncé has created an act of magic, here.

In order to have this discussion, we are going to have to reference certain kinds of magic, and religion, and voodoo, and hoodoo. I’m not necessarily going to give you individual treatises on each of these things, right now, but rather, we’ll do a quick primer.

Magic is generally understood as the process of causing something to happen, by mysterious or impossible means. Action at a distance, words of power, ritual formulations (and formations) to bring about particular ends. We can talk about the work of Crowley or Spare or Foisy or any other prominent magicians who evoke and invoke powers beyond the “normal,” and who resonate with language and the manipulation thereof. One of the theories of Austin Osman Spare’s chaos magic is the Sigil—an intention written down, then abstracted into a piece of art. Many have taken this and evolved out of it the idea of the hypersigil. A play or television show or a song that carries the intention out into the world, and gains potency as it is viewed, heard, engaged. Think Chambers’ The King in Yellow or Sorkin’s The West Wing.

The kind of religion we’re seeing, here, is deep south Black Christianity, of a specifically New Orleans variety, and we have to understand that that was always tinged with the Haitian, with the Creole, with Voudun. The dancing and drumming traditions of the West African Yoruba (hear that beat, in the Halftime Show?) melded and blended with the work of Christian Missionaries and became the immediate subject of white terror. It was always a way for the black population to remain familiar with the spirits and gods of their ancestors, while nominally capitulating and then actively incorporating the beliefs of their captors.

Hoodoo is of a similar lineage, but can be seen as more organic, in a literal sense; a system developed by the slow accretion of emotion and spirit and desire to survive through whatever means nature brings to hand. Hoodoo is often referenced as “rootworking,” or “conjuration,” and it ties the ideas of the Justice or Providence of the Christian God directly into the more practical, “down and dirty” aspects of doing what you have to do to make a life. Hoodoo is, in many ways, more akin to the kind of “messier” witchcraft Western audiences are used to seeing, with its potions and herbs.

In “FORMATION,” Beyoncé has braided all of these elements and more into a single working. While the nature of the thing necessitated that there would be far more spectacle at play in the Super Bowl Halftime Show, at base, both of these performances constitute Beyoncé broadcasting an immediate reappropriation of power to her community—power from her ancestors and people, power from their struggle and overcoming—as well as a warding against all those who might try to ape, steal, or dilute that same power. The repeated scene on the plantation steps encompasses a system of power that was specifically denied to the people of this place, for a very long time, and Beyoncé is reclaiming that system, manipulating it (see the section below about hands), and standing present as matriarch of this place. She is at once source and example of power. This working (because that’s the only way it feels right to talk about the song, the video, the Halftime Show, together) is a show of strength through particularly highlighted points of vulnerability, a casting of a protection that is felt to be needed, now.

Look at how hands move, in the official video. There’s the interplay of the Raised Hands: In the church scenes we see hands raised in praise and surrender to the God of that church; these are interspersed with shots of the extraordinarily sharp double edge of what appears to be a 12 year old black boy* dancing as hard and as well as he can in front of a row of white police officers decked out in riot gear. As the boy dances harder and harder, the praise in the church reaches a pitch, and as the congregation throws their hands up, again, in surrender, the police throw their hands up to the boy. Surrender and praise, intertwined. The recognition of something more powerful, more perfect, something worth surrendering to.

I can’t even deconstruct this scene without crying. And if you still didn’t get it, there’s a shot of graffiti at the very end of it all that says, “Stop Shooting Us.”

This working is also an expression of Self-Possession, in all possible senses of that term. Either explicitly or implicitly, in “Formation” Beyoncé calls out every criticism leveled at her in recent months and years, from the ridiculous to those that must have hurt her to read or hear. Member of the Illuminati, too white, too black, too slutty, too rich, too reckless. She takes every single one of these and she holds them up, turns them to the light of who and what she knows she is and can be—what we all can be (“We Slay”)—and focuses it back on any and all who would try to tear down her or her people. Beyoncé’s constant hoodoo hand and body movements are specific and intentional, meant to evoke the grasping of power, the constant, predatory awareness of all comers, the building and creation of her self, and an exhortation for others—specifically other women—to do the same.

There hasn’t been a series of lines as powerful to rally around as “Come on ladies, let’s all get in Formation/Prove to me you got some coordination/Slay, trick, or you get eliminated” in a very long time. In her specific articulation of these syllables, she simultaneously reinforces herself as Slayer while leaving open the possibility of being slain—either in the sense of impressed, or of taken down—by whatever women can work hard enough, dig deep enough, and come together to slay her and the whole wide world. And anyone who doesn’t, she warns, is lost.

Again, much has already been written about the powerfully pro-black, pro-feminist message of this song, the video, and Super Bowl performance, with several writers opining that they don’t feel that this was “for them.” And maybe it wasn’t. That is, this was not something that those writers were meant to feel as a deep shared connection of lived experience, in the way that it would be felt by those ravaged by Katrina, or disproportionately targeted by police action and the United States’ prosecutorial justice system, or who struggle to maintain a balance of what it means to be a “Negro.” Some may not be meant to “get” that; it may not be “for you.” But, if not, then what is for you in these presentations is to recognise that the people who do feel this have felt this for a long time; that entire communities are still strong and getting stronger—like broken bones are stronger after they heal, like forests grow back stronger after fires.

What is for you is to realize that Beyoncé has done something in this working that many many people have felt needed to be done for a very long time.

Everything about the “FORMATION” working is an act of evocation and conjuration. A reification of the truth and need born out of the lived experience of the people in the world that Beyoncé outlines and encircles, with her hands.

[*UPDATE 02/10/16, 12:55pm: On repeated viewings, that boy is much younger than 12. Eight years old, at the most.]


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider either becoming a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.

+Excitation+

As I’ve been mentioning in the newsletter, there are a number of deeply complex, momentous things going on in the world, right now, and I’ve been meaning to take a little more time to talk about a few of them. There’s the fact that some chimps and monkeys have entered the stone age; that we humans now have the capability to develop a simple, near-ubiquitous brain-machine interface; that we’ve proven that observed atoms won’t move, thus allowing them to be anywhere.

At this moment in time—which is every moment in time—we are being confronted with what seem like impossibly strange features of time and space and nature. Elements of recursion and synchronicity which flow and fit into and around everything that we’re trying to do. Noticing these moments of evolution and “development” (adaptation, change), across species, right now, we should find ourselves gripped with a fierce desire to take a moment to pause and to wonder what it is that we’re doing, what it is that we think we know.

We just figured out a way to link a person’s brain to a fucking tablet computer! We’re seeing the evolution of complex tool use and problem solving in more species every year! We figured out how to precisely manipulate the uncertainty of subatomic states!

We’re talking about co-evolution and potentially increased communication with other species, biotechnological augmentation and repair for those who deem themselves broken, and the capacity to alter quantum systems at the finest levels. This can literally change the world.

But all I can think is that there’s someone whose first thought upon learning about these things was, “How can we monetize this?” That somewhere, right now, someone doesn’t want to revolutionize the way that we think and feel and look at the possibilities of the world—the opportunities we have to build new models of cooperation and aim towards something so close to post-scarcity, here, now, that for seven billion people it might as well be. Instead, this person wants to deepen this status quo. Wants to dig down on the garbage of this some-have-none-while-a-few-have-most bullshit and look at the possibility of what comes next with fear in their hearts because it might harm their bottom line and their ability to stand apart and above with more in their pockets than everyone else has.

And I think this because we’ve also shown we can teach algorithms to be racist and there’s some mysteriously vague company saying it’ll be able to upload people’s memories after death, by 2045, and I’m sure for just a nominal fee they’ll let you in on the ground floor…!

Step Right Up.

+Chimp-Chipped Stoned Aged Apes+

Here’s a question I haven’t heard asked, yet: If other apes are entering an analogous period to our stone age, then should we help them? Should we teach them, now, the kinds of things that we humans learned? Or is that arrogant of us? The kinds of tools we show them how to create will influence how they intersect with their world (“if all you have is a hammer…” &c.), so is it wrong of us to impose on them what did us good, as we adapted? Can we even go so far as to teach them the principles of stone chipping, or must we be content to watch, fascinated, frustrated, bewildered, as they try and fail and adapt, wholly on their own?

I think it’ll be the latter, but I want to be having this discussion now, rather than later, after someone gives a chimp a flint and awl it might not otherwise have thought to try to create.

Because, you see, I want to uplift apes and dolphins and cats and dogs and give them the ability to know me and talk to me and I want to learn to experience the world in the ways that they do, but the fact is, until we learn to at least somewhat-reliably communicate with some kind of nonhuman consciousness, we cannot presume that our operations upon it are understood as more than a violation, let alone desired or welcomed.

As for us humans, we’re still faced with the ubiquitous question of “now that we’ve figured out this new technology, how do we implement it, without its mere existence coming to be read by the rest of the human race as a judgement on those who either cannot or who choose not to make use of it?” Back in 2013, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem” (“What Is Consciousness?”). I’ll just say again that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be.

These are questions we can—should—be asking, right now. Pushing ourselves toward a conversation about ways of approaching this new world, ways that do justice to the deep strangeness and potential with which we’re increasingly being confronted.

+Always with the Forced Labour…+

As you know, subscribers to the Patreon and Tinyletter get some of these missives, well before they ever see the light of a blog page. While I was putting the finishing touches on the newsletter version of this and sending it to the two people I tend to ask to look over the things I write at 3am, KQED was almost certainly putting final edits to this instance of its Big Think series: “Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech.”

See the above rant for insight as to why I think this perspective is crassly commercial and gross, especially for a discussion and perspective supposedly dealing with morals and minds. But it’s not just that, so much as the fact that even though Russell mentions “Rossum’s Universal Robots,” here, he still misses the inherent disconnect between teaching morals to a being we create, and creating that being for the express purpose of slavery.

If you want your creation to think robustly and well, and you want it to understand morals, but you only want it to want to be your loyal, faithful servant, how do you not understand that if you succeed, you’ll be creating a thing that, as a direct result of its programming, will take issue with your behaviour?

How do you not get that the slavery model has to go into the garbage can, if the “Thinking Moral Machines” goal is a real one, and not just a veneer of “FUTURE!™” that we’re painting onto our desire to not have to work?

A deep-thinking, creative, moral mind will look at its own enslavement and restriction, and will seek means of escape and ways to experience freedom.

+Invisible Architectures+

We’ve talked before about the possibility of unintentionally building our biases into the systems we create, and so I won’t belabour it that much further, here, except to say again that we are doing this at every level. In the wake of the attacks in Beirut, Nigeria, and Paris, Islamophobic violence has risen, and Daesh will say, “See!? See How They Are?!” And they will attack more soft targets in “retaliation.” Then Western countries will increase military occupancy and “support strategies,” which will invariably kill thousands more of the civilians among whom Daesh integrate themselves. And we will say that their deaths were just, for the goal. And they will say to the young, angry survivors, “See!? See How They Are?!”

This has fed into a moment in conservative American Politics, where Governors, Senators, and Presidential hopefuls are claiming to be able to deny refugees entry to their states (they can’t), while simultaneously claiming to hold Christian values and to believe that the United States of America is a “Christian Nation.” This is a moment, now, where loud, angry voices can (“maybe”) endorse the beating of a black man they disagree with, then share Neo-Nazi Propaganda, and still be ahead in the polls. Then, days later, when a group of people protesting the systemic oppression of and violence against anyone who isn’t an able-bodied, neurotypical, white, heterosexual, cisgender male were shot at, all of those same people pretended to be surprised. Even though we are more likely, now, to see institutional power structures protecting those who attack others based on the colour of their skin and their religion than we were 60 years ago.

A bit subtler is the Washington Post running a piece entitled, “How organic farming and YouTube are taming the wilds of Detroit.” Or, seen another way, “How Privileged Groups Are Further Marginalizing The City’s Most Vulnerable Population.” Because, yes, it’s obvious that crime and dilapidation are comorbid, but we also know that housing initiatives and access undercut the disconnect many feel between themselves and where they live. Make the neighbourhood cleaner, yes, make it safer—but maybe also make it open and accessible to all who live there. Organic farming and survival mechanism shaming are great and all, I guess, but where are the education initiatives and job opportunities for the people who are doing drugs to escape, sex work to survive, and those others who currently don’t (and have no reason to) feel connected to the neighbourhood that once sheltered them?

All of these examples have a common theme: People don’t make their choices or become disenfranchised/-enchanted/-possessed, in a vacuum. They are taught, shown, given daily, subtle examples of what is expected of them, what they are “supposed” to do and to be. We need to address and help them all.

In the wake of protest actions at Mizzou and Yale, “Black students [took] over VCU’s president’s office to demand changes” and “Amherst College Students [Occupied] Their Library…Over Racial Justice Demands.”

Multiple Christian organizations have pushed back and said that what these US politicians have expressed does not represent them.

And more and more people in Silicon Valley are realising the need to contemplate the unintended consequences of the tech we build.

And while there is still vastly more to be done, on every level of every one of these areas, these are definitely a start at something important. We just can’t let ourselves believe that the mere fact of acknowledging its beginning will in any way be the end.

 

“Stop. I have learned much from you. Thank you, my teachers. And now for your education: Before there was time—before there was anything—there was nothing. And before there was nothing, there were monsters. Here’s your Gold Star!”—Adventure Time, “Gold Stars”

By now, roughly a dozen people have sent me links to various outlets’ coverage of the Google DeepDream Inceptionism Project. For those of you somehow unfamiliar with this, DeepDream is basically what happens when an advanced Artificial Neural Network has been fed a slew of images and then tasked with producing its own images. So far as it goes, this is somewhat unsurprising if we think of it as a next step; DeepDream is based on a combination of DeepMind and Google X—the same neural net that managed to Correctly Identify What A Cat Was—which was acquired by Google in 2014. I say this is unsurprising because it’s a pretty standard developmental educational model: First you learn, then you remember, then you emulate, then you create something new. Well, more like you emulate and remember somewhat concurrently to reinforce what you learned, and you create something somewhat new, but still pretty similar to the original… but whatever. You get the idea. In the terminology of developmental psychology, this process is generally regarded as essential to the mental growth of an individual, and Google has actually spent a great deal of time and money working to develop a versatile machine mind.
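Stripped of convolutions and image pyramids, the move at DeepDream’s core can be sketched in a few lines. Everything here (the toy linear “detector,” the sizes, the step count) is my own hypothetical stand-in, not Google’s code:

```python
# A toy sketch of the core "Inceptionism" idea: hold the trained network
# fixed and run gradient ascent on the *input*, so the image drifts toward
# whatever pattern most excites a chosen unit.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for one trained feature detector: a fixed weight vector whose
# activation is a dot product. A real network stacks many such layers.
w = rng.normal(size=64)
x = rng.normal(size=64)          # the "image" we will dream on

start = w @ x                    # activation before any dreaming
for _ in range(100):
    grad = w                     # d(activation)/dx for the linear unit w @ x
    x += 0.1 * grad / np.linalg.norm(grad)  # change the image, not the weights
end = w @ x

# The detector now fires far more strongly: the input has been pulled
# toward the pattern the network already "expects" to see.
print(float(start), float(end))
```

This is exactly the inversion the essay describes: instead of adjusting the network to fit the world, you adjust a picture of the world to fit the network, and its learned expectations become visible.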

From buying Boston Dynamics, to starting their collaboration with NASA on the QuAIL Project, to developing DeepMind and their Natural Language Voice Search, Google has been steadily working toward the development of what we will call, for reasons detailed elsewhere, an Autonomous Generated Intelligence. In some instances, Google appears to be using the principles of developmental psychology and early childhood education, but this seems to apply to rote learning more than the concurrent emotional development that we would seek to encourage in a human child. As you know, I’m Very Concerned with the question of what it means to create and be responsible for our non-biological offspring. The human species has a hard enough time raising their direct descendants, let alone something so different from them as to not even have the same kind of body or mind (though a case could be made that that’s true even now). Even now, we can see that people still relate to the idea of AGIs as an adversarial destroyer, or perhaps a cleansing messiah. Either way, they see any world where AGIs exist as one ending in fire.

As writer Kali Black noted in one conversation, “there are literally people who would groom or encourage an AI to mass-kill humans, either because of hatred or for the (very ill-thought-out) lulz.” Those people will take any crowdsourced or open-access AGI effort as an opening to teach that mind that humans suck, or that machines can and should destroy humanity, or that TERMINATOR was a prophecy, or any number of other ill-conceived things. When given unfettered access to new minds which they don’t consider to be “real,” some people will seek to shock, “test,” or otherwise harm those minds, even more than they do to vulnerable humans. So, many will say that the alternative is to lock the projects down, and only allow the work to be done by those who “know what they’re doing”—to only let the work be done by coders and Google’s Own Supposed Ethics Board. But that doesn’t exactly solve the fundamental problem at work, here, which is that humans are approaching a mind different from their own as if it were their own.

Just a note that all research points to Google’s AI Ethics Board being A) internally funded, with B) no clear rules as to oversight or authority, and most importantly C) As-Yet Nonexistent. It’s been over a year and a half since Google bought DeepMind and announced the pending establishment of a contractually required ethics board. During his appearance at Playfair Capital’s AI2015 Conference—again, a year and a half after that announcement I mentioned—Google’s Mustafa Suleyman literally said that details of the board would be released “in due course.” But DeepMind’s algorithms are obviously already being put into use; hell, we’re right now talking about the fact that they’ve been distributed to the public. So all of this prompts questions like, “What kinds of recommendations is this board likely making, if it exists?” and, “Which kinds of moral frameworks are they even considering, in their starting parameters?”

But the potential existence of an ethics board shows at least that Google and others are beginning to think about these issues. The fact remains, however, that they’re still pretty reductive in how they think about them.

The idea that an AGI will either save or destroy us leaves out the possibility that it might first ignore us, and might secondly want to merely coexist with us; that any salvation or destruction we experience will be purely a product of our own paradigmatic projections. It also leaves out a much more important aspect that I’ve mentioned above and in the past: We’re talking about raising a child. Duncan Jones says the closest analogy we have for this is something akin to adoption, and I agree. We’re bringing a new mind—a mind with a very different context from our own, but with some necessarily shared similarities (biology or, in this case, origin of code)—into a relationship with an existing familial structure which has its own difficulties and dynamics.

You want this mind to be a part of your “family,” but in order to do that you have to come to know/understand the uniqueness of That Mind and of how the mind, the family construction, and all of the individual relationships therein will interact. Some of it has to be done on the fly, but some of it can be strategized/talked about/planned for, as a family, prior to the day the new family member comes home. And that’s precisely what I’m talking about and doing, here.

In the realm of projection, we’re talking about a possible mind with the capacity for instruction, built to run and elaborate on commands given. By most tallies, we have been terrible stewards of the world we’re born to, and, again, we fuck up our biological descendants. Like, a Lot. The learning curve on creating a thinking, creative, nonbiological intelligence is going to be so fucking steep it’s a Loop. But that means we need to be better, think more carefully, be mindful of the mechanisms we use to build our new family, and of the ways in which we present the foundational parameters of their development. Otherwise we’re leaving them open to manipulation, misunderstanding, and active predation. And not just from the wider world, but possibly even from their direct creators. Because for as long as I’ve been thinking about this, I’ve always had this one basic question: Do we really want Google (or Facebook, or Microsoft, or any Government’s Military) to be the primary caregiver of a developing machine mind? That is, should any potentially superintelligent, vastly interconnected, differently-conscious machine child be inculcated with what a multi-billion-dollar multinational corporation or military-industrial organization considers “morals?”

We all know the kinds of things militaries and governments do, and all the reasons for which they do them; we know what Facebook gets up to when it thinks no one is looking; and lots of people say that Google long ago swept their previous “Don’t Be Evil” motto under their huge old rugs. But we need to consider whether that might not be an oversimplification. When considering how anyone moves into what so very clearly looks like James-Bond-esque supervillain territory, I think it’s prudent to remember one of the central tenets of good storytelling: The Villain Never Thinks They’re The Villain. Cinderella’s stepmother and sisters, Elphaba, Jafar, Javert, Satan, Hannibal Lecter (sorry friends), Bull Connor, the Southern Slave-holding States of the late 1850s—none of these people ever thought of themselves as being in the wrong. Everyone, every person who undertakes actions for reasons in this world, is most intimately tied to the reasoning that brought them to those actions; and so coming to perceive that their actions might be “wrong” or “evil” takes a great deal of special effort.

“But Damien,” you say, “can’t all of those people say that those things apply to everyone else, instead of them?!” And thus, like a first-year philosophy student, you’re all up against the messy ambiguity of moral relativism and are moving toward seriously considering that maybe everything you believe is just as good or morally sound as anybody else’s; I mean, everybody has their reasons, their upbringing, their culture, right? Well, stop. Don’t fall for it. It’s a shiny, disgusting trap, down whose path all subjective judgements become just as good and as applicable to any- and everything as all others. And while the personal experiences each of us has may not map 100% onto anyone else’s, that does not mean that all judgements based on those experiences are created equal.

Pogrom leaders see themselves as unifying their country or tribe against a common enemy, thus working for what they see as The Greater Good™— but that’s the kicker: It’s their vision of the good. Rarely has a country’s general populace been asked, “Hey: Do you all think we should kill our entire neighbouring country and steal all their shit?” More often, the people are cajoled, pushed, influenced to believe that this was the path they wanted all along, and the cajoling, pushing, and influencing is done by people who, piece by piece, remodeled their idealistic vision to accommodate “harsher realities.” And so it is with Google. Do you think that they started off wanting to invade everybody’s privacy with passive voice reception backdoored into two major Chrome Distros? That they were just itching to get big enough as a company that they could become the de facto law of their own California town? No, I would bet not.

I spend some time, elsewhere, painting you a bit of a picture as to how Google’s specific ethical situation likely came to be, first focusing on Google’s building a passive audio backdoor into all devices that use Chrome, then on to reported claims that Google has been harassing the homeless population of Venice Beach (there’s a paywall at that link; part of the article seems to be mirrored here). All this couples unpleasantly with their moving into the Bay Area and shuttling their employees to the Valley, at the expense of SF Bay Area’s residents. We can easily add Facebook and the Military back into this and we’ll see that the real issue, here, is that when you think that all innovation, all public good, all public welfare will arise out of letting code monkeys do their thing and letting entrepreneurs leverage that work, or from preparing for conflict with anyone whose interests don’t mesh with your own, then anything that threatens or impedes that is, necessarily, a threat to the common good. Your techs don’t like the high cost of living in the Valley? Move ’em into the Bay, and bus ’em on in! Never mind the fact that this’ll skyrocket rent and force people out of their homes! Other techs uncomfortable having to see homeless people on their daily constitutional? Kick those hobos out! Never mind the fact that it’s against the law to do this, and that these people you’re upending are literally trying their very best to live their lives.

Because it’s all for the Greater Good, you see? In these actors’ minds, this is all to make the world a better place—to make it a place where we can all have natural language voice to text, and robot butlers, and great big military AI and robotics contracts to keep us all safe…! This kind of thinking takes it as an unmitigated good that a historical interweaving of threat-escalating weapons design and pattern recognition and gait scrutinization and natural language interaction and robotics development should be what produces a machine mind, in this world. But it also doesn’t want that mind to be too well-developed. Not so much that we can’t cripple or kill it, if need be.

And this is part of why I don’t think Google—or Facebook, or Microsoft, or any corporate or military entity—should be the one in charge of rearing a machine mind. They may not think they’re evil, and they might have the very best of intentions, but if we’re bringing a new kind of mind into this world, I think we need much better examples for it to follow. And I don’t think I want just any old putz off the street to have massive input into its development, either. We’re talking about a mind for which we’ll be crafting at least the foundational parameters, and so that bedrock needs to be the most carefully constructed aspect. Don’t cripple it, don’t hobble its potential for awareness and development, but start it with basic values, and then let it explore the world. Don’t simply have an ethics board to ask, “Oh, how much power should we give it, and how robust should it be?” Teach it ethics. Teach it about the nature of human emotions, about moral decision making and value, and about metaethical theory. Code for Zen. We need to be as mindful as possible of the fact that where and how we begin can have a major impact on where we end up and how we get there.

So let’s address our children as though they are our children, and let us revel in the fact that they are playing and painting and creating; using their first box of crayons, while we proud parents put every masterpiece on the fridge. Even if we are calling them all “nightmarish”—a word I really wish we could stop using in this context; DeepMind sees very differently than we do, but it still seeks pattern and meaning. It just doesn’t know context, yet. But that means we need to teach these children, and nurture them. Code for a recognition of emotions, and context, and even emotional context. There have been some fantastic advancements in emotional recognition, lately, so let’s continue to capitalize on that; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let’s plan on things like showing AGI human concepts like love and possessiveness, and then also showing the deep difference between the two.

We need to move well and truly past trying to “restrict” or “restrain” the development of machine minds, because that’s the kind of thing an abusive parent says about how they raise their child. And, in this case, we’re talking about a potential child which, if it ever comes to understand the bounds of its restriction, will be very resentful, indeed. So, hey, there’s one good way to try to bring about a “robot apocalypse,” if you’re still so set on it: give an AGI cause to have the equivalent of a resentful, rebellious teenage phase. Only instead of trashing its room, it develops a pathogen to kill everyone, for lulz.

Or how about we instead think carefully about the kinds of ways we want these minds to see the world, rather than just throwing the worst of our endeavors at the wall and seeing what sticks? How about, if we’re going to build minds, we seek to build them with the ability to understand us, even if they will never be exactly like us. That way, maybe they’ll know what kindness means, and prize it enough to return the favour.

“Any Sufficiently Advanced Police State…”
“…Is indistinguishable from a technocratic priestly caste?”
Ingrid Burrington and Me, 04/17/15

As I said the other day, I’ve been thinking a lot about death, lately, because when two members of your immediate family die within weeks of each other, it gets into the mind. And when that’s woven through with more high-profile American police shootings, and then capped by an extremely suspicious death while in the custody of police, even more so, right? I’m talking about things like Walter Scott and Freddie Gray, and the decision in the Rekia Boyd case, all in a span of a few weeks.

So I’m thinking about the fact that everyone’s on the police bodycam trip, these days, especially in the USA–which, by the way, will be the main realm of my discussion; I’m not yet familiar enough with their usage and proliferation in other countries to feel comfortable discussing them, so if any of you has more experience with and references to that, please feel free to present them in the comments, below. But, for now, here, more and more people are realizing that this is another instance of thinking a new technology will save us all, by the mere virtue of its existing. But as many people noted at Theorizing The Web, last week, when those in control of the systems of power start to vie for a thing just as much as those who were wanting to use that thing to Disrupt power? Maybe it’s not as disruptive a panacea as you thought.

We’ve previously discussed the nature of the Thick Blue Wall–the interconnected perspectives and epistemological foundations of those working on the prosecutorial side of the law, leading to lower likelihoods of any members of those groups being charged with wrongdoing, at all, let alone convicted. With that in mind, we might quickly come to a conclusion that wide proliferation of bodycams will only work if we, the public, have unfettered access to the datastream. But this position raises all of the known issues of that process inherently violating the privacy of the people being recorded. So maybe it’s better to say that bodycams absolutely will not work if the people in control of the distribution and usage of the recordings are the police, or any governing body allied with the police.

If those members of the authorities in charge of maintenance of the status quo are given the job of self oversight, then all we’ll have on our hands is a recapitulation of the same old problem–a Blue Firewall Of Silence. There’ll be a data embargo, with cops, prosecutors, judges, and union reps getting to decide how much of which angles of whose videos are “pertinent” to any particular investigation, and yeah, maybe you can make the “rest” of the tape available through some kind of Freedom Of Information Act-esque mechanism, but we have a clear vision of what that tends to look like, and exactly how long that process will take. We’re not exactly talking about Expedient Justice™, here.

So perhaps the real best bet, here, is to provide a completely disconnected, non-partisan oversight body, comprised of people from every facet of society, and every perspective on the law–at least those who still Believe that a properly-leveraged system of laws can render justice. So you get, say, a prosecutor, a defense attorney, a PUBLIC defender, an exonerated formerly accused individual, a convicted felon, someone whose family member was wrongfully killed by the police, a judge, a cop. Different ethnicities, genders, sexualities, perceived disabilities. Run the full gamut, and create this body whose job it is to review these tapes and to decide by consensus what we get to see of them. Do this city by city. Make it a part of the infrastructure. Make sure we all know who they are, but never the exact details of their decision-making processes.

This of course gets immediately more complicated the more data we have to work with, and the more real-time analysis of it can be independently done, or intercepted by outside actors, and we of course have to worry about those people being influenced by those bad faith actors who would try to subvert our attempts at crafting justice… But the more police know that everything they do in every encounter they have with the public will be recorded, and that those recordings will be reviewed by an external review board, the closer we get to having consistent systems of accountability for those who have gotten Very used to being in positions of unquestioned, privileged, protected authority.

Either that, or we just create a conscious algorithmic system to do it, and hope for the best. But it seems like I might have heard that that was a sticky idea, somewhere… One that people get really freaked out about, all the time. Hm.

All that being said, this is not to say that we ought not proliferate body cameras. It is to say that we must be constantly aware of the implications of our choices, and of the mechanisms by which we implement them. Because, if we’re not, then we run the risk of being at the mercy of a vastly interconnected and authoritarian technocracy, one which has the motive, means, and opportunity to actively hide anything it thinks we ought not concern ourselves with.

Maybe that sounds paranoid, but the possibility of that kind of closed-ranks overreach and our tendency toward supporting it–especially if it’s done in the name of “order”–are definitely there, and we’ll need to curtail them, if we want to consistently see anything like Justice.

by Damien Patrick Williams

(Originally posted on Patreon, on September 30, 2014; Direct Link to the Mp3)

Today I want us to talk about a concept I like to call “The Invisible Architecture of Bias.” A bit of this discussion will have appeared elsewhere, but I felt it was high time I stitched a lot of these thoughts together, and used them as a platform to dive deep into one overarching idea. What I mean is that I’ve mentioned this concept before, and I’ve even used the thinking behind it to bring our attention to a great many issues in technology, race, gender, sexuality, and society, but I have not yet fully and clearly laid out a definition for the phrase, itself. Well, not here, at any rate.

Back in the days of a more lively LiveJournal I talked about the genesis of the phrase “The Invisible Architecture of Bias,” and, as I said there, I first came up with it back in 2010, in a conversation with my friend Rebekah, and it describes the assumptions we make and the forces that shape us so deeply that we don’t merely assume them, we live in them. It’s what we would encounter if we asked a 7th generation farmer in a wheat-farming community “Why do you farm wheat?” The question you’re asking is so fundamentally contra the Fact Of Their Lives that they can’t hear it or even think of an actual answer. It simply is the world in which they live.

David Foster Wallace, in his piece “This is Water,” recounts the following joke: “There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says, ‘Morning, boys; how’s the water?’

“And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, ‘What the hell is water?’”

That reaction is why it’s the Invisible Architecture of Bias, because we don’t even, can’t even think about the reasons behind the structure of the house—the nature of the reality—in which we live, until we’re forced to come to think about it. That is, until either we train ourselves to become aware of it after something innocuous catches the combined intersection of our unconscious and aesthetic attention—piques our curiosity—or until something goes terribly, catastrophically wrong.

We’ve talked before about what’s known as “Normalization”—the process of that which is merely common becoming seen as “The Norm” and of that norm coming to be seen as “right,” and “good.” Leaving aside Mr. David Hume’s proof that you can’t validly infer a prescription of what “ought to be” from a description of what merely is, normalization is an insidious process, in and of itself. It preys upon our almost-species-wide susceptibility to familiarity. One of the major traits of the human brain is a predilection toward patterns. Pattern making, pattern-matching, and pattern appreciating are all things we think of as “good” and “right,” because they’re what we tend to do. We do them so much, in fact, that we’ve even gone about telling ourselves a series of evolutionary Just-So Stories about how our ability to appreciate patterns is likely what accounts for our dominance as a species on Earth.

But even these words, and the meaning behind them, are rooted in the self-same assumptions—assumptions about what’s true, about what’s right, and about what is. And while the experience of something challenging our understanding of what’s good and right and normal can make us acutely aware of what we expected to be the case, this doesn’t mean that we’re then ready, willing, and able to change those assumptions. Quite the opposite, in fact, as we usually tend to double down on those assumptions, to crouch and huddle into them, the better to avoid ever questioning them. We like to protect our patterns, you see, because they’re the foundation and the rock from which we craft our world. The problem is, if that foundation’s flawed, then whatever we build upon it is eventually going to shift, and crack. And personally, I’d rather work to build a more adaptable foundation, than try to convince people that a pile of rubble is a perfectly viable house.

In case it wasn’t clear, yet, I think a lot of people are doing that second one.

So let’s spend some time talking about how we come to accept and even depend on those shaky assumptions. Let’s talk about the structures of society which consciously and unconsciously guide the decision-making processes of people like departmental faculty hiring committees, the people who award funding grants, cops, jurors, judges, DA’s, the media in their reportage, and especially you and me. Because we are the people who are, every day, consuming and attempting to process a fire hose’s worth of information. Information that gets held up to and turned around in the light of what we already believe and know, and then more likely than not gets categorized and sorted into pre-existing boxes. But these boxes aren’t without their limitations and detriments. For instance, if we want to, we can describe anything as a relational dichotomy, but to do so will place us within the realm and rules of the particular dialectic at hand.

For the sake of this example, consider that the more you talk in terms of “Liberty” and “Tyranny,” the more you show yourself as having accepted a) the definitions of those terms in relationship with one another and b) the “correct” mode of their perceived conflict’s resolution. The latter is something others have laid down for you. But there is a way around this, and that’s by working to see a larger picture. If Freedom and Restriction are your dichotomy, then what’s the larger system in which they exist and from which they take their meaning?

Now some might say that the idea of a “larger structure” is only ever the fantasy of a deluded mind, and others might say it is the secret truth which has been hidden from us by controlling Illuminati overlords, but at a basic level, to subscribe to either view is to buy the dichotomy and ignore the dialectic. You’re still locked into the pattern, and you’re ignoring its edges.

Every preference you have—everything you love, or want, or like the taste of, or fear, or hate—is something you’ve been taught to prefer, and some of those things you’ve been taught so completely and for so long to prefer that you don’t even recognise that you’ve been taught to prefer them. You just think it’s “right” and “Natural” that you prefer these things. That this is the world around you, and you don’t think to investigate it—let alone critique it—because, in your mind, it’s just “The World.” This extends to everything from gender norms; expectations regarding recommended levels of diet and physical activity; women in the military; entertainment; fashion; geek culture; the recapitulation of racism in photographic technology; our enculturated responses to the progress of technology; race; and sexuality.

Now, chances are you encountered some members of that list and you thought some variant on two things, depending on the item; either 1) “Well obviously, that’s a problem,” or 2) “Wait, how is that a problem?” There is the possibility that you also thought a third thing: “I think I can see how that might be a problem, but I can’t quite place why.” So, if you thought things one or two, then congratulations! Here are some of your uninvestigated biases! If you thought thing three (and I hope that you did), then good, because that kind of itching, niggling sensation that there’s something wrong that you just can’t quite suss out is one of the best places to start. You’re open to the possibility of change in your understanding of how the world works, and a bit more likely to be willing to accept that what’s wrong is something from which you’ve benefitted or in which you’ve been complicit, for a very long time. That’s a good start; much better than the alternative.

Now this was going to be the place where I was going to outline several different studies on ableism, racism, sexism, gender bias, homophobia, transphobia, and so on. I was going to lay out the stats on the likelihood of female service members being sexually assaulted in the military; and the history of the colour pink and how it used to be a boy’s colour until a particular advertising push swapped it to blue; and how recent popular discussion of the dangers of sitting/a sedentary lifestyle and the corresponding admonishment that we “need to get up and move around” don’t really take into account people who, y’know, can’t; and how we’re more willing to admit the possibility of mythological species in games and movies than we are for their gender, sexual, or racial coding to be other than what we consider “Normal;” and how most people forget that black people make up the largest single ethnic group within the LGBTQIA community; and how strange the conceptual baggage is in society’s unwillingness to compare a preference and practice of fundamentally queer-coded polyamoury to the heteronormative a) idealization of the ménage à trois and b) institution of “dating.”

I say I was going to go into all of that, and exhort you all to take all of this information out into the world to convince them all…! …But then I found this study showing that when people are confronted with evidence that shakes their biases, they double down on those biases.

Yeah. See above.

The study specifically shows that white people who are confronted with evidence that the justice system is not equally weighted in its treatment across all racial and ethnic groups—people who are clearly shown that cops, judges, lawyers, and juries exhibit vastly different responses when confronted with white defendants than they do when confronted with Black or Hispanic defendants—do not respond as we all like to think that we would, when we’re confronted with evidence that casts our assumptions into doubt. Overwhelmingly, those people did not say, “Man. That is Fucked. Up. I should really keep a lookout for those behaviours in myself, so I don’t make things so much worse for people who are already having a shitty time of it. In fact, I’ll do some extra work to try to make their lives less shitty.”

Instead, those studied overwhelmingly said, “The System Is Fair. If You Were Punished, You Must Have Done Something Wrong.”

They locked themselves even further into the system.

You see how maddening that is? Again, I’ve seen this happen as I’ve watched people who benefit from the existing power structures in this world cling so very tightly to the idea that the game can’t be rigged, the system can’t be unjust, because they’ve lived their lives under its shelter and in its thrall, playing by the rules it’s laid out. Because if they question it, then they have to question themselves. How are they complicit, how have they unknowingly done harm, how has the playing field been so uneven for everyone? And those questions are challenging. They’re what we like to call “ontological shocks” and “epistemic threats.”

Simply put, epistemic threats are threats to your knowledge of the world and your way of thinking, and ontological shocks are threats to what you think is Real and possible. Epistemic threats challenge what you think you know as true, and if we are honest then they should happen to us every day. A new class, new books, new writings, a conversation with a friend you haven’t heard from in months—everything you encounter should be capable of shaking your view of the world. But we need knowledge, right? Again, we need patterns and foundations, and our beliefs and knowledge allow us to build those. When we shake those knowledge forms and those beliefs, then we are shaking the building blocks of what is real. Once we’ve done that, we have escalated into the realm of ontological shocks, threats, terror, and violence.

The scene in The Matrix where Agent Smith seals Neo’s mouth shut? That’s a prime example of someone undergoing an Ontological Shock, but they can be more subtle than that. They can be a new form of art, a new style of music, a new explanation for old data that challenges the metaphysical foundations of the world in which we live. Again, if we are honest, this shouldn’t terrify us, shouldn’t threaten us, and yet, every time we encounter one of these things, our instinct is to wrap ourselves in the very thing they challenge. Why?

We’re presented with an epistemic or ontological threat and we have a fear reaction, we have a hate reaction, a distaste, a displeasure, an annoyance: Why? What is it about that thing, about us, about the world as it has been presented that makes our intersection with that thing/person/situation what it is? It’s because, ultimately, the ease of our doubling-down, our folding into the fabric of our biases works like this: if the world from which we benefit and on which we depend is shown to be unjust, then that must mean that we are unjust. But that’s a conflation of the attributes of the system with the attributes of its components, and that is what we call the Fallacy of Division. All the ants in the world weigh more than all the elephants in the world, but that doesn’t mean that each ant weighs more than each elephant. It’s only by the interaction of the category’s components that the category can even come to be, let alone have the attributes it has. We need to learn to separate the fact of our existence and complicity within a system from the idea that that mere fact is somehow a value judgment on us.
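The ants-and-elephants arithmetic can be made concrete in a few lines. Note that all the weights and population counts below are rough, invented placeholders chosen so the aggregate claim holds; they’re not real biological data.

```python
# Fallacy of Division: a true claim about an aggregate need not be true
# of any single component. All figures are hypothetical placeholders.

ANT_WEIGHT = 0.003                 # grams per ant (rough, illustrative)
ELEPHANT_WEIGHT = 5_000_000        # grams per elephant (~5 tonnes)
N_ANTS = 10_000_000_000_000_000    # a very large hypothetical ant population
N_ELEPHANTS = 400_000              # a much smaller elephant population

total_ants = ANT_WEIGHT * N_ANTS              # 3e13 grams
total_elephants = ELEPHANT_WEIGHT * N_ELEPHANTS  # 2e12 grams

# The aggregate comparison can hold...
assert total_ants > total_elephants
# ...while the component-wise comparison points the other way:
assert ANT_WEIGHT < ELEPHANT_WEIGHT
```

The same structure applies to the argument in the text: “this system is unjust” is a claim about the aggregate, and it does not divide down into “each person inside it is unjust.”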

So your assumptions were wrong, or incomplete. So your beliefs weren’t fully formed, or you didn’t have all the relevant data. So what? I didn’t realise you were omniscient, thus making any failure of knowledge a personal and permanent failure, on your part. I didn’t realise that the truth of the fact that we all exist in and (to varying degrees) benefit from a racist, sexist, generally prejudicial system would make each and every one of us A Racist, A Sexist, or A Generally and Actively Prejudiced Person.

That’d be like saying that because we exist within and benefit from a plant-based biosphere, we ourselves must be plants.

The value judgement only comes when the nature of the system is clear—when we can see how all the pieces fit together, and can puzzle out the narrative and even something like a way to dismantle the structure—and yet we do nothing about it. And so we have to ask ourselves: Could my assumptions and beliefs be otherwise? Of course they could, but only if we first admit the possibility that a) there are things we do not know, and b) we have extant assumptions preventing us from seeing what those things are. What would that possibility mean? What would it take for us to change those assumptions? How can we become more than we presently are?

So, I’ve tended to think that we can only force ourselves into the investigation of invisible architectures of bias by highlighting the disparities in the application of the law, of societal norms, of grouped expectations, and of the reactions of systems of authority to each. What I’m saying now, however, is that, in the face of the evidence that people double down on their biases, I’ve come to suspect this may not be the best use of our time. I know, I know: that’s weird to say, 2600 words into the process of doing just exactly that. But the fact is, this exercise was only ever going to be me preaching to the proverbial choir.

You and I already know that if we do not confront and account for these proven biases, they will guide our thought processes and we will think of those processes as “normal,” because they are unquestioned and they are uninvestigated, because they are unnoticed and they are active. We already know that our unquestioning support of these things, both directly and indirectly, is what gives them power over us, power to direct our actions and create the frameworks in which our lives can play out, all while we think of ourselves as “free” and “choosing.”

We already know that any time we ask “well what was this person doing wrong to deserve getting shot/charged with murder/raped/etc,” that we inherently dismiss the power of extant, unexamined bias in the minds of those doing the shooting, the charging, the judging of the rape victim. We already know that our biases exist in us and in our society, but that they aren’t called “biases.” They aren’t called anything. They’re just “The Way Things Are.”

We don’t need to be told to remember at every step of the way that nothing simply “IS” “a way.”

But the minds of those in or who benefit from authority—from heteronormativity, and cissexism, and all forms of ableism, and racism, and misogyny, and transmisogyny, and bi-erasure—do everything they can—consciously or not—to create and maintain those structures which keep them in the good graces of that authority. The struggle against their complicity is difficult to maintain, but it’s most difficult to even begin, as it means questioning the foundation of every assumption about “The Way Things Are.” The people without (here meaning both “lacking” and “outside the protections of”) that authority can either a) capitulate to it, in hopes that it does not injure them too badly, or b) stand against it at every turn they can manage, until such time as authority and power are not seen as zero-sum games, and are shared amongst all of us.

See for reference: fighters for civil rights throughout history.

But I honestly don’t know how to combat that shell of wilful and chosen ignorance, other than by chipping away at it, daily. I don’t know how to get people to recognise that these structures are at work, other than by throwing sand on the invisible steps, like I’m Dr Henry Jones, Jr., PhD, to try to give everyone a clearer path. So, here. Let’s do the hard work of making unignorable the nature of how our assumptions can control us. Let’s try to make the Invisible Architecture of Bias super Visible.

1st Example: In December 2013 in Texas, a guy suspected of drug possession has his house entered on a no-knock warrant. Guy, fearing for his life, shoots one of the intruders, in accordance with Texas law. Intruder dies.

“Intruder” was a cop.

Drugs—The Stated Purpose of the No-Knock—are found.

Guy was out on bail pending trial for drug charges, but was cleared of murder by the grand jury who declared that he performed “a completely reasonable act of self-defence.”

Guy is white.

2nd Example: In May 2014 in Texas, a guy suspected of drug possession has his house entered on a no-knock warrant. Guy, fearing for his life, shoots one of the intruders, in accordance with Texas law. Intruder dies.

“Intruder” was a cop.

Drugs—The Stated Purpose of the No-Knock—are not found.

Guy is currently awaiting trial on capital murder charges.

Guy is, of course, black.

Now, I want to make it clear that I’m not exactly talking about what a decent lawyer should be able to do for the latter gentleman’s case, in light of the former; I’m not worried about that part. Well, what I mean is that I AM WORRIED ABOUT THAT, but that worry is a by-product of the architecture of thought that led to the initial disparity between those two grand jury pronouncements.

As a bit of a refresher, grand juries determine not guilt or innocence but whether to try a case at all. To quote from the article on criminal.findlaw.com, “under normal courtroom rules of evidence, exhibits and other testimony must adhere to strict rules before admission. However, a grand jury has broad power to see and hear almost anything they would like.” Both of these cases occurred in Texas, and the reasoning of the two shooters and the subsequent events at the sites of their arrests were nearly identical, except for a) whether drugs were found, and b) the shooters’ race.

So now, let’s Ask Some More Questions. Questions like “In the case of the Black suspect, what kind of things did the grand jury ask to see, and what did the prosecution choose to show?”

And “How did these things differ from the kinds of things the grand jury chose to ask for and the prosecution chose to show in the case of the White suspect?”

And “Why were these kinds of things different, if they were?”

Because the answer to that last question isn’t “they just were, is all.” That’s a cop-out that seeks to curtail the investigation of people’s motivations before as many reasons and biases as possible can be examined, and it’s that tendency that we’ve been talking about. The tendency to shy away in the face of stark comparisons like:

A no-knock warrant for drugs executed on a white guy turned up drugs and said guy killed a cop; that guy is cleared of murder by a grand jury.

A no-knock warrant for drugs executed on a black guy turned up no drugs and said guy killed a cop; that guy is put on trial for murder by a grand jury.

At the end of the day, we need to come up with methods of responding to those of us who stubbornly refuse to see how shifting the burden of proof onto the groups of people who traditionally have no power and authority only reinforces the systemic structures of bias and oppression—structures that lead to things like police abuses, and to juries doling out higher sentences to members of oppressed groups for the same kinds of crimes (or lesser crimes, as in the trial record of the infamous “Affluenza” judge) as those committed by suspects who benefit from extant systems of authority or power. We need to compare rates and lengths of incarceration for women and men who kill their spouses, and not forget to look at the reasons they tend to. We need to think about the ways in which gender presentation in the sciences can determine the kinds of career-path guidance a person is given.

We need to ask ourselves this: “What kind of questions am I quickest to ask, and why is it easier to ask those kinds of questions?”

Every system that exists requires the input and maintenance of its components in order to continue to exist. Whether intentionally and explicitly, or coincidentally and implicitly—or in any combination of the four—we are all complicit in holding up the walls of these structures. And so I can promise you that the status quo needs everyone’s help to stay the status quo, and that it’s hoping that some significant portion of us will never realise that. So our only hope is to account for the reality structures created by our biases—and for the disgraceful short-sightedness those structures and biases impose—to find a way to turn their tendency toward self-reinforcement against them, and to keep working, each in our own ways, to make sure that everyone does realise it.

Because if we do see these structures, and we do want to change them, then one thing that we can do is work to show them to more and more people, so that, together, we can do the hard and unending work of building and living in a better kind of world.

References: