facial recognition

All posts tagged facial recognition

So with the job of White House Office of Science and Technology Policy director having gone to Dr. Arati Prabhakar back in October, rather than Dr. Alondra Nelson, and the release of the “Blueprint for an AI Bill of Rights” (henceforth “BfaAIBoR” or “blueprint”) a few weeks after that, I am both very interested in and pretty worried about what direction research into “artificial intelligence” is actually going to take from here.

To be clear, my fundamental problem with the “Blueprint for an AI Bill of Rights” is that while it pays pretty fine lip service to the ideas of community-led oversight, transparency, and the abolition of (and abstaining from developing) certain tools, it begins with, and repeats throughout, the idea that sometimes law enforcement, the military, and the intelligence community might need to just… ignore these principles. Additionally, Dr. Prabhakar was director of DARPA for roughly five years, between 2012 and 2017, and considering what I know for a fact got funded within that window? Yeah.

To put a finer point on it, 14 out of 16 uses of the phrase “law enforcement” and 10 out of 11 uses of “national security” in this blueprint are in direct reference to why those entities’ or concept structures’ needs might have to supersede the recommendations of the BfaAIBoR itself. The blueprint also doesn’t mention the depredations of extant military “AI” at all. Instead, it points to the idea that the Department of Defense (DoD) “has adopted [AI] Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its [national security and defense] activities.” And so, with all of that being the case, there are several current “AI” projects in the pipeline which a blueprint like this wouldn’t cover, even if it ever became policy, and frankly that just fundamentally undercuts much of the real good a project like this could do.

For instance, at present, the DoD’s ethical frames are entirely about transparency, explainability, and some lip service around equitability and “deliberate steps to minimize unintended bias in AI …” To understand a bit more of what I mean by this, here’s the DoD’s “Responsible Artificial Intelligence Strategy…” pdf (which is not natively searchable and I had to OCR myself, so heads-up); and here’s the Office of the Director of National Intelligence’s “ethical principles” for building AI. Note that not once do they consider the moral status of the biases and values they have intentionally baked into their systems.

An "Explainable AI" diagram from DARPA, showing two flowcharts, one on top of the other. The top one is labeled "Today" and has the top-level condition "task" branching to both a confused-looking human user and a state called "learned function," which is determined by a previous state labeled "machine learning process," which is in turn determined by a state labeled "training data." "Learned function" feeds a "decision or recommendation" to the human user, who has several questions about the model's behaviour, such as "Why did you do that?" and "When can I trust you?" The bottom one is labeled "XAI" and has the top-level condition "task" branching to both a happy, confident-looking human user and a state called "explainable model/explanation interface," which is determined by a previous state labeled "new machine learning process," which is in turn determined by a state labeled "training data." The "explainable model/explanation interface" feeds choices to the human user, who can feed responses BACK to the system, and who has several confident statements about the model's behaviour, such as "I understand why" and "I know when to trust you."

An “Explainable AI” diagram from DARPA

Continue Reading

I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way

by Damien P. Williams

I want to let you in on a secret: According to Silicon Valley’s AIs, I’m not human.

Well, maybe they think I’m human, but they don’t think I’m me. Or, if they think I’m me and that I’m human, they think I don’t deserve expensive medical care. Or that I pose a higher risk of criminal recidivism. Or that my fidgeting behaviours or culturally-perpetuated shame about my living situation or my race mean I’m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn’t be given a home loan or a job interview or the benefits I need to stay alive.

Now, to be clear, “AI” is a misnomer, for several reasons, but we don’t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or to be a mind— especially because we need to take our time talking about why values and beliefs matter to conversations about “AI,” at all. So instead of “AI,” let’s talk specifically about algorithms, and machine learning.

Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. These techniques allow those systems to make sought-after predictions based on the datasets they’re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.
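Just to make that “learn the patterns, then extrapolate” step concrete, here’s a minimal, hypothetical sketch. The use of scikit-learn and the invented toy dataset are my choices for illustration, not anything drawn from a real deployed system:

```python
# A toy "machine learning" pipeline: learn a pattern from labeled
# examples, then extrapolate it to new cases as a probability.
from sklearn.linear_model import LogisticRegression

# Invented training data: each row is [hours_studied, classes_missed],
# each label is whether that (fictional) student passed an exam.
X_train = [[10, 0], [8, 1], [2, 5], [1, 6], [7, 2], [3, 4]]
y_train = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # "training": fit the pattern in the data

# "Prediction": extrapolate the learned pattern to a case it has never seen.
new_student = [[5, 3]]
print(model.predict(new_student))        # a predicted outcome, e.g. [1] or [0]
print(model.predict_proba(new_student))  # modeled likelihoods for each outcome
```

The point of the sketch is the shape of the process: whatever patterns (and biases) live in the training rows are exactly what the model has available to extrapolate from.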

Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.
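And as a rough illustration of that feedback loop (not of any actual platform’s ranking code), here’s a hypothetical sketch in which what gets shown and what gets clicked are fed back into the scores that shape the next round of results; the item names and scoring rules are invented:

```python
# A toy feedback loop: rank items, "show" the top ones, then feed the
# user's clicks back into the scores so the next ranking shifts.
scores = {"post_a": 1.0, "post_b": 1.0, "post_c": 1.0}

def rank(scores, k=2):
    # Sort items by current score and surface the top k.
    return sorted(scores, key=scores.get, reverse=True)[:k]

def update(scores, shown, clicked, boost=0.5, decay=0.1):
    # Feed the outcome back in: clicked items gain score,
    # shown-but-ignored items lose a little.
    for item in shown:
        if item in clicked:
            scores[item] += boost
        else:
            scores[item] -= decay

for round_number in range(3):
    shown = rank(scores)
    clicked = {"post_b"} & set(shown)   # pretend the user only ever clicks post_b
    update(scores, shown, clicked)
    print(round_number, shown, scores)
```

The loop is trivial, but it shows why “carefully curated” results drift over time: each round’s output becomes part of the next round’s input.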

And so, in these systems’ defense, it’s no surprise that they think the way they do: That’s exactly how we’ve told them to think.

[Image of Michael Emerson as Harold Finch, in season 2, episode 1 of the show Person of Interest, “The Contingency.” His face is framed by a box of dashed yellow lines, the words “Admin” to the top right, and “Day 1” in the lower right corner.]

Continue Reading

Much of my research deals with the ways in which bodies are disciplined and how they go about resisting that discipline. In this piece, adapted from one of the answers to my PhD preliminary exams written and defended two months ago, I “name the disciplinary strategies that are used to control bodies and discuss the ways that bodies resist those strategies.” Additionally, I address how strategies of embodied control and resistance have changed over time, and how identifying and existing as a cyborg and/or an artificial intelligence can be understood as a strategy of control, resistance, or both.

In Making Natural Knowledge, Jan Golinski spends some time discussing the different understandings of the word “discipline” and the role their transformations have played in the definition and transmission of knowledge as both artifacts and culture. In particular, he uses the space in section three of chapter two to discuss the role Foucault has played in historical understandings of knowledge, categorization, and disciplinarity. Using Foucault’s work in Discipline and Punish, we can draw an explicit connection between the various meanings of “discipline” and the ways that bodies are individually, culturally, and socially conditioned to fit particular modes of behavior, and the specific ways marginalized peoples are disciplined, relating to their various embodiments.

This will demonstrate how modes of observation and surveillance lead to certain types of embodiments being deemed “illegal” or otherwise unacceptable and thus further believed to be in need of methodologies of entrainment, correction, or reform in the form of psychological and physical torture, carceral punishment, and other means of institutionalization.

Locust, “Master and Servant (Depeche Mode Cover)”

Continue Reading

Below are the slides, audio, and transcripts for my talk ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019.

(Cite as: Williams, Damien P. ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this, soon, which I can provide as a preprint draft if you ask, and which can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]


All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and which are, c) frankly wrongheaded, and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but about the processes by which, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized into what they come to think is the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:

The Audio:

[Direct Link to Mp3]

And the Transcript is here below the cut:

Continue Reading

Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.

The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as getting better at dealing with these issues in the university and public policy sectors in America, rather than getting seemingly worse at it, as we have so far.

Here’s the schedule. Full notes, below the cut.

Friday, June 8, 2018

  • Josh Brown on “the distinction between passive and active AI.”
  • Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
  • Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
  • Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
  • Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
  • Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and  how this likely influences the ‘education’ of AI systems.”
  • Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.

Saturday, June 9, 2018

  • Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
  • Ras Michael Brown on using the religious technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
  • Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
  • Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
  • Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
  • Emma Stamm on the datafication of the self and what about us might be uncomputable.
  • Joshua Earle on “Morphological Freedom.”

Continue Reading

This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however, I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it, elsewhere. I’ll just say that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it and what would it be able to ask from us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:

Continue Reading

[Originally Published at Eris Magazine]

So Gabriel Roberts asked me to write something about police brutality, and I told him I needed a few days to get my head in order. The problem being that, with this particular topic, the longer I wait on this, the longer I want to wait on this, until, eventually, the avoidance becomes easier than the approach by several orders of magnitude.

Part of this is that I’m trying to think of something new worth saying, because I’ve already talked about these conditions, over at A Future Worth Thinking About. We talked about this in “On The Invisible Architecture of Bias,” “Any Sufficiently Advanced Police State…,” “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences,” and most recently in “On the European Union’s “Electronic Personhood” Proposal.” In these articles, I briefly outlined the history of systemic bias within many human social structures, and the possibility and likelihood of that bias translating into our technological advancements, such as algorithmic learning systems, use of and even access to police body camera footage, and the development of so-called artificial intelligence.

Long story short, the endemic nature of implicit bias in society as a whole, plus the even more insular Us-Vs-Them mentality within the American prosecutorial legal system, plus the fact that American policing was literally born out of slavery and built on the work of groups like the KKK, equals a series of interlocking systems in which people who are not white-passing, not male-perceived, not straight-coded, not “able-bodied” (what we can call white supremacist, ableist, heteronormative, patriarchal hegemony, but we’ll just use the acronym WSAHPH, because it satisfyingly recalls that bro-ish beer advertising campaign from the late ’90s and early 2000s) stand a far higher likelihood of dying at the hands of agents of that system.

Here’s a quote from Sara Ahmed in her book The Cultural Politics of Emotion, which neatly sums this up:

“[S]ome bodies are ‘in an instant’ judged as suspicious, or as dangerous, as objects to be feared, a judgment that can have lethal consequences. There can be nothing more dangerous to a body than the social agreement that that body is dangerous.”

At the end of this piece, I’ve provided some of the same list of links that sits at the end of “On The Invisible Architecture of Bias,” just to make it that little bit easier for us to find actual evidence of what we’re talking about, here, but, for now, let’s focus on these:

A Brief History of Slavery and the Origins of American Policing
2006 FBI Report on the infiltration of Law Enforcement Agencies by White Supremacist Groups
June 20, 2016 “Texas Officers Fired for Membership in KKK”

And then we’ll segue to the fact that we are, right now, living through the exemplary problem of the surveillance state. We’ve always been told that cameras everywhere will make us all safer, that they’ll let people know what’s going on and that they’ll help us all. People doubted this, even in Orwell’s day, noting that the more surveilled we are, the less freedom we have, but more recently people have started to hail this from the other side: Maybe videographic oversight won’t help the police help us, but maybe it will help keep us safe from the police.

But the sad fact of the matter is that there’s video of Alton Sterling being shot to death while restrained, and video of John Crawford III being shot to death by a police officer while holding a toy gun down at his side in a big box store where it was sold, and there’s video of Alva Braziel being shot to death while turning around with his hands up as he was commanded to do by officers, of Eric Garner being choked to death, of Delrawn Small being shot to death by an off-duty cop who cut him off in traffic. There’s video of so damn many deaths, and nothing has come of most of them. There is video evidence showing that these people were well within their rights, and in lawful compliance with officers’ wishes, and they were all shot to death anyway, in some cases by people who hadn’t even announced themselves as cops, let alone ones under some kind of perceived threat.

The surveillance state has not made us any safer, it’s simply caused us to be confronted with the horror of our brutality. And I’d say it’s no more than we deserve, except that even with protests and retaliatory actions, and escalations to civilian drone strikes, and even Newt fucking Gingrich being able to articulate the horrors of police brutality, most of those officers are still on the force. Many unconnected persons have been fired, for indelicate pronouncements and even white supremacist ties, but how many more are still on the force? How many racist, hateful, ignorant people are literally waiting for their chance to shoot a black person because he “resisted” or “threatened?” Or just plain disrespected. And all of that is just what happened to those people. What’s distressing is that those much more likely to receive punishment, however unofficial, are the ones who filmed these interactions and provided us records of these horrors, to begin with. Here, from Ben Norton at Salon.com, is a list of what happened to some of the people who have filmed police killings of non-police:

Police have been accused of cracking down on civilians who film these shootings.

Ramsey Orta, who filmed an NYPD cop putting unarmed black father Eric Garner in a chokehold and killing him, says he has been constantly harassed by police, and now faces four years in prison on drugs and weapons charges. Orta is the only one connected to the Garner killing who has gone to jail.

Chris LeDay, the Georgia man who first posted a video of the police shooting of Alton Sterling, also says he was detained by police the next day on false charges that he believes were a form of retaliation.

Early media reports on the shooting of Small uncritically repeated the police’s version of the incident, before video exposed it to be false.

Wareham noted that the surveillance footage shows “the cold-blooded nature of what happened, and that the cop’s attitude was, ‘This was nothing more than if I had stepped on an ant.'”

As we said, above, black bodies are seen as inherently dangerous and inhuman. This perception is trained into officers at an unconscious level, and is continually reinforced throughout our culture. Studies like the Implicit Association Test, this survey of U.Va. medical students, and this one of shooter bias all clearly show that people are more likely to a) associate words relating to evil and inhumanity with, b) think pain receptors work in a fundamentally different fashion within, and c) shoot more readily at bodies that do not fit within WSAHPH. To put that a little more plainly, people have a higher tendency to think of non-WSAHPH bodies as fundamentally inhuman.

And yes, as we discussed, in the plurality of those AFWTA links, above, there absolutely is a danger of our passing these biases along not just to our younger human selves, but to our technology. In fact, as I’ve been saying often, now, the danger is higher, there, because we still somehow have a tendency to think of our technology as value-neutral. We think of our code and (less these days) our design as some kind of fundamentally objective process, whereby the world is reduced to lines of logic and math, and that simply is not the case. Codes are languages, and languages describe the world as the speaker experiences it. When we code, we are translating our human experience, with all of its flaws, biases, perceptual glitches, errors, and embellishments, into a technological setting. It is no wonder then that the algorithmic systems we use to determine the likelihood of convict recidivism and thus their bail and sentencing recommendations are seen to exhibit the same kind of racially-biased decision-making as the humans they learned from. How could this possibly be a surprise? We built these systems, and we trained them. They will, in some fundamental way, reflect us. And, at the moment, not much terrifies me more than that.
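To make that “we trained them, so they will reflect us” point mechanically concrete, here is a deliberately artificial sketch, with fabricated numbers and no connection to any real risk-assessment tool: when the historical labels a model learns from already encode a group disparity, the model reproduces that disparity for new cases. The choice of logistic regression and the synthetic data are mine, purely for illustration.

```python
# A deliberately artificial illustration: when training labels already
# encode a group disparity, the trained model reproduces that disparity.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Synthetic "historical" records: feature 0 is a group marker (0 or 1),
# feature 1 is noise. The invented labels flag group 1 far more often,
# standing in for biased past decisions, not for any real behavior.
X, y = [], []
for _ in range(500):
    group = random.randint(0, 1)
    noise = random.random()
    label = 1 if random.random() < (0.7 if group == 1 else 0.2) else 0
    X.append([group, noise])
    y.append(label)

model = LogisticRegression().fit(X, y)

# The modeled "risk" for two otherwise-identical new cases differs only
# because of the group marker the historical labels were biased on.
print(model.predict_proba([[0, 0.5]])[0][1])  # modeled likelihood for group 0
print(model.predict_proba([[1, 0.5]])[0][1])  # modeled likelihood for group 1
```

Nothing in that model is “hateful”; it simply extrapolates the pattern it was handed, which is exactly the problem.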

Last week saw the use of a police bomb squad robot to kill an active shooter. Put another way, we carried out a drone strike on a civilian in Dallas, because we “saw no other option.” So that’s in the Overton Window, now. And the fact that it was in response to a shooter who was targeting any and all cops as a mechanism of retribution against police brutality and violence against non-WSAHPH bodies means that we have thus increased the divisions between those of us who would say that anti-police-overreach stances can be held without hating the police themselves and those of us who think that any perceived attack on authorities is a real, existential threat, and thus deserving of immediate destruction. How long do we really think it’s going to be until someone with hate in their heart says to themselves, “Well if drones are on the table…” and straps a pipebomb to a quadcopter? I’m frankly shocked it hasn’t happened yet, and this line from the Atlantic article about the incident tells me that we need to have another conversation about normalization and depersonalization, right now, before it does:

“Because there was an imminent threat to officers, the decision to use lethal force was likely reasonable, while the weapon used was immaterial.”

Because if we keep this arms race up among civilian populations—and the police are still civilians, which literally means that they are not military, regardless of how good we all are at forgetting that—then it’s only a matter of time before the overlap between weapons systems and autonomous systems comes home.

And as always—but most especially in the wake of this week and the still-unclear events of today—if we can’t sustain a nuanced investigation of the actual meaning of nonviolence in the Reverend Doctor Martin Luther King, Jr.’s philosophy, then now is a good time to keep his name and words out of our mouths.

Violence isn’t only dynamic physical harm. Hunger is violence. Poverty is violence. Systemic oppression is violence. All of the invisible, interlocking structures that sustain disproportionate Power-Over at the cost of some person or persons’ dignity are violence.

Nonviolence means a recognition of these things and our places within them.

Nonviolence means using all of our resources in sustained battle against these systems of violence.

Nonviolence means struggle against the symptoms and diseases killing us all, both piecemeal, and all at once.

 

Further Links:


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.