Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.
The goal of this summit was to begin examining how issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge, as we saw it, was figuring out how the university and public policy sectors in America can get better at dealing with these issues, rather than continuing to get seemingly worse at it, as we have so far.
Here’s the schedule. Full notes, below the cut.
Friday, June 8, 2018
- Josh Brown on “the distinction between passive and active AI.”
- Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
- Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
- Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
- Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
- Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and how this likely influences the ‘education’ of AI systems.”
- Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.
Saturday, June 9, 2018
- Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
- Ras Michael Brown on using the religious technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
- Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
- Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
- Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
- Emma Stamm on the datafication of the self, and what about us might be uncomputable.
- Joshua Earle on “Morphological Freedom.”
I talked with Hewlett Packard Enterprise’s Curt Hopkins, for their article “4 obstacles to ethical AI (and how to address them).” We spoke about the kinds of specific tools and techniques by which people who populate or manage artificial intelligence design teams can incorporate expertise from the humanities and social sciences. We also talked about compelling reasons why they should do this, other than the fact that they’re just, y’know, very good ideas.
From the Article:
To “bracket out” bias, Williams says, “I have to recognize how I create systems and code my understanding of the world.” That means making an effort early on to pay attention to the data entered. The more diverse the group, the less likely an AI system is to reinforce shared bias. Those issues go beyond gender and race; they also encompass what you studied, the economic group you come from, your religious background, all of your experiences.
That becomes another reason to diversify the technical staff, says Williams. This is not merely an ethical act. The business strategy may produce more profit because the end result may be a more effective AI. “The best system is the one that best reflects the wide range of lived experiences and knowledge in the world,” he says.
[Image of two blank, white, eyeless faces, partially overlapping each other.]
To be clear, this is an instance in which I tried to find capitalist reasons that would convince capitalist people to do the right thing. To that end, you should imagine that all of my sentences start with “Well if we’re going to continue to be stuck with global capitalism until we work to dismantle it…” Because they basically all did.
I get how folx might think that framing would be a bit of a buzzkill for a tech industry audience, but I do want to highlight and stress something: Many of the ethical problems we’re concerned with mitigating or ameliorating are direct products of the capitalist system in which we are making these choices and building these technologies.
All of that being said, I’m not the only person in that article with something interesting to say, so you should go check out the rest of my comments, as well as everyone else’s.
Until Next Time.