What does it mean to design products in the context of AI? The slides at the bottom of this post focus on the ways that AI technologies augment humans, creating what chess champion Garry Kasparov would call “centaurs” and what others might call “cyborgs.” This is true for both designers and the people they design for.
Welcome! There are many of us interested in the ethics of AI here at integrate.ai, and we wanted a format to explore and share our thoughts with a broader audience. So we decided to adopt the Slack chat format used by the team at FiveThirtyEight. This is our first chat, and we’ll be discussing ethics in AI.
Algorithms neither love nor suffer (despite all the hype about super-intelligent sentient robots). The products that algorithms power neither love nor suffer. But people build products and people use products. And a product that loves is one that, as Tyler Schnoebelen laid out in his recent Wrangle Talk, anticipates and respects the goals of the people it impacts. For us, this means building models with evaluation metrics beyond just precision, recall, or accuracy. It means setting objective functions that maximize not only profits, but the mutual benefit between company and consumer. It means helping businesses appreciate the miraculous nuances of people so they can provide contextual experiences and offer relevant products that may just make a consumer experience enjoyable.
I had heard about Wrangle for a while — a data science conference where folks come to talk about the hardest problems they’ve faced and how they’ve found their ways around them. It also has a rancher-rustler theme, though you can’t see the cowboy boots I wore in the newly-posted video of my talk.
Here’s how I kicked off my 20-minute talk, called “The Ethics of Everybody Else”:
There are lots of articles about Artificial Intelligence, but it’s pretty hard to pin people down with a definition. So in this article, I’ll pull out the best definitions of AI from the Harvard Business Review. The HBR understands the context for C-suite executives well, and CEOs of big companies usually have decades upon decades of experience.
But what if we swing to the opposite end of the spectrum — to people who have only been alive for half a decade? In the second part of the post, I’ll take data from child language acquisition studies and work out a definition of AI made up exclusively of words that five-year-olds understand. (These last paragraphs, for example, wouldn’t work at all.)
Professions run into ethical problems all the time. Consider engineering: the US sold $9.9b worth of arms in 2016 ($3.9b in missiles). The most optimistic reading is that instruments of death prevent death. Consider medicine: Medical research is dominated by concerns of market size and patentability, leaving basic questions like “is this fever from bacteria or virus” unanswered for people treating illnesses in low-income countries. Consider law: Lawyers upholding the law can break any normal definition of justice. Even in philosophy, ethicists are not known to be more moral than anyone else.
This is the visual version of my 5-page paper, “Goal-oriented design for ethical machine learning and NLP”, which you can find alongside a bunch of others by going to http://ethicsinnlp.com/program.
Organizations build machine learning systems so that they can predict and categorize data. But to get a system to do anything, you have to train it. This post is meant to help you figure out a budget for training data based on best practices.
This post wraps up a series I’ve been doing on using machine learning models to understand recent American political debates (here and here). By taking all the transcripts of the debates since last year, I show which words and phrases most distinguish debaters’ styles and issues. Training a computer to identify speakers is usually thought of as a way of doing forensics or personalization. But here, I’m interested in something closer to summarization. If you can pick one section of talk for each candidate from the last debate, which moments are most consistent with everything they’ve said up to then?
Let’s teach a computer to guess who-said-what in the first US presidential debate between Hillary Clinton and Donald Trump. This is a way of finding out which moments the candidates were most like themselves — as well as when they were most like Bernie Sanders or Ted Cruz.
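For a sense of the mechanics, here is a minimal sketch of that who-said-what setup, not the post’s actual pipeline: the segments and speaker labels below are hypothetical stand-ins for labeled turns pulled from debate transcripts.

```python
# A minimal sketch of the who-said-what idea, not the post's actual pipeline.
# The segments/speakers below are hypothetical stand-ins for labeled turns
# from debate transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

segments = [
    "We're going to bring the jobs back and win like never before.",
    "I have a plan to make college debt-free for working families.",
    "Nobody knows the system better than me, believe me.",
    "We need to invest in infrastructure, clean energy, and small business.",
]
speakers = ["Trump", "Clinton", "Trump", "Clinton"]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
classifier = LogisticRegression().fit(vectorizer.fit_transform(segments), speakers)

# For a new stretch of talk, the class probabilities act as a rough score for
# how much it sounds like each speaker; the highest-scoring segments per
# speaker are the moments where they were "most like themselves."
new_segment = vectorizer.transform(["We will bring back jobs and we will win again."])
for speaker, prob in zip(classifier.classes_, classifier.predict_proba(new_segment)[0]):
    print(speaker, round(prob, 2))
```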
Most academic papers and blogs about machine learning focus on improvements to algorithms and features. At the same time, the widely acknowledged truth is that throwing more training data into the mix beats work on algorithms and features. This post will get down and dirty with algorithms and features vs. training data by looking at a 12-way classification problem: people accusing banks of unfair, deceptive, or abusive practices.
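As a rough sketch of that comparison (not the post’s actual setup, and using scikit-learn’s 20 newsgroups corpus as a stand-in, since the bank-complaint data isn’t included here), you can hold the classifier fixed and vary both the amount of training data and the richness of the features:

```python
# Sketch of "more data vs. better features" on a stand-in multi-class text task.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

def score(ngram_range, n_train):
    """Train a bag-of-words classifier on the first n_train documents."""
    vec = CountVectorizer(ngram_range=ngram_range, min_df=2)
    X_train = vec.fit_transform(train.data[:n_train])
    X_test = vec.transform(test.data)
    clf = LogisticRegression(max_iter=1000).fit(X_train, train.target[:n_train])
    return accuracy_score(test.target, clf.predict(X_test))

# Compare richer features (unigrams + bigrams) against simply adding more data.
for n in (1000, 4000, len(train.data)):
    print(f"{n:>6} docs | unigrams: {score((1, 1), n):.3f} | "
          f"uni+bigrams: {score((1, 2), n):.3f}")
```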
You can get pretty far in text classification just by treating documents as bags of words where word order doesn’t matter. So you’d treat “It’s not reliable and it’s not cheap” the same as “It’s cheap and it’s not not reliable”, even though the first is a strong indictment and the second is a qualified recommendation. Surely it’s dangerous to ignore the ways words come together to make meaning, right?
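Here’s a quick toy illustration of that collapse (not code from the post): counting words with Python’s Counter makes the indictment and the qualified recommendation come out identical.

```python
# Toy illustration of the bag-of-words point above: once word order is
# discarded, the two sentences have exactly the same word counts.
from collections import Counter

indictment = "it's not reliable and it's not cheap"
recommendation = "it's cheap and it's not not reliable"

bag_1 = Counter(indictment.split())
bag_2 = Counter(recommendation.split())

# Both bags contain it's x2, not x2, and one each of reliable/and/cheap,
# so a bag-of-words classifier literally cannot tell them apart.
print(bag_1 == bag_2)  # True
```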
Some of the earliest applications of artificial intelligence in healthcare were in diagnosis—it was a major push in expert systems, for example, where you aim to build up a knowledge base that lets software be as good as a human clinician. Expert systems hit their peak in the late 1980s, but required a lot of knowledge to be encoded by people who had lots of other things to do. Hardware was also a problem for AI in the 1980s.
There’s Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and Nuance’s Nina. Sure, Facebook has “M”, Google has “Google Now”, and Siri’s voice isn’t always that of a woman. But it does feel worth noting that (typically male-dominated) engineering groups routinely give women’s names to the things you issue commands to. Is artificial intelligence work about Adams making Eves?
At their best, chatbots help you get things done. At their worst, they spew toxic nonsense. Whether we call them chatbots, intelligent agents, or virtual agents, the basic idea is that you shouldn’t need to bother with human interaction for things that computers can do quickly and efficiently: ask questions about a flight, manage your expenses, order a pizza, tell you the weather, and apply for a job. A lot of these are handy but may not feel quite like artificial intelligence. Later in this post, we’ll tackle the relationship between detecting intentions, having conversations, and building trust as the core pieces that make a chatbot feel more like artificial intelligence.
You can turn right on red in Iowa. Except not where I was last night, from Washington Street on to Linn, which I only realized as I read the “no right on red” sign mid-turn. You’re definitely not supposed to turn left on red, which is what I did a few blocks earlier going from Iowa St. to Clinton. I have no excuse except—I’m not kidding—my mind was preoccupied by thoughts about self-driving cars.
Earlier this week, I talked about the major themes in how the press has been covering artificial intelligence since 2015. This post puts the recent months in context, looking at whether we are in an AI springtime (tl;dr: we are) and when the AI winter(s) were. A big theme is going to be hype-and-disappointment, so we’ll close on “are we in hyperhype right now?”