What does it mean to design products in the context of AI? The slides at the bottom of this post focus on the ways that AI technologies augment humans, creating what chess champion Garry Kasparov would call “centaurs” and what others might call “cyborgs.” This is true for both designers and the people they design for.
I’ll kick off this post with a definition of trust, but focus on an analysis of trust in everyday and not-so-everyday situations: about 12,000 conversations among friends, family members, and strangers. About 10% of these conversations make some mention of trust. Then I’ll turn to more extreme situations exemplified by characters in 135 different TV shows. Episodes are longer than conversations, but on average 53% of a show’s episodes make at least one mention of trust.
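The underlying measurement is simple: for each transcript, check whether a trust-related word appears at least once, then take the share of transcripts that do. Here's a minimal Python sketch of that counting step, assuming the conversations and episodes live as plain-text transcripts; the directory names and file layout are hypothetical, not the corpora used in the post.

```python
import re
from pathlib import Path

# Match "trust", "trusts", "trusted", "trusting", "distrust", "mistrust", etc.
TRUST_PATTERN = re.compile(r"\b(dis|mis)?trust\w*\b", re.IGNORECASE)

def share_mentioning_trust(transcript_dir: str) -> float:
    """Fraction of transcripts with at least one trust-related word."""
    paths = list(Path(transcript_dir).glob("*.txt"))
    if not paths:
        return 0.0
    hits = sum(
        1 for p in paths
        if TRUST_PATTERN.search(p.read_text(encoding="utf-8"))
    )
    return hits / len(paths)

# Hypothetical directories; the post's corpora aren't distributed with it.
print(f"Conversations: {share_mentioning_trust('conversations/'):.1%}")
print(f"TV episodes:   {share_mentioning_trust('tv_episodes/'):.1%}")
```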
We just posted the video for our latest AI in the 6ix (come to the next one!).
Our panel grappled with two use cases that have strong ethical implications: giving judges bail and sentencing recommendations, and addressing the social media filter bubbles that facilitate the spread of divisive politics.
The most basic information about customer transactions tells you what someone bought, when they bought it, and for how much. But if that’s all you see, you’ve pretty much reduced people to rows in your spreadsheet and put to bed any ambition of understanding the relationships you have with customers. This is a post about coffee, but it’s also about waking up to the meaning and motivations behind transaction data.
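As an illustration of the gap between rows and relationships, here is a small pandas sketch that rolls raw transactions up into a per-customer profile (a usual drink, a usual hour) rather than just a total. The coffee-shop column names and values are made up for the example.

```python
import pandas as pd

# Hypothetical raw transaction log: one row per purchase.
transactions = pd.DataFrame({
    "customer_id": ["a", "a", "a", "b", "b"],
    "item": ["latte", "latte", "espresso", "drip", "drip"],
    "amount": [4.50, 4.50, 3.00, 2.50, 2.50],
    "timestamp": pd.to_datetime([
        "2017-05-01 07:55", "2017-05-02 08:02", "2017-05-06 14:30",
        "2017-05-01 12:10", "2017-05-08 12:05",
    ]),
})

# Roll the rows up into per-customer behavior, not just totals.
profiles = transactions.groupby("customer_id").agg(
    visits=("item", "size"),
    total_spend=("amount", "sum"),
    usual_item=("item", lambda s: s.mode().iloc[0]),
    usual_hour=("timestamp", lambda s: int(s.dt.hour.median())),
)
print(profiles)
```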
Welcome! There are many of us interested in the ethics of AI here at integrate.ai, and we wanted a format to explore and share our thoughts with a broader audience. So we decided to adopt the slack chat format used by the team at FiveThirtyEight. This is our first chat, and we’ll be discussing ethics in AI.
Algorithms neither love nor suffer (despite all the hype about super-intelligent sentient robots). The products that algorithms power neither love nor suffer. But people build products and people use products. And a product that loves is one that, as Tyler Schnoebelen laid out in his recent Wrangle Talk, anticipates and respects the goals of the people it impacts. For us, this means building models with evaluation metrics beyond just precision, recall, or accuracy. It means setting objective functions that maximize not only profits, but the mutual benefit between company and consumer. It means helping businesses appreciate the miraculous nuances of people so they can provide contextual experiences and offer relevant products that may just make a consumer experience enjoyable.
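To make the objective-function point concrete, here is a toy sketch of ranking offers by profit plus a weighted estimate of customer benefit, rather than by profit alone. The weights, offer names, and benefit estimates are hypothetical illustrations, not a description of any production model.

```python
def offer_score(expected_profit: float,
                expected_customer_benefit: float,
                benefit_weight: float = 0.5) -> float:
    """Rank an offer by expected profit plus weighted customer benefit."""
    return expected_profit + benefit_weight * expected_customer_benefit

# Two made-up offers: one extractive, one genuinely useful to the customer.
offers = {
    "aggressive upsell": offer_score(expected_profit=12.0, expected_customer_benefit=-8.0),
    "relevant discount": offer_score(expected_profit=6.0, expected_customer_benefit=8.0),
}
best = max(offers, key=offers.get)
print(best, offers[best])  # the relevant discount wins under this objective
```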
I had heard about Wrangle for a while — a data science conference where folks come to talk about the hardest problems they’ve faced and how they’ve found their way around them. It also has a rancher-rustler theme, though you can’t see the cowboy boots I wore in the newly posted video of my talk.
Here’s how I kicked off my 20-minute talk, called “The Ethics of Everybody Else”:
A podcast you can listen to (or read the transcript of) about conversational AIs (chatbots!) and what we learn from thinking about emoji.
There are lots of articles about Artificial Intelligence, but it’s pretty hard to pin people down with a definition. So in this article, I’ll pull out the best definitions of AI from the Harvard Business Review. The HBR understands the context for C-suite executives well, and CEOs of big companies usually have decades upon decades of experience.
But what if we swing to the opposite end of the spectrum — to people who have only been alive for half a decade? In the second part of the post, I’ll take data from child language acquisition studies and work out a definition of AI made up exclusively of words that five-year-olds understand. (These last paragraphs wouldn’t work at all.)
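The filtering step itself is straightforward: check every word of a candidate definition against an age-of-acquisition lexicon and flag the words kids don't know by age five. A minimal sketch, where the lexicon entries and candidate definitions are placeholders rather than the acquisition data used in the post:

```python
# Hypothetical age-of-acquisition lexicon: word -> typical age (years) at
# which children know it. Real lexicons cover tens of thousands of words.
age_of_acquisition = {
    "a": 2, "an": 2, "computer": 4, "that": 2, "can": 2, "learn": 3,
    "to": 2, "do": 2, "things": 3, "by": 3, "itself": 4,
    "algorithm": 10, "autonomous": 12, "cognitive": 13,
}

def too_hard_for_five_year_olds(definition: str, max_age: int = 5) -> list[str]:
    """Return the words in a definition that kids don't know by max_age."""
    words = definition.lower().replace(".", "").split()
    return [w for w in words if age_of_acquisition.get(w, 99) > max_age]

print(too_hard_for_five_year_olds("A computer that can learn to do things by itself"))
# -> [] : every word passes
print(too_hard_for_five_year_olds("An autonomous cognitive algorithm"))
# -> ['autonomous', 'cognitive', 'algorithm']
```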
Toronto-based Integrate.ai has announced the appointment of Kathryn Hume, Tyler Schnoebelen, and Jason Silver to its leadership team.
A well-funded Toronto artificial-intelligence startup founded by a former Facebook Inc. executive has snagged three marquee hires.
Professions run into ethical problems all the time. Consider engineering: the US sold $9.9b worth of arms in 2016 ($3.9b in missiles). The most optimistic reading is that instruments of death prevent death. Consider medicine: medical research is dominated by concerns of market size and patentability, leaving basic questions like “is this fever from bacteria or a virus?” unanswered for people treating illnesses in low-income countries. Consider law: lawyers upholding the law can break any normal definition of justice. Even in philosophy, ethicists are not known to be more moral than anyone else.
This is the visual version of my 5-page paper, “Goal-oriented design for ethical machine learning and NLP”, which you can find alongside a bunch of others by going to http://ethicsinnlp.com/program.
Organizations build machine learning systems so that they can predict and categorize data. But to get a system to do anything, you have to train it. This post is meant to help you figure out a budget for training data based on best practices.
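At its simplest, a training-data budget is arithmetic: how many examples you need, how many redundant labels per example, and what each label costs. Here is a back-of-the-envelope sketch; every number below is a placeholder to swap for your own targets and labeling rates, not a recommendation from the post.

```python
def training_data_budget(num_classes: int,
                         examples_per_class: int,
                         labels_per_example: int,
                         cost_per_label: float) -> float:
    """Estimated labeling cost: examples x redundant labels x unit cost."""
    total_examples = num_classes * examples_per_class
    return total_examples * labels_per_example * cost_per_label

# e.g. 20 intents, 500 examples each, 3 annotators per example, $0.05/label
print(f"${training_data_budget(20, 500, 3, 0.05):,.2f}")  # -> $1,500.00
```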
Some of the earliest applications of artificial intelligence in healthcare were in diagnosis—it was a major push in expert systems, for example, where you aim to build up a knowledge base that lets software be as good as a human clinician. Expert systems hit their peak in the late 1980s, but required a lot of knowledge to be encoded by people who had lots of other things to do. Hardware was also a problem for AI in the 1980s.
There’s Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and Nuance’s Nina. Sure, Facebook has “M”, Google has “Google Now”, and Siri’s voice isn’t always that of a woman. But it does feel worth noting that (typically male-dominated) engineering groups routinely give women’s names to the things you issue commands to. Is artificial intelligence work about Adams making Eves?
At their best, chatbots help you get things done. At their worst, they spew toxic nonsense. Whether we call them chatbots, intelligent agents, or virtual agents, the basic idea is that you shouldn’t need to bother with human interaction for things that computers can do quickly and efficiently: ask questions about a flight, manage your expenses, order a pizza, tell you the weather, and apply for a job. A lot of these are handy but may not feel quite like artificial intelligence. Later in this post, we’ll tackle the relationship between detecting intentions, having conversations, and building trust as the core pieces that make a chatbot feel more like artificial intelligence.
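The first of those pieces, detecting intentions, is usually framed as text classification: map an utterance to one of a fixed set of intents. Here is a toy sketch with scikit-learn, using a handful of made-up utterances and intent labels; real systems need far more data and richer models, but the shape of the problem is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training utterances covering the kinds of tasks mentioned above.
utterances = [
    "is my flight on time", "when does my flight board",
    "file this receipt", "add this lunch to my expenses",
    "order a large pepperoni pizza", "get me a pizza",
    "will it rain tomorrow", "what's the weather this weekend",
]
intents = [
    "flight_status", "flight_status", "expenses", "expenses",
    "order_pizza", "order_pizza", "weather", "weather",
]

# TF-IDF features feeding a logistic regression intent classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)
print(model.predict(["is it going to rain this weekend"]))  # likely 'weather'
```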