Why we should care about ethics in AI

Bo Ren
Published in Samsung NEXT NY
Nov 6, 2019 · 7 min read


How does the role of responsibility and governance come into play for companies building the foundation of AI? How can AI be biased? What are the best practices around privacy and trust?

To discuss these questions, we hosted the second dinner in our What’s NEXT Founder Dinner series on October 24, with a focus on ethics in AI. We convened some of the brightest minds from IBM Quantum Computing, Group Project, Alpha Drive, and Argo Digital Ventures to chat about the social, legal, and human implications of what they are building.

We wanted to ask philosophical questions related to building a more just, equitable, and diverse future with AI.

Will AI eat the world?
From 2001 to Blade Runner to The Terminator, pop culture for decades has warned of robots and AI machines threatening to take over. More recently, the specter of automation has threatened to take our jobs and decimate the global workforce.

Who will be the winners and losers of an AI revolution? Will truck drivers lose their jobs with the emergence of AI-enabled autonomous vehicles?

Sally Simms from Group Project believes that many of the jobs disrupted by AI will be augmented, rather than eliminated, thanks to the greater accessibility that AI-enabled technologies create. For instance, more people will be able to code and develop products without a deep technical background, since AI-enabled no-code tools will make it easier to do so.

“Something I’m excited about right now is we built our initial product with my dev and data scientist last year… earlier this year [we] wanted to explore a new feature area, and it was going to be really burdensome for us to build on top of the stack we already had, so I built it with a no-code [framework] which was a good app builder,” she said. “I have consequently gone really deep in the whole no-code movement, [and] I’m really bullish about it.”

Sally built that feature using Bubble, a visual programming language for web and mobile applications. She believes the no-code movement will bring more diversity into the AI space, as people from all backgrounds will be able to build AI products.

How biased can AI be?
One of the biggest takeaways from the dinner centered on Hanlon’s Razor: “Never attribute to malice that which is adequately explained by ignorance.” Mistakes are often misconstrued as malice when, in reality, they stem from ignorance within an organization.

The problem is twofold: the training data used in many AI algorithms often creates biased results, and people also exhibit automation bias, a tendency to trust results simply because they come from an automated system. As a result, we give AI and algorithms more power than they deserve through blind faith in their outputs.

The first source of bias often comes from the datasets AI algorithms are trained on. Many training libraries are compiled by postdocs in the research community, but the underlying data may not be published, so the degree of bias in a dataset can’t always be determined.

Furthermore, these datasets are rarely updated or maintained after their creators move on from their research programs, which propagates bias as companies continue to train their models on these libraries.
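As a concrete illustration of where an audit could start (a minimal sketch of my own, not something built at the dinner), a team could tabulate how each group is represented in a training set and how labels are distributed within it before any model is trained. The dataframe and column names below are hypothetical.

```python
# A minimal dataset-audit sketch. The column names ("group", "label") are
# hypothetical stand-ins for whatever demographic and outcome fields a real
# training set contains.
import pandas as pd

def audit_group_balance(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Report each group's share of the data and its positive-label rate."""
    return (
        df.groupby(group_col)[label_col]
        .agg(count="count", positive_rate="mean")
        .assign(share=lambda summary: summary["count"] / len(df))
    )

if __name__ == "__main__":
    # Toy rows standing in for a real training set.
    train = pd.DataFrame({
        "group": ["a", "a", "a", "a", "b", "b"],
        "label": [1, 1, 0, 1, 0, 0],
    })
    print(audit_group_balance(train, "group", "label"))
```

A report like this won’t remove bias on its own, but it makes skew visible before it gets baked into a model and shipped.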

To combat this, Anamita Guha from IBM Quantum Computing encourages more diversity in the people who train AIs and build algorithms to reduce bias.

That’s only half the problem, though, as people often give those algorithms more power than they should through automation bias. One example can be seen in the criminal justice system, where more and more courts are using AI and algorithms to inform sentencing, parole, and bail decisions.

Judges have been using software called COMPAS to estimate the likelihood of criminal recidivism, and their sentencing decisions take into consideration the risk score COMPAS generates.

However, its algorithms were trained on data from previous court decisions, which was itself biased and led to longer sentences for African Americans than for white defendants evaluated for the same crime. The judges relying on those scores believed in the validity of the software and were not aware of the bias in COMPAS.

The creator of the sentencing software, Tim Brennan, has testified that he didn’t design his software to be used in sentencing. “I wanted to stay away from the courts,” Brennan said, explaining that his focus was on reducing crime rather than punishment. Despite media coverage and criticisms, judges are still using COMPAS to inform their sentencing practices.
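The analyses that publicized this bias, most notably ProPublica’s, compared error rates across groups: among people who did not go on to reoffend, Black defendants were flagged as high risk far more often than white defendants. A hedged sketch of that kind of check, using made-up numbers rather than real COMPAS data, might look like this:

```python
# Compare false positive rates (scored high-risk but did not reoffend) across
# groups. The arrays are illustrative toy data, not actual COMPAS records.
import numpy as np

def false_positive_rate(reoffended: np.ndarray, flagged_high_risk: np.ndarray) -> float:
    """Among people who did not reoffend, what fraction was flagged high risk?"""
    did_not_reoffend = reoffended == 0
    if did_not_reoffend.sum() == 0:
        return float("nan")
    return float(flagged_high_risk[did_not_reoffend].mean())

def fpr_by_group(reoffended, flagged_high_risk, groups):
    return {
        g: false_positive_rate(reoffended[groups == g], flagged_high_risk[groups == g])
        for g in np.unique(groups)
    }

if __name__ == "__main__":
    reoffended        = np.array([0, 0, 0, 1, 0, 0, 1, 0])
    flagged_high_risk = np.array([1, 1, 0, 1, 0, 0, 1, 0])
    groups            = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
    print(fpr_by_group(reoffended, flagged_high_risk, groups))
```

If two groups show very different false positive rates for the same underlying behavior, the score is doing harm regardless of anyone’s intent, which is Hanlon’s Razor playing out in production.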

How can we build an ethical AI from Day 1?
To prevent misuses of AI like COMPAS in the court system, we need to think about ways to prevent such biases from occurring in the first place. One way to do so is to think about the ethical implications of using AI during the design phase of building a product.

Dan Wu, a legal engineer from Immuta, spoke about the need to bring interdisciplinary thinking into the room from day one of a company. To prevent myopic decision-making and short-term tradeoffs, he believes social scientists and behavioral scientists should sit in the same room as AI engineers when early product decisions are made.

“I think this is exactly why social scientists and behavioral scientists need to be in the room,” Dan said. “Because [they need to be] working with AI engineers to think about the risks… we have to build systems where there’s governance and trust by design so the default action that you take on an AI model, or when you’re analyzing data, is the safest.”

Recently, Twitter founder and CEO Jack Dorsey shared that his biggest regret was not hiring a social scientist, a behavioral economist, and a game theorist during the early days of the company to better understand the addictive qualities of social media and its impact on society. Once a company reaches Twitter’s size, it’s often too late to retrofit the product to deal with the behavioral ramifications of how it’s used.

However, it may still be possible to create bumpers and buffers along the way for startups. To help them avoid the pitfalls of large companies, Dan believes startups should weigh many diverse points of view in their decision-making, across multiple cross-functional stakeholders.

Gauntlet founder Tarun Chitra spoke about the need to build a behavior-driven test environment as opposed to a development test environment. While a development test is written from the perspective of the developer, a behavioral test is written to measure how a product will impact the end user.
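As a rough illustration of that distinction (my own sketch, not Tarun’s framework), the toy pytest-style tests below score a loan applicant with a stand-in rule: the first test checks that the code does what the developer intended, while the second checks how the product treats groups of end users. The scoring rule, the groups, and the 5-point threshold are all hypothetical.

```python
# Toy contrast between a developer-level test and a behavior-level test.
# approve() is a hypothetical stand-in for a trained model's decision.
import statistics

def approve(applicant: dict) -> bool:
    """Hypothetical stand-in for a model's approval decision."""
    return applicant["income"] > 50_000

def test_high_income_is_approved():
    # Developer test: does the code behave as the developer intended?
    assert approve({"income": 80_000, "group": "a"}) is True

def test_approval_rates_are_comparable_across_groups():
    # Behavioral test: does the product treat groups of end users comparably?
    applicants = [
        {"income": 80_000, "group": "a"}, {"income": 30_000, "group": "a"},
        {"income": 60_000, "group": "b"}, {"income": 40_000, "group": "b"},
    ]

    def approval_rate(group: str) -> float:
        return statistics.mean(approve(a) for a in applicants if a["group"] == group)

    assert abs(approval_rate("a") - approval_rate("b")) <= 0.05
```

Developer tests like the first pass or fail with the code; behavioral tests like the second can fail even when every function works exactly as written, which is the point.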

Dan warned about the need to test risk and behavioral designs. If you are working in a regulated category like fintech or health tech, it’s understood that a single mistake can negatively affect people’s lives in a deep and intimate way. But startups in unregulated areas should also be thinking about the impact their products might have on vulnerable populations.

Every technologist should at least consider the deep-seated implications of their algorithm or model before unleashing it on the world. Tools like NIST’s Risk Assessment and the Omidyar Network’s ethics toolkit can help you ask the right questions. This goes beyond thinking through the behavioral implications; it also means designing the behavioral systems within an organization, using frameworks like data protection by design.

Everyone in the room agreed that engineers need a code of ethics similar to the Hippocratic Oath. A few companies are starting to infuse ethical checks and balances for data science into their daily operations.

I left the dinner with our AI experts feeling hopeful. We believe that the future of AI is bright but also riddled with bias, risk, and blind spots. The responsibility rests on us as builders, thinkers, and investors to establish ethical codes of conduct, recognize human and automation bias in AI, and create a culture in organizations that fosters multidisciplinary conversations.

If anything, the backlash against big tech companies has taught us that just because we can build something doesn’t mean we should. The world needs more than just engineers to make decisions that reverberate through society. We need to facilitate a multi-party discourse amongst AI engineers, data scientists, behavioral scientists, lawyers, and the people who are impacted by the technology they build.

P.S. If you have any data & AI ethics questions, or want to talk about inclusive smart cities, we recommend reaching out to Dan Wu, a friend of Samsung NEXT. Dan is a Privacy Counsel & Legal Engineer at a leading automated data governance platform for analytics.

Special thanks to Ryan Lawler, Daniel Wu, Anamita Guha, Vin Tang, Yuval Greenfield, and Jesse Freeman for helping me with this post and pushing me to think a little further.

Originally published at https://samsungnext.com on November 6, 2019.
