Our Algorithms are Biased

By Miriam McKinney

Miriam McKinney is a data analyst at Johns Hopkins University’s Center for Government Excellence (GovEx), whose mission is to help governments use data to make informed decisions and improve people’s quality of life. She is a recent graduate of Columbia University’s Quantitative Methods in the Social Sciences program and a co-creator of the newly debuted Ethics & Algorithms Toolkit.

Feb 14, 2019 | Technology | 3 comments

Algorithms are enticing. Algorithms are fascinating. Algorithms are the “new and cool” tool for many of us. And algorithms can do a world of good. But algorithms can also be problematic.


Let’s be honest with ourselves.

All people have bias, all data have bias, therefore all algorithms have bias. That is the truth. If we are not perfect, then neither are our creations. The sooner we all congregate around this truth, the sooner we might stop seeing shocking news stories like “Amazon’s hiring algorithm is sexist,” “Google’s face recognition algorithm tags black people as gorillas,” or “controversial machine learning algorithm disparately sends black offenders to prison.” Algorithms are mirrors, reflecting the truths and inequalities that they recognize within our data. All they do is display them back to us; why are we shocked when we see them? So, yes, your algorithms (yes, yours) are biased. Let’s just start there.
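
To make the “mirror” point concrete, here is a minimal sketch in Python built on entirely synthetic data. The group names, hire rates, and the toy “model” are illustrative assumptions, not any real company’s system:

```python
# A minimal sketch of the "algorithms are mirrors" idea, built on
# synthetic data. The groups, rates, and toy "model" below are
# illustrative assumptions, not any real hiring system.
import random

random.seed(0)

# Hypothetical historical records of (group, was_hired). Group "A" was
# hired at ~70%, group "B" at ~30%: the bias lives in the data itself.
history = [("A", random.random() < 0.7) for _ in range(500)] + \
          [("B", random.random() < 0.3) for _ in range(500)]

# A deliberately simple "model": score each group by its historical
# hire rate. Trained only on biased outcomes, it can only echo them.
hire_rate = {}
for group in ("A", "B"):
    outcomes = [hired for g, hired in history if g == group]
    hire_rate[group] = sum(outcomes) / len(outcomes)

print(hire_rate)  # roughly {'A': 0.7, 'B': 0.3}: the data's bias, mirrored back
```

Nothing in the arithmetic is malicious; the skew comes entirely from the records the model was given, which is exactly why auditing the data matters as much as auditing the code.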

Our creations are not without flaw. Despite being advertised as tools that “simplify” and make processes more efficient, algorithms are still complex. They are nuanced, and they always require injections of purposeful thoughtfulness and extreme caution. They all have their issues. If we admit this to one another, setting aside personal feelings of guilt or shame, and then stand together in these truths, we can all begin working toward solutions.

Let’s all say this together: All algorithms start out imperfect. That doesn’t mean they have to stay that way. Now, let’s get to work.


Something that can help.

Recently, a multidisciplinary, bi-coastal team (of which I am a part) created and debuted the Ethics & Algorithms Toolkit, a tool that can help ameliorate the sometimes-problematic consequences that algorithms pose. For our team, the most feasible, helpful, and active solution to the issues that algorithms pose is risk assessment and mitigation. The risks are there; what are you going to do to address them?


Our work on algorithms

As I’ve mentioned, our team just crafted a new tool that focuses on integrating several layers of ethical consideration into the process of evaluating algorithms. We are a group of passionate individuals from different walks of life and career paths with a common interest: fighting for better quality of life and outcomes for all. To do this in the context of algorithms, we decided to create a tool that bridges the gap between experienced data scientists and the average government practitioner. The challenge we faced was to create an interactive, intuitive risk scoring tool complex enough to address advanced data science concepts yet simple enough not to overwhelm the reader. What started as a lengthy, clunky, formal document was transformed into a short, color-coded, plug-and-play tool for all.

The toolkit’s team comprises myself, a data analyst at the Center for Government Excellence (GovEx) and recent graduate of the Quantitative Methods in the Social Sciences (QMSS) program at Columbia University; Andrew Nicklin, director of Data Practices at GovEx; Joy Bonaguro, former Chief Data Officer of the City of San Francisco and current Director of People, Operations and Data at Corelight; Dave Anderson, advisory board member at Data Community DC; and Jane Wiseman, senior fellow at the Ash Center for Democratic Governance and Innovation at Harvard University. Moving forward, our friend Mo Johnson, Project Lead of the Global Data Ethics Project at Data for Democracy, will also be adding her expertise. But enough about us; let’s get back to the toolkit.


The Ethics & Algorithms Toolkit

For almost a year, our team has been working on a toolkit to help readers navigate the nuanced, complicated conversations that surround algorithms and the data they consume. The project came about after a small workshop held in San Francisco in February 2018. The conversation there about data science and transparency for laypeople led us to the idea that a new resource was needed to bridge the gap between data scientists and non-data scientists. Today, our tool exists in several parts that can and should be used in conjunction. We begin with a hearty background section to get readers acclimated to terms like “black box” and “reinforcement learning” and to our structure, then continue through our risk assessment (which comes with a handy worksheet) and risk mitigation sections, and conclude with appendices.
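
To give a feel for what a risk assessment might look like in practice, here is a hypothetical sketch in Python. The questions, the 1–3 scoring, and the “worst score wins” roll-up are invented for illustration; they are not the toolkit’s actual rubric, which readers should consult directly:

```python
# A hypothetical sketch of a color-coded risk worksheet. The questions,
# the 1-3 scoring, and the "worst score wins" roll-up are invented for
# illustration; consult the actual toolkit for its real rubric.
from dataclasses import dataclass

@dataclass
class RiskItem:
    question: str
    score: int  # 1 = low risk, 2 = medium risk, 3 = high risk

def overall_risk(items):
    """Roll per-question scores up into an overall color rating."""
    worst = max(item.score for item in items)
    return {1: "green", 2: "yellow", 3: "red"}[worst]

worksheet = [
    RiskItem("Does the algorithm affect people's rights or access to services?", 3),
    RiskItem("Is the model a black box to the people it affects?", 2),
    RiskItem("Could the training data encode historical bias?", 3),
]

print(overall_risk(worksheet))  # "red": plan mitigation before deployment
```

A “worst score wins” roll-up is only one plausible design; the point is that each question forces a conversation, and a red rating sends the team to the mitigation section before anything ships.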

The beta version of our toolkit then received its official debut at D4GX (the Data for Good Exchange, held at Bloomberg HQ) in September 2018.


Real use cases.

A large mid-Atlantic city (for privacy reasons, we cannot yet divulge which city) recently held an internal data science meeting to discuss the toolkit, and plans to use it for two upcoming data science projects: one around a housing initiative, and one around a public health initiative. During this meeting, I noticed how curious people were about how they could use our tool. I also noticed how much the toolkit sparked different paths for conversation, which is our intention. We want people to get into new, invigorating conversations with their colleagues about algorithms that they are using or want to use. Two other U.S. cities have contacted us personally to let us know that they have forked our work and are using it to structure data science projects. Conversations around the toolkit have been energetic and complimentary, with many people saying that this tool has reached them at the perfect time. We are excited to know that our work is resonating with so many.

We’ve also received a tremendous amount of positive feedback in 2018 at events like MetroLab, DataCon, and internal city meetings that GovEx has attended. In 2019, we hope to continue to gather feedback and partner with cities using the toolkit to work through live projects and evaluate algorithms.


Where are we headed next?

Looking to the future, we are really excited to see the unique and intimate ways in which our tool is used across the country. We hope to continue to build upon our work and have many ideas for how to make this tool even better. In order to do this, we will continue conversations with our partners, former colleagues, and friends around the country. Additionally, GovEx plans to integrate this toolkit into training coursework in 2019. To see our training offerings, you can visit https://govex.academy.


3 Comments

  1. Excellent perspective on science and data management! I love the insights you offer and the direction you are taking to improve our results.

    These are powerful tools for making progress in understanding our communities and our ecosystems, improving them, and preventing unanticipated bad consequences. There is no perfection in life or in algorithms because variables are always changing. But without measurement and modeling, our efforts at improvement become less effective than they could be. There will always be variables we cannot see or do not know how to measure. The more help we get in seeing and measuring them (and their impacts), the better results we can achieve and the better algorithms we can develop. You are right: we need artists, governments, community engagement, plus creative thinkers and writers to help us gain insights about people experiencing life from various perspectives. The model of “the blind men and the elephant” can help us understand and appreciate the life we are living and experiencing when we design algorithms, measure them, and test the results. Thus we need collaboration in all dimensions of life.

    The development of Smart Cities, Happy Cities, and Evolving Cities in multiple contexts is adding powerful insights and pleasure into our work on learning from our mistakes and making our future more resilient, sustainable, and enjoyable for all citizens. We can repair the damage of waste and pollution by recycling and reusing the unhealthy byproducts of incomplete or defective processes, by understanding that algorithms are biased, and by improving them. Murphy’s Law and its corollaries provide valuable insights into new opportunities for improvement.

    I look forward to learning what new creative insights and breakthrough processes result from your toolkit and your continued work on improving data science and transparency. My own work on integrating multiple dimensions of communities (e.g., Energies, Environments and Economics, E-cubed) can benefit tremendously from using your new resource toolkit.

  2. How do you define a desirable city environment?
    Considering it should be beneficial to all participants.

  3. This question seems to be answered by your next sentence. I prefer the word “stakeholders” over “participants” because not all stakeholders can participate in decision making. Moreover, “stakeholders” and “participants” don’t fully understand what is truly in their own benefit at any given time — because their knowledge will always be incomplete. A desirable (optimal) city environment will meet the needs and happiness of all stakeholders — given current understanding of what is possible and what will work. Just as the “blind men” don’t understand what an elephant is until they compare notes and observations with all the other “blind” observers. That is why we must recognize that all understanding is imperfect and therefore we must remain open to learning more and adjusting our behavior to take advantage of new understandings. The best way of understanding “climate change” is to look at it from a global perspective so we do not miss some critical variables. We cannot simply educate the entire human race as we discover new variables, but we can best help them understand when we apply a global perspective.

