Our Algorithms are Biased

by Miriam McKinney | Feb 14, 2019 | Technology

Miriam McKinney

Miriam McKinney is a data analyst at Johns Hopkins University’s Center for Government Excellence (GovEx), whose mission is to help governments use data to make informed decisions and improve people’s quality of life; a recent graduate of Columbia University’s Quantitative Methods in the Social Sciences program; and a co-creator of the newly debuted Ethics & Algorithms Toolkit.



Algorithms are enticing. Algorithms are fascinating. Algorithms are the “new and cool” tool for many of us. And algorithms can do a world of good. But algorithms can also be problematic.


Let’s be honest with ourselves.

All people have bias, all data have bias, and therefore all algorithms have bias. That is the truth. If we are not perfect, then neither are our creations. The sooner we all congregate around this truth, the sooner we might stop seeing shocking news stories like “Amazon’s hiring algorithm is sexist,” “Google face recognition algorithm tags black people as gorillas,” or “controversial machine learning algorithm disparately sends black offenders to prison.” Algorithms are mirrors, reflecting the truths and inequalities that they recognize within our data. All they do is display them back to us; why are we shocked when we see them? So, yes, your algorithms (yes, yours) are biased. Let’s just start there.
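
To make that concrete, here is a toy sketch in Python with invented records (the keyword stands in for the proxy variables reported in the Amazon hiring story; none of this data is real). The “model” is nothing more than a learned hire rate, but a more sophisticated learner fit to the same labels would mirror the same disparity, just less legibly:

```python
"""A toy illustration of how an algorithm mirrors the bias in its
training data. All records below are hypothetical."""

from collections import defaultdict

# Hypothetical past hiring decisions: (resume_keyword, was_hired).
history = [
    ("chess_club", True), ("chess_club", True), ("chess_club", False),
    ("womens_team", False), ("womens_team", False), ("womens_team", True),
]

# "Train" the simplest possible model: the historical hire rate per keyword.
counts = defaultdict(lambda: [0, 0])  # keyword -> [hires, total]
for keyword, hired in history:
    counts[keyword][0] += int(hired)
    counts[keyword][1] += 1

def score(keyword: str) -> float:
    """Predicted hire probability, learned entirely from past decisions."""
    hires, total = counts[keyword]
    return hires / total

print(score("chess_club"))   # ~0.67 -- favored, because history favored it
print(score("womens_team"))  # ~0.33 -- penalized, for the same reason
```

The model never saw anyone’s gender; it simply displayed the pattern in the data back to us.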

Our creations are not without flaw. Despite being advertised as entities that “simplify” and make processes more efficient, algorithms are still complex and nuanced, and they always require purposeful thoughtfulness and extreme caution. They all have their issues. If we admit this to one another, set aside personal feelings of guilt or shame, and stand together in these truths, we can all begin working toward solutions.

Let’s all say this together: All algorithms start out imperfect. That doesn’t mean they have to stay that way. Now, let’s get to work.


Something that can help.

Recently, a multidisciplinary, bi-coastal team (of which I am a part) created and debuted The Ethics & Algorithms Toolkit, a tool that can help ameliorate the sometimes-problematic consequences that algorithms pose. For our team, the most feasible, helpful, and active solution to those issues is risk assessment and mitigation. The risks are there; what are you going to do to address them?


Our work on algorithms

As I’ve mentioned, our team just crafted a new tool that integrates several layers of ethical consideration into the process of evaluating algorithms. We are a group of passionate individuals from different walks of life and career paths with a common interest: fighting for better quality of life and outcomes for all. To do this in the context of algorithms, we decided to create a tool that bridges the gap between experienced data scientists and the average government practitioner. The challenge we faced was to create an interactive, intuitive risk scoring tool complex enough to address advanced data science concepts yet simple enough not to overwhelm the reader. What started as a lengthy, clunky, formal document transformed into a short, color-coded, plug-and-play tool for all.
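
To give a flavor of what “color-coded, plug-and-play” risk scoring can look like, here is a purely hypothetical sketch. The toolkit itself is a document and worksheet rather than software, so the questions, weights, and thresholds below are invented for illustration and are not the toolkit’s actual content:

```python
"""A hypothetical color-coded risk score: answer a few yes/no
questions, sum the weights, map the total to a traffic-light color.
Every question, weight, and threshold here is invented."""

# Invented assessment questions and the weight a "yes" contributes.
QUESTIONS = [
    ("Does the algorithm make decisions about individual people?", 3),
    ("Is the model a black box with unexplainable outputs?", 2),
    ("Does the training data encode known historical inequities?", 3),
    ("Is a human review step required before decisions take effect?", -2),
]

def risk_color(answers: list[bool]) -> str:
    """Map one yes/no answer per question to a risk color."""
    total = sum(weight for (_, weight), yes in zip(QUESTIONS, answers) if yes)
    if total >= 5:
        return "red"     # high risk: mitigate before deploying
    if total >= 2:
        return "yellow"  # moderate risk: document and monitor
    return "green"       # lower risk: proceed with routine review

# Example: person-level decisions, a black box, biased data, no review.
print(risk_color([True, True, True, False]))  # -> red (total = 8)
```

However the real assessment is implemented, the shape is the same: structured questions in, a prioritized risk level out.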

The toolkit’s team comprises myself, a data analyst at the Center for Government Excellence (GovEx) and a recent graduate of the Quantitative Methods in the Social Sciences (QMSS) program at Columbia University; Andrew Nicklin, Director of Data Practices at GovEx; Joy Bonaguro, former Chief Data Officer of the City of San Francisco and current Director of People, Operations and Data at Corelight; Dave Anderson, advisory board member at Data Community DC; and Jane Wiseman, senior fellow at the Ash Center for Democratic Governance and Innovation at Harvard University. Moving forward, our friend Mo Johnson, Project Lead of the Global Data Ethics Project at Data for Democracy, will also be adding her expertise. But enough about us; let’s get back to the toolkit.


The Ethics & Algorithms Toolkit

For almost a year, our team has been working on a toolkit to help readers navigate the nuanced, complicated conversations that surround algorithms and the data they consume. The project came about after a small workshop held in San Francisco in February 2018, where a conversation about data science and transparency for laypeople led us to the idea that a new resource was needed to bridge the gap between data scientists and non-data scientists. Today, our tool exists in several parts that can and should be used in conjunction. We begin with a hearty background that acclimates readers to terms like “black box” and “reinforcement learning” and to our structure, continue through our risk assessment (which comes with a handy worksheet) and risk mitigation sections, and conclude with appendices.

The beta version of our toolkit then received its official debut at D4GX (the Data for Good Exchange, held at Bloomberg headquarters) in September 2018.


Real use cases.

A large mid-Atlantic city (for privacy reasons, we cannot yet divulge which city) recently held an internal data science meeting to discuss the toolkit, and plans to use it for two upcoming data science projects: one around a housing initiative, and one around a public health initiative. During this meeting, I noticed how curious people were about how they could use our tool. I also noticed how much the toolkit sparked different paths for conversation, which is our intention. We want people to get into new, invigorating conversations with their colleagues about algorithms that they are using or want to use. Two other U.S. cities have contacted us personally to let us know that they have forked our work and are using it to structure data science projects. Conversations around the toolkit have been energetic and complimentary, with many people saying that this tool has reached them at the perfect time. We are excited to know that our work is resonating with so many.

We’ve also received a tremendous amount of positive feedback in 2018 at events like MetroLab, DataCon, and internal city meetings that GovEx has attended. In 2019, we hope to continue to gather feedback and partner with cities using the toolkit to work through live projects and evaluate algorithms.


Where are we headed next?

Looking to the future, we are really excited to see the unique and intimate ways in which our tool is used across the country. We hope to continue to build upon our work and have many ideas for how to make this tool even better. In order to do this, we will continue conversations with our partners, former colleagues, and friends around the country. Additionally, GovEx plans to integrate this toolkit into training coursework in 2019. To see our training offerings, you can visit https://govex.academy.

