
Everyone’s talking about ethics in AI. Here’s what they’re missing


The systems that sustain our lives increasingly rely on algorithms to function. Governance, energy grids, food distribution, supply chains, healthcare, fuel, global banking, and much else are becoming increasingly automated in ways that affect all of us. Yet the people developing the machine learning, data collection, and analysis that currently drive this automation do not represent all of us, and are not considering all of our needs equally. We are in deep.

Most of us do not have an equal voice or representation in this new world order. Leading the way instead are scientists and engineers who don't seem to understand how to represent the main ways we live, work, cooperate, and exist together, as individuals and in groups, nor how to incorporate our ethnic, cultural, gender, age, geographic, or economic diversity into their models. The result is that AI will benefit some of us far more than others, depending on who we are, our gender and ethnic identities, how much income or power we have, where we are in the world, and what we want to do.

This isn’t new. The power structures that developed the world’s complex civic and corporate systems were not initially concerned with diversity or equality, and as these systems migrate to becoming automated, untangling and teasing out the meaning for the rest of us becomes much more complicated. In the process, there is a risk that we will become further dependent on systems that don’t represent us. Furthermore, there is an increasing likelihood that we must forfeit our agency in order for these complex automated systems to function. This could leave most of us serving the needs of these algorithms, rather than the other way around.

The computer science and artificial intelligence communities are starting to awaken to the profound ways that their algorithms will impact society, and are now attempting to develop guidelines on ethics for our increasingly automated world. The EU has developed principles for ethical AI, as have the IEEE, Google, Microsoft, and other governments and corporations. Among the more recent and prominent efforts is a set of principles crafted by the Organisation for Economic Co-operation and Development (OECD), an intergovernmental organization of 37 member countries focused on economic policy and world trade.

In various ways, these standards attempt to address the inequality that results from AI and automated, data-driven systems. As OECD Secretary-General Angel Gurría put it in a recent speech announcing the guidelines, the anxieties around AI place "the onus on governments to ensure that AI systems are designed in a way that respects our values and laws, so people can trust that their safety and privacy will be paramount. These Principles will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all."

However, not all ethics guidelines are developed equally—or ethically. Often, these efforts fail to recognize the cultural and social differences that underlie our everyday…

© Fast Company