
Seven AI ethics experts predict 2022’s opportunities and challenges for the field

From developing more human-centric AI to overcoming fragmented approaches to ethical AI development, here’s what experts told us.


Just over one year ago, corporate AI ethics became a regular headline issue for the first time.

In December 2020, Google fired Timnit Gebru, one of its top AI ethics researchers, and in February 2021 it terminated her ethics team co-lead, Margaret Mitchell. Though Google disputes their version of events, the terminations helped push some of the field’s formerly niche debates to the forefront of the tech world.

Every algorithm, whether it dictates the contents of a social media feed or decides whether someone can secure a loan, has real-world impact and the potential to harm as much as it helps. Policymakers, tech companies, and researchers are all grappling with how best to address that fact, which has become impossible to ignore.

And that is, in a nutshell, the field of AI ethics.

To get a sense of how the field will evolve, we checked in with seven AI ethics leaders about the opportunities and challenges facing the field this year.

The question we posed: “What’s the single biggest advancement you foresee in the AI ethics field this year? Conversely, what’s the most significant challenge?”

These interviews have been edited for length and clarity.

Deborah Raji, fellow at Mozilla

I think for a long time, policymakers have relied on narratives from corporations, research papers, and the media, and projected a very idealistic image of how well AI systems are working. But as these systems make their way into real-world deployments, we’re increasingly aware of the fact that they fail in really significant ways, and that those failures can result in a lot of real harm to those who are impacted.

There’s been a lot of discussion on accountability for moderation systems specifically, but we’re going to hear a conversation about the need for auditing and accountability more broadly. In particular, auditing by independent third-party actors: not just regulators, consultants, and internal teams, but some level of external scrutiny to assess these systems and challenge the narratives being told by the companies building them.

In terms of the actual obstacles to seeing that happen, I think there are a lot of incongruities in how algorithmic auditors currently work.

There are all these different actors that want to hold these systems accountable but are currently working in isolation from each other, not very well coordinated. You have internal auditors within companies, consultancies, [and] startups that are coming up with tools. Journalists, law firms, civil society: there are just so many different institutions and stakeholders that identify as algorithmic auditors that I think there will need to be a lot more cohesion. That might involve some kind of federal policy or agency to coordinate and certify all of these different actors. It could be the emergence of some kind of collective group in which individuals, organizations, and institutions pool their resources, or develop tooling and methodologies that they can share in common.

Rumman Chowdhury, head of AI ethics at Twitter

The single biggest advancement I’m looking forward to is improvement in algorithmic choice for users. That means offering individuals increased agency, control, understanding, and transparency over the computational systems that govern their experience. At Twitter, we’re exploring a few different ways to do this. We want to continue implementing product changes that allow people to meaningfully customize and tailor what they see. We also want to make these features more accessible by helping people understand how our platform works.

Of course, when it comes to implementation, algorithmic choice presents complex challenges. To start, there isn’t really a clear definition of what it means. It’s also not enough to provide choices just for choice’s sake, nor to provide binary choices like “on” or “off.” Still, the concept has huge potential to positively reshape the existing power dynamic between platforms and the public.

Abhishek Gupta, founder of the Montreal AI Ethics Institute

What the field of AI ethics as a whole will strive toward this year is more formalization around the process of conducting audits for bias and other AI ethics issues. In particular, this will come on the heels of the many draft and close-to-final regulations coming down the pike, first and foremost in Europe and North America, but also in countries like India, Vietnam, and China.

The formalization will also help address something else that has plagued the field: a lack of standardization in auditing guidelines and a lack of consistency in their application. It will also make audit results more meaningful to the end users and regulators of these systems, allowing for comparisons and more informed choice in picking solutions that align with one’s values. It will be a natural pillar supporting the often-cited principle of transparency in AI ethics.

A foreseeable challenge with this push toward formalization is the need for reconciliation across different proposed regulations, especially for organizations that operate across jurisdictions.

There is a related challenge of resourcing, in terms of both expertise and financial resources. We will need interdisciplinary experts with backgrounds in both technical and social science elements to operationalize these requirements effectively, and that expertise remains rare at the moment.

Christine Custis, head of fairness, transparency, accountability, and safety at the Partnership on AI (PAI)

I believe [it will be] the interest in and time spent on all things participatory: making research questions, research design, and system development more inclusive, and broadening the scope of how we ask these questions, so that we’re not just thinking about a system’s developers, users, or owners, but also about impacted non-users and other affected groups.

In general, the picture I would paint is thinking of technology not just in terms of what it could do for us, but thinking about the bad scenarios too: How could it be misused? How could it be misinterpreted? And how could it impact marginalized communities that maybe didn’t have anything to do with building it?

So you’re not just thinking, ‘Oh, this is super great.’ You actually think about how this might impact [others]. Take the robots that could deliver a meal: What about folks who are visually impaired? How will it affect their ability to traverse the sidewalk?

[For the other part], I’m going to take the easy out and say it’s the same. Participation can be subjective, and that makes things very difficult in technology when we consider human-computer interaction, or the social aspects of technology. That work, academically speaking, is still in progress, and since there’s a lot we still don’t understand about those social interactions with technology, participatory design will be a tough thing.

R. David Edelman, director of the TENS project at MIT

This is the year that AI ethics [will] make the leap into concrete AI policy.


Around the world, there’s this growing demand that we move beyond intractable questions and begin using law and policy to tame AI’s wildest imperfections. Take the trolley problem for autonomous vehicles: Yes, it raises hard, deep questions about choice in an era of machine decision-making. But we can’t wait for philosophical consensus to develop a basic driver’s test for self-driving cars—not when this technology could save some of the ~40,000 Americans dying every year on our roads. We know that AI—with the right safeguards—can do a lot of good in transportation, finance, medicine, education, and beyond. Defining those safeguards is the work of public policymakers; designing systems to respect boundaries is the work of ethical technologists.

We need more of both, working together, to ensure that longstanding principles of law apply to the use of this technology. AI might surround us, but it’s not yet ungovernable; we can still shape it to work for us, not against us.

Our greatest risk? Spending more time admiring the problem, paralyzed by how spectacularly AI can fail, instead of channeling that energy into building the technological and legal tools we need to protect what we hold dear. Perfect AI isn’t coming any time soon; it’s up to us to get specific about where and when we’re willing to tolerate it, imperfections and all.

Seth Dobrin, chief AI officer at IBM

We’re going to see a huge focus on human-centered AI, which starts with the person who is going to be impacted by, or is going to be using, the AI in some fashion.

It’s about being able to look all the way back from the output of the AI to the data, to ensure that the data maintains privacy and was collected with appropriate consent, and to see how the model was trained.

There are a lot of niche players that help deliver what I just described; there’s not a single company that can deliver all of that in an integrated fashion today. We do a pretty good job of getting feedback on how models are being used and implemented, but that’s kind of post hoc. The model is already built, it’s already been deployed, and then it’s, “Oh yeah, we need to worry about trust, explainability, and fairness.”

What we need to start doing, and what I think you’re going to start seeing over the course of the next 12 months, is starting with the human and thinking about this upfront. If we look back 10 years, that’s when user experience, customer research, and user research really started developing; we don’t really do that in AI today.

When it comes to the biggest challenge, regulation is kind of a double-edged sword. When we look at AI, I believe that well-thought-out precision regulation is required, meaning not just generic regulation of AI, but regulation of specific use cases or outcomes. For instance, the EU has protected uses, generally for anything that affects someone’s health, wealth, or livelihood. The biggest challenge for companies is going to be knowing, for those protected uses, whether they’re compliant with the requirements, and how to measure that over time.

Abeba Birhane

What I hope to see in the AI ethics field in the coming years is some attention to and incorporation of non-Western, especially African, perspectives into inquiries of AI ethics.

Similarly, there exist multiple burning challenges and obstacles depending on your focus and where you’re coming from. These range from the normalization of surveillance, to bogus and pseudoscientific models being sold as “AI,” to the extreme overhype of AI’s capabilities, to blind trust in AI, to a lack of open access to datasets and models (often due to proprietary rights), to a lack of protections and safeguards for whistleblowers, to a lack of sufficient AI regulation and standards, to the high concentration of power within Big Tech corporations and elite universities.

Having said all that, if I were forced to pick the single most significant challenge for AI ethics, I would say that the booming industry of affect recognition presents a danger. So much research and tooling developed within computer vision ends up making claims about things such as “emotion detection,” “gender prediction,” and “deception detection.” To be sure, these are old and mostly debunked pseudosciences, but they are now being resuscitated with the aid of machine learning tools. As this is a financially thriving field, it will require work from all angles to rectify the devastating impact such tools have on real people.
