
Announcing Beyond Bans: A New Research Program on Facial Recognition

New technologies promise to help make America safer and more secure, or to improve our quality of life. Chief among these are biometric technologies like facial recognition technology (FRT). But academia, the private sector, civil society, and government actors are all flagging ethical concerns.

When considering areas like criminal justice, Congress has tried to protect constituents through bills like The Ethical Use of Artificial Intelligence Act. But despite a commitment to accelerate AI through the National Defense Authorization Act of 2021 (NDAA), many FRT bills are stop-gap measures that target a specific domain. Such narrow bans or moratoriums don’t account for the growth and proliferation of facial recognition in areas ranging from immigration control and airport security to autonomous vehicles and social media, nor do they recognize both the challenges and the opportunities. Further, while the Department of Defense (DoD) has issued five principles for ethical AI, guidance on how to translate these principles into practice is lacking.

The Wilson Center’s Science and Technology Innovation Program (STIP) seeks to understand, translate, and demonstrate the complexity of emerging technologies like FRT, including safety, security, and ethical concerns. We focus on audiences ranging from the public policy community to academic researchers, the press, and the general public. Our Legislative and Executive Artificial Intelligence (AI) Labs are a nonpartisan, off-the-record forum that offers Congressional staff and Federal employees context and a deep dive on AI trends with leading experts from the field.

As part of this curriculum, we include training on bias. But we cannot yet offer actionable, cross-domain guidelines for mitigating ethical concerns, because these resources do not yet exist. We would like to see that change.

STIP is launching a new research program, “Beyond Bans,” to help fill this gap. Beyond Bans will establish and explore the complexity behind a range of AI ethical issues, beginning with facial recognition. Our research in this area shows that this is not a simple issue, and that addressing it will take a unique approach involving multiple stakeholders. One early publication, Policy Options for Facial Recognition and the Need for a Grand Strategy on AI, advocates for considering ethical FRT through a more holistic lens that includes principles, definitions, standards, and concrete guidelines.

We will build on this foundation to conduct new research into an under-explored aspect of FRT: consumer technologies.

Working Definitions

For the purposes of this research, a consumer is defined as a voluntary user of facial recognition technology. Consumers may include people who purchase, for example, cell phones and laptops with facial recognition security safeguards, or owners of autonomous vehicles. Consumers may also include people who choose to use free facial recognition services, for example by downloading a social media app that uses FRT to tag friends in photos, or by electing to walk through automated airport security control. They may use facial recognition technologies in public spaces, or in the privacy of their own homes.

Much of today's public and policy discussions focus on mitigating the harms associated with the use of FRT in non-consumer contexts, like criminal justice. Analyzing such cases can help us understand where more attention is needed and what pitfalls to avoid. But focusing on harms can’t shed light on whether, or how, to leverage FRT to benefit society.

Consumer technologies offer the other side of the coin. Understanding consumer applications can help elucidate the broad benefits of facial recognition, as well as specific use cases for how FRT can improve quality of life. Research on the consumer perspective can also help us understand what leads consumers to consider these technologies ethical. Words like “transparency” and “disclosure” are high-level principles. But what approaches do consumer FRT applications take to ensure that users have the information they need to trust an application? Understanding how to design FRT systems for transparency, disclosure, and trust is necessary for any policy action on ethical FRT, in the consumer context and beyond.

Our research will begin with a survey of consumer FRT applications currently available on the market, to understand both high-level application domains and more specific use cases. This work is important for painting a comprehensive picture of facial recognition, and for elucidating the opportunities of these new technologies in addition to the risks. To narrow the scope of research, we will focus on consumer FRT applications for which:

  1. Reasonable steps are taken to make consumers aware that they are participating in a facial recognition system; and
  2. Consumers can meaningfully “opt out” of participating in the system, and have access to reasonable alternatives if they choose to do so.

We will then take a qualitative interview approach to understanding how these technologies are conceptualized, designed, implemented, and deployed. Taking a lifecycle approach will also help us understand opportunities for ethical interventions at various stages in the development process. There is currently little research or guidance on best practices for developing and deploying these technologies, including which stakeholders should be considered “accountable” at various points. As our research evolves, we hope to encourage and contribute to this discussion.

We want to hear from you. If you are a policymaker, an FRT practitioner, or another researcher with insights to share, please reach out to Dr. Anne Bowser at anne.bowser@wilsoncenter.org. And stay tuned for more information on how to move beyond bans to consider a broader, more holistic approach to ethical AI.

Beyond Bans is supported by private foundations and public companies, including Amazon. The Wilson Center maintains full independence over the research process and results.
