Outlook on AI and Civil Rights Law and Policy

Artificial Intelligence: How Data and Algorithms Impact Civil Rights
Artificial intelligence aims to replace or augment human judgment in various domains; however, civil rights advocates have raised alarms about AI’s potential to perpetuate discrimination and bias. Absent appropriate alignment mechanisms, software that learns by example can replicate existing inequities and produce sexist, racist and ableist outcomes. There are verified instances of AI being deemed unsuitable for use in hiring, including one notable AI hiring program that inadvertently discriminated against women. The program was trained on historical resume data from previously successful, predominantly male applicants and, learning from that male-dominated dataset, began penalizing any use of the word “women” in a resume. AI tools have also misidentified non-white faces at markedly higher rates, and one facial recognition program notoriously and falsely matched 28 members of Congress to mugshots of people who had been arrested.

Even the government is not immune from using biased algorithms in its decision making. On May 16, the Internal Revenue Service (IRS) acknowledged that Black American taxpayers were three to five times more likely to be audited, despite no evidence that Black Americans commit tax fraud at a higher rate. The audit-selection algorithm disproportionately targeted low-income Americans claiming benefits such as the Earned Income Tax Credit. In a letter to the Senate Finance Committee, the IRS committed to identifying the source of the racial disparity in its program and implementing the necessary changes before the next tax filing season.

Without proper oversight and alignment, AI biases increase the likelihood that people of color, women and individuals with disabilities will be refused access to housing, financial services or employment opportunities. As described in the 2022 NIST Special Publication Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, bias can be introduced during an AI system’s design, carried into AI programs through biased datasets, and arise when systems apply faulty assumptions. Bias in AI must be actively addressed to avoid these outcomes.

The Biden Administration’s Efforts to Secure Civil Rights in AI
The AI Bill of Rights is the White House’s roadmap to ensure that the spread of innovation does not exacerbate existing social, economic and racial inequities. The guidelines focus on five pillars that the Biden Administration calls “common sense protections to which all Americans should be entitled”:

  1. Safe and Effective Systems
  2. Algorithmic Discrimination Protections
  3. Data Privacy
  4. Notice and Explanation
  5. Human Alternatives, Consideration and Fallback

The AI framework and its technical companion provide a series of concrete steps companies can take to address bias throughout the AI development process. For example, the White House recommends that companies conduct proactive equity assessments in the design phase of their technology and use robust data samples that have been reviewed for bias in light of the data’s historical and social context. The AI Bill of Rights can also serve as a guide for federal agencies deploying AI programs, several of which have received dedicated AI budgets under the President’s FY 2024 request. While the AI Bill of Rights serves as a helpful guide for addressing bias, it is not legally binding.
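As an illustration of what that kind of pre-deployment data review might look like in practice, the sketch below compares the demographic composition of a hypothetical resume dataset against an assumed applicant population and flags underrepresented groups. This is a minimal sketch under stated assumptions; the field names, reference shares and tolerance are invented for the example and are not drawn from the White House framework.

```python
from collections import Counter

# Hypothetical training records for a resume-screening model. The field
# names ("group", "hired") and the records themselves are illustrative only.
training_records = [
    {"group": "men", "hired": True},
    {"group": "men", "hired": True},
    {"group": "men", "hired": False},
    {"group": "men", "hired": True},
    {"group": "women", "hired": False},
]

# Assumed shares of each group in the relevant applicant population.
reference_shares = {"men": 0.5, "women": 0.5}

def representation_report(records, reference, tolerance=0.10):
    """Flag groups whose share of the training data differs from the
    reference population by more than `tolerance`."""
    counts = Counter(record["group"] for record in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            flags[group] = {"observed_share": round(observed, 2),
                            "expected_share": expected}
    return flags

print(representation_report(training_records, reference_shares))
# {'men': {'observed_share': 0.8, 'expected_share': 0.5},
#  'women': {'observed_share': 0.2, 'expected_share': 0.5}}
```

A real equity assessment would go further, examining outcome labels, proxy variables and the historical context in which the data was collected, but even a check this simple can surface the kind of skew that produced the biased hiring tool described above.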

Without binding law, federal agencies are relying on their existing authorities to challenge discriminatory AI outcomes. This April, the Consumer Financial Protection Bureau, Federal Trade Commission, Equal Employment Opportunity Commission (EEOC) and Department of Justice (DOJ) issued a joint statement pledging to work collaboratively to enforce existing anti-discrimination laws as they apply to bias in AI systems. Other agencies are crafting initiatives to provide federal AI accountability within their enforcement jurisdictions:

  • EEOC released technical guidance on the adverse impact of AI programs and advised employers on how Title VII of the Civil Rights Act applies to the use of automated systems in employment decisions (a simple adverse-impact calculation is sketched after this list).
  • The Department of Education has announced its intention to provide recommendations for AI use and development in classrooms. 
  • The Department of Health and Human Services is developing a rule to prohibit discrimination in decisions related to access to care.
  • The Department of Housing and Urban Development and DOJ have pledged to take legal action against tenant-screening algorithms that discriminate in violation of the Fair Housing Act.
  • The Department of Commerce’s National Telecommunications and Information Administration launched a request for public input on how to regulate and boost AI accountability in line with the AI Bill of Rights Blueprint.
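The adverse impact referenced in the EEOC guidance is commonly assessed by comparing the selection rate an automated tool produces for each group against the rate for the most-favored group; under the long-standing “four-fifths rule,” a ratio below 0.8 is treated as an indicator of potential adverse impact. The sketch below is a minimal illustration with made-up numbers, not the EEOC’s prescribed methodology.

```python
def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the most-favored group's rate.

    `outcomes` maps a group name to (number selected, number of applicants).
    Ratios below 0.8 are conventionally treated as potential adverse impact
    (the "four-fifths rule").
    """
    rates = {group: selected / applicants
             for group, (selected, applicants) in outcomes.items()}
    benchmark = max(rates.values())
    return {group: round(rate / benchmark, 2) for group, rate in rates.items()}

# Hypothetical results from an automated screening tool.
ratios = adverse_impact_ratios({
    "group_a": (60, 100),  # 60% of group_a applicants advanced
    "group_b": (30, 100),  # 30% of group_b applicants advanced
})
print(ratios)                                     # {'group_a': 1.0, 'group_b': 0.5}
print([g for g, r in ratios.items() if r < 0.8])  # ['group_b']
```

Running this kind of selection-rate analysis on a tool’s historical outputs before deployment, and periodically afterward, is one concrete way a company can operationalize the agencies’ guidance.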

Looking Forward to AI Regulation and Civil Rights Protections
On May 4, the White House released an updated fact sheet outlining new actions to promote the AI Bill of Rights Blueprint. First, the National Science Foundation announced $140 million in funding to launch seven new National AI Research Institutes. The Institutes work with a network of organizations, from higher education to industry, to pursue AI developments that are “ethical, trustworthy, responsible, and serve the public good.” Confronting the potential for discrimination during the research and development phase can help ensure that AI programs benefit the public when launched.

Second, the Administration announced an independent commitment from leading AI developers to participate in a public evaluation of AI systems in August at DEF CON 31. The evaluations will provide much-needed transparency into the workings of AI models. In turn, developers can better understand how and why their programs might have discriminatory outputs and proactively address them. The goal is to explore how the AI models align with the principles of the AI Bill of Rights Blueprint.

Third, the Office of Management and Budget (OMB) will release draft policy guidance on the federal government’s use of AI systems for public comment this summer. The guidance will set the standards the federal government will follow to use AI in a safe and ethical manner. Public comments will help OMB develop an actionable policy for the equitable use of AI by U.S. agencies. Individual companies can also adopt the guidance to prevent their AI programs from infringing on civil rights.

Civil rights advocates have highlighted how AI systems can inadvertently make discriminatory decisions on behalf of companies and even the government. The White House has taken initial steps to address these concerns through its guidance documents. Congress is also beginning to review potential legislation that would address discrimination caused by AI. Companies using AI should consider both the underlying data of their AI programs and how those programs use that data, so that they can mitigate discriminatory outcomes and protect civil rights. Given the activity by the White House and federal agencies, more executive action addressing the intersection of civil rights and AI may be forthcoming.

Pillsbury is closely monitoring AI-related legislative and regulatory efforts. Our AI team helps startups, global corporations and government agencies navigate the landscape impacted by emerging developments in AI. For insights on these rapidly evolving topics, please visit our Artificial Intelligence practice page.
