US federal agencies must appoint chief AI officers, according to new guidelines

What just happened? Vice President Kamala Harris has announced a new set of requirements designed to ensure that US federal agencies' use of AI remains safe and non-discriminatory. These include the appointment of a chief AI officer to oversee each agency's use of the technology. Additionally, travelers will be allowed to refuse facial recognition scans at airport security screenings without fear of consequences.

The requirements, which come into effect on December 1, state that in addition to appointing an AI overseer, agencies must establish AI governance boards. Each agency is also required to submit a report to the Office of Management and Budget (OMB) and publish it online, giving a complete list of the AI systems it uses, its reasons for using them, the associated risks, and how it intends to mitigate those risks.

A senior Biden administration official said that in some agencies the chief AI officer will be a political appointee, while in others the role will not be a political appointment.

Agencies have already started hiring for the position; the Department of Justice announced Jonathan Mayer as its first chief AI officer in February. OMB Director Shalanda Young said the government plans to hire 100 AI professionals by the summer.

“We have directed all federal agencies to designate a chief AI officer with the experience, expertise, and authority to oversee all AI technologies used by that agency, and this is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government, who are specifically tasked with overseeing AI adoption and use,” Harris told reporters.

The new requirements build on the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence that President Biden signed last October. It mandated, among other things, that clear guidance be provided to landlords, federal benefits programs, and federal contractors to address the ways AI can deepen discrimination and bias and facilitate abuses in justice, healthcare, and housing.

Harris gave an example of how the new requirements would work in practice: if the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, it would need to show the AI system does not produce “racially biased diagnoses.”

Other examples include travelers being able to opt out of TSA facial recognition without being delayed or losing their place in line. Furthermore, human oversight will be required when AI is used for critical diagnosis decisions in federal healthcare systems, and when the technology is used to detect fraud in government services.

“If an agency cannot apply these safeguards, the agency must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations,” the OMB fact sheet reads.

The guidance adds that any government-owned AI models, code, and data should be released to the public unless they pose a risk to government operations.

 
