The Office of Management and Budget released a draft memorandum Jan. 13 providing guidance to agencies on how they should approach regulation of industry’s artificial intelligence applications.

The guidance emphasizes that agencies should consider how any regulatory action would potentially hinder expansion of AI use. The draft memo “calls on agencies, when considering regulations or policies related to AI applications, to promote advancements in technology and innovation."

“Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits,” OMB officials wrote. “Where AI entails risk, agencies should consider the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace.”

To push federal agencies to create a fertile environment for AI development in private industry, the draft memo lists four examples of how federal agencies can facilitate AI use without taking any regulatory action:

  1. Access to federal data and models for AI R&D. OMB officials wrote that increasing access to department data is one use case that would assist AI innovation.
  2. Communication to the public. Outside of regulatory action, the draft memo told agencies that transparency with the public about how departments are using AI would have a “significant impact” on public perception of AI. Specifically, OMB officials wrote, RFIs pertaining to AI should include “underlying assumptions and uncertainties regarding expected outcomes, both positive and negative.”
  3. Agency participation in the development and use of voluntary consensus standards and conformity assessment activities. The draft memo tells agencies that they should work with the private sector to develop AI consensus standards, adding that doing so will help agencies “develop expertise in AI and identify practical standards for use in regulation.”
  4. International regulatory cooperation. The draft memo calls on agencies to “engage in dialogues” with the international community that promote consistent regulatory approaches and are consistent with values like civil rights and liberties.

Throughout the draft memorandum, OMB expresses concern about the federal government over-regulating AI to the extent that it hampers innovation and development of the technology. But there will be some cases where agencies will have to issue rules and regulations pertaining to AI applications. To avoid over-burdensome regulation, the draft memo includes 10 principles for use in government:

  1. Public trust in AI. Regulatory and non-regulatory actions need to be reliable, robust and trustworthy.
  2. Public participation. The public should have opportunities to participate in the rule-making process.
  3. Scientific integrity and information quality. The government’s approaches to AI should use scientific and technical information and processes.
  4. Risk assessment and management. Regulatory and non-regulatory approaches should be made after assessing risk and determining how to manage it.
  5. Benefits and costs. Agencies need to consider the full societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. AI can reduce or increase discrimination. Both regulatory and non-regulatory approaches need to consider issues of fairness and nondiscrimination in outcomes.
  8. Disclosure and transparency. Agencies should be transparent in an effort to improve public trust in AI.
  9. Safety and security. Agencies should ensure that they have controls in place to guarantee confidentiality, integrity and availability of data used by AI.
  10. Interagency coordination. OMB officials wrote that agencies need to coordinate with one another about shared experiences and “ensure consistency and predictability of AI-related policies.”

As written in the draft, agencies would have to submit plans to OMB identifying the statutory authority through which they derive AI regulatory authority, report stakeholder feedback on existing regulatory barriers to AI applications and identify “high-priority” AI applications under the purview of the agency’s authority.

OMB will accept comments until March 13.

Andrew Eversden covers all things defense technology for C4ISRNET. He previously reported on federal IT and cybersecurity for Federal Times and Fifth Domain, and worked as a congressional reporting fellow for the Texas Tribune. He was also a Washington intern for the Durango Herald. Andrew is a graduate of American University.
