In March, the EU approved the world’s first major set of regulations to govern the use of artificial intelligence. The groundbreaking legislation is intended to ensure that the development, deployment, and use of AI are safe and secure. Lawmakers around the world have described it as a major milestone for AI governance, predicting that other countries may follow in the EU’s footsteps.

Shortly after, the UN passed a standalone AI resolution, led by the U.S., that promotes the safe use of AI in accordance with international humanitarian law. While the U.S. is at the forefront of the AI industry, the federal government has been cautious about passing sweeping legislation to regulate it domestically.

That is not stopping the federal government from implementing AI. In fact, it is prioritizing AI adoption at a rapid pace. In October 2023, the White House released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to accelerate AI adoption in the U.S. while seeking to minimize harm. The order also directs federal agencies to explore multiple avenues of AI governance and to develop standards for adoption.

While this executive order and many other federal initiatives are on the right track, many questions still linger around AI governance. With the EU leading the way on regulatory AI legislation, the federal government needs to do everything it can to ensure the safe adoption of AI and to be a driving force for international standards. That starts with following, and setting, the standards for AI risk management.

Build on existing frameworks

While the government considers new mandates around AI, other resources are already available as a starting point for safe use, including the National Institute of Standards and Technology’s AI Risk Management Framework. The framework can be used to improve the trustworthiness of AI systems, but it is intended only for voluntary use, so it is up to developers and manufacturers to take the extra step for security.

Building on existing frameworks also means learning from similar major technology shifts, such as the government’s move to the cloud. That transition took a long time to mature, with over ten years passing between the first cloud migration projects and the establishment of the Federal Risk and Authorization Management Program, or FedRAMP. We can use what we learned from the last decade of cloud adoption to accelerate how we bring AI to the government and the people.

This can be accomplished by building on existing governance models, such as the Federal Information Security Modernization Act, or FISMA, and FedRAMP Authorities to Operate, or ATOs, and applying those concepts to AI adoption. These established controls will help ensure AI systems are safe and secure without having to start from scratch.

Collaborate for success

Beyond looking to similar frameworks for guidance, government and technology leaders alike should seek out trustworthy external resources to lead the way. Industry-government partnerships will be crucial to reaching our common goals.

As the government continues to adopt AI, federal leaders should look to the private sector for help establishing structure and control and for driving adoption at scale. A few organizations are leading the way with established centers of excellence and AI advisory committees that offer free, informative counsel on the safe adoption of AI.

In turn, the private sector should consider public resources such as NIST’s U.S. AI Safety Institute Consortium, or AISIC. Bringing together the nation’s leading AI stakeholders, AISIC is developing guidelines and standards for AI measurement and policy to mitigate the risks of the new technology. Many of the standards in development relate to the priorities established by the UN’s AI resolution, including sustainable development goals and measuring the trustworthiness of AI models.

As the U.S. awaits what seems to be inevitable further AI regulation, there is no time to waste in doing what we can to ensure AI’s safe adoption. Prioritize security within your own AI adoption processes and seek out the best external resources to guide the journey. It will be a long time before we all truly get our arms around AI, but we can do our part to accelerate the process in the most ethical and secure way possible.

Gaurav (GP) Pal is founder and CEO of stackArmor, a Tysons, Virginia-based supplier of computer security services.
