With the release of two new Artificial Intelligence (AI) policies, the White House has provided clear direction to federal agencies on how to embrace AI to improve efficiency, effectiveness, and overall service delivery. However, integrating AI into the fabric of federal operations demands a principled approach. Agencies must safeguard public trust and mitigate risk while leveraging AI’s transformative power.
Below are seven key considerations for federal agencies as they put processes in place for using AI responsibly in software development.
- Put the supporting structure in place. Start by developing the necessary infrastructure and governance to manage risk from the use of AI, especially risks related to information security and privacy. A central tenet of responsible AI adoption is maintaining and fostering public trust, so agencies must prioritize the use of trustworthy AI that is safe, secure, and accountable.
- Manage AI accountability and risk. For "high-impact AI" – defined as AI whose output serves as a principal basis for decisions with significant legal, material, binding, or safety effects – agencies must implement minimum risk management practices. These include pre-deployment testing, comprehensive AI impact assessments, and ongoing monitoring for performance and potential adverse impacts. Agencies must also provide human oversight and intervention where appropriate and offer consistent remedies or appeals for individuals affected by AI-enabled decisions.
- Test and validate the models – regularly. Agencies will need to conduct ongoing testing and validation of AI model performance; a minimal validation sketch follows this list. When procuring AI systems or services, agencies should seek detailed demonstrations and tests in environments that closely reflect the intended real-world operating environment.
- Monitor and measure usage. Establish processes to measure, monitor, and evaluate the use of AI applications as early as possible – ideally, before usage begins. Understanding how the applications and tools are being used is critical to identifying where there may be missed opportunities, areas of risk, or misalignment.
- Refresh your policies. Federal agencies are required to update their internal policies in critical areas to effectively integrate AI. Specifically, agencies must revisit and revise policies related to:
- IT infrastructure, including software tools and code management.
- Data, covering data inventory and access.
- Cybersecurity, including system authorizations and monitoring for AI.
- Privacy, to align with AI usage.
The purpose of these mandatory updates is to ensure alignment with OMB Memorandum M-25-21, Executive Order 14179, Executive Order 13960, and all other relevant legal requirements. This will establish the necessary frameworks for the responsible and effective adoption of AI.
- Build your library of use cases. Investing time in a well-documented, curated set of use cases supports consistent use of AI tools, ensures proper prompting, and helps teams adopt AI solutions faster; a sketch of a structured inventory entry also follows this list. Additionally, federal agencies are mandated to create and publicly share inventories of their AI use cases, including those involving generative AI, as outlined in Executive Order 13960 and further clarified by OMB Memorandum M-24-10.
- Build the right team. The successful adoption of AI hinges on a skilled workforce. Agencies must prioritize recruiting, hiring, training, and retaining technical talent in AI roles. Building AI literacy among the non-practitioners involved in AI is equally essential for effective governance and oversight.
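To make the validation step concrete, here is a minimal sketch of a pre-deployment gate that compares measured model metrics against agreed minimum thresholds. The metric names, threshold values, and `validate_model` helper are illustrative assumptions, not part of any official federal guidance.

```python
# Minimal sketch of a pre-deployment validation gate for an AI model.
# Metric names, thresholds, and helpers are illustrative placeholders,
# not drawn from any official federal guidance.

from dataclasses import dataclass


@dataclass
class ValidationResult:
    metric: str
    value: float
    threshold: float

    @property
    def passed(self) -> bool:
        return self.value >= self.threshold


def validate_model(metrics: dict[str, float],
                   thresholds: dict[str, float]) -> list[ValidationResult]:
    """Compare measured model metrics against agreed minimums."""
    return [
        ValidationResult(name, metrics.get(name, 0.0), minimum)
        for name, minimum in thresholds.items()
    ]


if __name__ == "__main__":
    # Thresholds an agency might agree on before deployment (illustrative).
    thresholds = {"accuracy": 0.95, "fairness_score": 0.90}
    # Metrics measured on a held-out test set mirroring production data.
    measured = {"accuracy": 0.97, "fairness_score": 0.88}

    results = validate_model(measured, thresholds)
    for r in results:
        status = "PASS" if r.passed else "FAIL"
        print(f"{r.metric}: {r.value:.2f} (min {r.threshold:.2f}) -> {status}")

    if not all(r.passed for r in results):
        raise SystemExit("Model failed pre-deployment validation; do not deploy.")
```

The same checks can be rerun on production samples during ongoing monitoring, so drift below a threshold surfaces as a failed gate rather than a surprise.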
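And for the use-case library, here is a minimal sketch of a structured inventory entry that could feed an agency's internal library or public inventory. The field names are illustrative assumptions, not the official inventory schema defined under Executive Order 13960 or OMB Memorandum M-24-10.

```python
# Minimal sketch of a structured AI use-case inventory entry.
# Field names are illustrative, not the official inventory schema.

import json
from dataclasses import dataclass, asdict, field


@dataclass
class AIUseCase:
    name: str
    purpose: str
    ai_technique: str            # e.g. "generative AI", "classification"
    high_impact: bool            # principal basis for consequential decisions?
    approved_prompts: list[str] = field(default_factory=list)


inventory = [
    AIUseCase(
        name="Code review assistant",
        purpose="Suggest fixes for issues found during static analysis",
        ai_technique="generative AI",
        high_impact=False,
        approved_prompts=["Explain this finding", "Propose a secure fix"],
    ),
]

# Serialize for internal sharing or public publication.
print(json.dumps([asdict(u) for u in inventory], indent=2))
```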
How Sonar supports federal AI adoption
Sonar's integrated code quality and code security solutions – SonarQube Server, Cloud, and IDE – help ensure the integrity of the code powering AI initiatives and directly support the White House's directives for increased use of AI. Here are three ways Sonar can accelerate and safeguard AI adoption for software development.
- AI-Generated Code Assurance: Sonar provides AI Code Assurance, a structured process for validating AI-generated code so developers can use AI in their coding with confidence. It applies strong quality checks and thorough analysis to proactively identify problems in AI-created code. Any project containing AI code, whether automatically detected or tagged by a person (see the sketch after this list), goes through the AI Code Assurance process, ensuring that every new piece of code meets the highest standards of quality and security before it moves to production.
- Proactive Issue Detection and Remediation: With features like AI CodeFix, Sonar leverages Large Language Models (LLMs) to suggest fixes for issues identified during analysis. This enables developers to address problems early in the development lifecycle, leading to more robust and secure AI applications. SonarQube IDE further empowers developers with real-time feedback and guidance as they code, whether they write it themselves or accept suggestions from AI assistants.
- Enforcing Coding Standards: Quality Gates in SonarQube allow agencies to define and enforce code quality standards for both AI-generated and developer-written code, preventing the deployment of code that doesn't meet the required criteria. This is directly aligned with the need for rigorous risk management for federal AI systems, particularly those deemed high-impact.
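As a concrete illustration, here is a minimal sketch of a CI step that tags a project as containing AI-generated code and blocks deployment when its Quality Gate is not passing, using SonarQube's Web API. The `api/project_tags/set` and `api/qualitygates/project_status` endpoints are standard SonarQube Web API calls; the tag name, project key, host URL, and token handling are illustrative assumptions.

```python
# Minimal CI sketch: tag a SonarQube project as containing AI-generated
# code, then fail the pipeline if its Quality Gate is not passing.
# The endpoints are standard SonarQube Web API calls; the tag name,
# project key, and token handling are illustrative assumptions.

import os
import sys

import requests

SONAR_URL = os.environ.get("SONAR_HOST_URL", "https://sonarqube.example.gov")
TOKEN = os.environ["SONAR_TOKEN"]          # analysis token from SonarQube
PROJECT_KEY = "my-agency-service"          # hypothetical project key
auth = (TOKEN, "")                         # token as username, blank password

# 1. Tag the project so reviewers can see it contains AI-generated code.
requests.post(
    f"{SONAR_URL}/api/project_tags/set",
    params={"project": PROJECT_KEY, "tags": "ai-generated-code"},
    auth=auth,
).raise_for_status()

# 2. Check the Quality Gate status of the latest analysis.
resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=auth,
)
resp.raise_for_status()
status = resp.json()["projectStatus"]["status"]

print(f"Quality Gate status for {PROJECT_KEY}: {status}")
if status != "OK":
    sys.exit("Quality Gate failed; blocking deployment.")  # enforce the gate
```

Wiring a check like this into the pipeline turns the Quality Gate from a dashboard signal into a hard control, which maps directly onto the minimum risk management practices expected for high-impact federal AI systems.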
Embracing the Future of Federal Services with Responsibility
The integration of AI holds immense promise for the future of federal services. By adhering to the guiding principles outlined in this blog post, federal agencies can navigate this transformative journey responsibly, ensuring that AI is leveraged to enhance public good, improve efficiency, and maintain the trust of the American people. Embracing tools like SonarQube to ensure code quality and security will be a critical component of this responsible adoption, paving the way for an innovative and trustworthy future for AI in the federal government.
Learn more about Federal agency adoption of AI in our detailed guide.