Security and Privacy Takeaways From the AI Executive Order
As the use of Artificial Intelligence (AI) becomes more widespread, concerns have grown about the security and privacy risks that accompany it. To address those concerns, President Biden issued an Executive Order establishing new standards for AI security and privacy practices. The Executive Order also sets new standards focused on civil rights, consumer and worker rights, and competition.
The AI Executive Order includes the following security and privacy actions:
- Developers of the most powerful AI systems must share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model and must share the results of all red-team safety tests.
- The National Institute of Standards and Technology (NIST) will set rigorous standards for extensive red-team testing to ensure AI systems are safe, secure, and trustworthy before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
- Develop strong new standards for biological synthesis screening to protect against the risks of using AI to engineer dangerous biological materials. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
- Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge.
- Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.
Here are some takeaways from the security-related actions in the Executive Order and how they could impact future AI projects and businesses.
- By promoting collaborative development of an AI-talent workforce, enlisting the expertise of both public- and private-sector professionals through workforce development initiatives, and encouraging responsible innovation, testing, deployment, and continuous monitoring of AI technologies, the Executive Order targets safer, more secure, and more ethical use of AI technology.
- The Order places significant focus on risk management initiatives that build on previous AI policies, including the Blueprint for an AI Bill of Rights, the AI Risk Management Framework (NIST AI 100-1), and Executive Order 14091 of February 16, 2023 (further advancing racial equity and support for underserved communities through the federal government). Its directive to develop a companion resource to NIST AI 100-1 introduces additional layers of oversight for identifying and mitigating risks in the development and deployment of AI technology.
- The Order drives global initiatives to advance responsible AI development, leading efforts to manage risks abroad by engaging key international allies. This promotes standardized development and implementation of AI technology in the United States and abroad, and increases understanding and awareness of the Executive Order's root purpose: promoting safe and secure AI.
- Regulatory and third-party compliance professionals will need to be well versed in the anticipated government standards for AI to ensure that all AI used by their organizations remains in compliance with the applicable requirements.
- Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques, including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.
- Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.
- Evaluate how agencies collect and use commercially available information, including information they procure from data brokers, and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.
- Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data.
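The Order does not name specific privacy-preserving techniques, but differential privacy is one widely cited example of a method that lets systems learn from data while limiting what can be inferred about any individual record. The sketch below is illustrative only (the function name and parameters are our own, not drawn from the Order): it adds calibrated Laplace noise to a simple count query so that adding or removing one person's record changes the answer only negligibly.

```python
import math
import random

def dp_count(values, threshold, epsilon):
    """Return a differentially private count of values above a threshold.

    The true count has sensitivity 1 (adding or removing one record
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    yields epsilon-differential privacy for this query.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, 1/epsilon) noise via the inverse-CDF transform.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise
```

Smaller `epsilon` values mean more noise and stronger privacy; agencies evaluating such techniques would weigh that privacy/accuracy trade-off against the guidelines the Order directs them to develop.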
Here are some thoughts on how the Executive Order could impact the privacy efforts for organizations involved in AI.
- With this EO, the Biden Administration approaches AI not with fear but with hope that with the appropriate controls, AI can be leveraged to benefit and improve the privacy of Americans.
- A key focus is where there is an imbalance of power between the users of AI and the parties impacted by the decisions. The employment context is specifically identified as a context that needs to have tight controls and permissions built in.
- Transparency is a cornerstone, with requirements for agencies to post annual AI Use Case Inventories and to notify negatively impacted individuals.
- Monitoring and testing are also critical, with impact assessments, testing, and independent reviews required.
- As with other initiatives in the privacy space, the EO looks to strike a balance of allowing agencies to leverage AI while also ensuring there are controls in place to reduce the likelihood of misuse.
The main issue between industry and regulators is how to balance managing real risk against slowing down an emerging industry. This is a classic balance that regulators and the federal government must strike so that they do not stifle innovation. As written, much of the current framework rests on voluntary commitments from 15 major AI companies in the United States. The Executive Order does provide transparency around data privacy concerns, including what data is being used and how it is being protected. Overall, it is a win for the industry: most industry insiders believe there will be minimal short-term impact on their businesses and the technology being developed. Going forward, as more regulation is layered on, it will be increasingly important to guard against industry players using these efforts to create an environment of regulatory capture, which would reduce competition and slow innovation.
CompliancePoint offers a full suite of privacy and security services designed to help organizations protect their data and comply with all applicable laws and regulations. To learn more about how we can help, reach out to us at firstname.lastname@example.org.
Finding a credible expert with the appropriate background, expertise, and credentials can be difficult. CompliancePoint is here to help.