Although AI has been around in some form for over 70 years, recent technological advances have produced the AI we are familiar with today. AI, in the simplest terms, refers to computers and machines that replicate the decision-making and problem-solving abilities of the human mind. This is more than figuring out a math equation like a calculator: these machines can make determinations based on the input provided. AI can not only understand and respond to language, but also learn to create visuals of molecules and imagery that never existed, and even learn the grammar of software code. Examples of AI today include Siri, virtual customer service agents, search engine advertisements, ChatGPT, and even automated stock trading platforms.
AI can perform these human-like capabilities through Machine Learning (ML) and deep learning. ML uses structured data, labeled and organized by humans, to learn from and make decisions about data sets. The structured data is placed in a hierarchy to help the machine learn and make decisions. Deep learning, on the other hand, can take in unstructured data (e.g., text, images) and determine the hierarchy of the information on its own through learning algorithms, requiring far less human intervention to operate. AI's power to learn and process data is precisely why Government agencies have recently taken an interest in the technology and how it can be leveraged to protect society.
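The distinction above can be made concrete with a minimal sketch of supervised machine learning on structured data. The data, labels, and 1-nearest-neighbor method here are all hypothetical illustrations: humans supply labeled rows, and the model classifies new rows by similarity. Deep learning differs in that it would learn its own features from raw, unstructured input rather than relying on human-structured rows.

```python
# Minimal, hypothetical sketch of ML on structured, human-labeled data.
# A 1-nearest-neighbor classifier: new inputs get the label of the most
# similar labeled example.

def nearest_neighbor(train, query):
    """Return the label of the training row closest to `query`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda row: dist(row[0], query))
    return label

# Structured, human-labeled data: (feature vector, label).
# Features and labels are invented for illustration only.
train = [
    ((1.0, 0.9), "benign"),
    ((0.9, 1.1), "benign"),
    ((5.0, 4.8), "suspicious"),
    ((5.2, 5.1), "suspicious"),
]

print(nearest_neighbor(train, (1.1, 1.0)))  # near the "benign" cluster
print(nearest_neighbor(train, (4.9, 5.0)))  # near the "suspicious" cluster
```

The key point is that a human decided what the features and labels are; a deep learning system would instead infer useful features from raw inputs such as pixels or text.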
Executive Order 13960: Promoting Trustworthy AI in the Federal Government
Anyone who has worked in or with the Government knows that when new technology becomes available, it cannot be instantly incorporated into the day-to-day operations of an Agency. Government information is sensitive, and its unintended release through new technologies can severely impact the nation's security. Because of this security factor, new technology goes through many reviews and rounds of governance development before being available for internal use. AI is no different.
Executive Order 13960 recognizes AI as a powerful tool that benefits Government initiatives. Examples include improving agency operations, processes, and procedures; meeting strategic goals; reducing costs; enhancing oversight of the use of taxpayer funds; increasing efficiency and mission effectiveness; and improving the quality of services (Federal Register, 2023). A common theme in this executive order is the responsible use of AI by the Government and fostering public trust that AI is being used consistently with all applicable laws, including those protecting privacy, civil rights, and civil liberties.
Since the release of Executive Order 13960, the Government has met with major AI development firms to set standards for responsible AI development, worked with Agencies to ensure that algorithmic bias does not enter AI systems, and met with top AI experts on managing the risks of AI technologies. Due to the potential dangers of AI (discrimination, unethical usage, etc.), the Government is taking steps to ensure that AI usage within Agencies continues to serve the missions those Agencies were established to carry out.
Incorporating AI into DHS
Because of how powerful AI can be, different Government agencies are integrating AI technologies into their processes to improve the outcomes of their missions. The Department of Homeland Security (DHS) is one of those agencies starting to incorporate AI into their mission sets. DHS’ mission is the commitment to relentless resilience, striving to prevent future attacks against the United States and our allies, responding decisively to natural and man-made disasters, and advancing American prosperity and economic security long into the future (dhs.gov, 2023). AI technologies assist personnel in combatting fentanyl trafficking, strengthening supply chain security, countering child sexual exploitation, and protecting critical infrastructure. Further, AI will create new means for carrying out the DHS mission while protecting the nation’s interests.
But before AI can be fully operational, policies must be implemented to ensure bias and discrimination are left out of the equation. One area in which AI can assist, but also be potentially harmful, is facial recognition and face capture technologies. As discussed earlier, ML involves a human providing data sets that AI can learn from. If these data sets are corrupted with bias or discriminatory intent, the AI will perform accordingly.
DHS Policies and Actions to Promote Responsible AI Use
Facial recognition (FR) and face capture (FC) technologies are used to identify and verify an individual's identity. In the case of DHS, this technology is used to help identify possible terrorists or other people who threaten the United States. AI's deep learning abilities allow DHS to identify someone and pick them out in the background of a video or picture. Providing specific data sets to detect certain people could raise questions about biases in the technology. Because of this potential ethical issue, DHS has set forth Directive 026-11, which requires that all uses of FR and FC technologies be thoroughly tested to ensure no unintended bias or disparate impact per national standards (dhs.gov, 2023). Further, DHS will not use FR or FC technologies to profile, target, or discriminate against individuals solely for exercising their Constitutional rights, or to enable systemic, indiscriminate, or wide-scale monitoring, surveillance, or tracking. While not a guarantee, this directive will go a long way toward institutionalizing the responsible use of AI in FR and FC technologies.
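One common way bias testing of this kind is operationalized is a "disparate impact" check: comparing selection (match) rates between demographic groups. The sketch below is a hypothetical illustration of that metric using the conventional four-fifths rule; the group names and match decisions are invented, and Directive 026-11 does not prescribe this specific test.

```python
# Hypothetical sketch of a disparate impact check (the "four-fifths rule").
# All decision data here is invented for illustration.

def selection_rate(decisions):
    """Fraction of positive (match) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher.

    1.0 means parity; values below 0.8 are a conventional red flag
    for disparate impact.
    """
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical match decisions (1 = flagged as a match) for two groups
group_a = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% selection rate
group_b = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]  # 20% selection rate

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 threshold
```

In practice, a result like this would trigger further investigation of the model and its training data before deployment.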
To further support Directive 026-11, Policy Statement 139-06 was created to ensure that DHS systems, programs, and AI activities conform to the requirements of Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. Under this policy, DHS will only acquire and use AI in a manner consistent with the Constitution and all other applicable laws and policies, including those addressing privacy, civil rights, and civil liberties, and only where AI adoption improves mission effectiveness. In addition, DHS will continually strive to minimize inappropriate bias by utilizing standards required by law and policy.
In tandem with the AI policies DHS abides by, there are several actions DHS is taking to promote AI safety and security.
AI Safety and Security Advisory Board (AISSB)—The AISSB will comprise private-sector and Government AI experts, who will provide the Secretary and the infrastructure community with information and recommendations for improving security, resilience, and incident response related to the use of AI in critical infrastructure.
AI Safety and Security Pilot Program—DHS will collaborate with the Department of Defense to develop an AI capability to fix vulnerabilities in critical US government networks. The pilot will incorporate the Cybersecurity and Infrastructure Security Agency's (CISA) cybersecurity best practices and vulnerability management process to increase the cybersecurity of AI systems. Focus areas include responsible mission use; assurance of AI systems, including hardware and software evaluation; interagency and public collaboration; and developing a next-generation workforce of AI talent.
Researching Adversarial Use—The only way to stay ahead of adversaries is to know the AI tools they use to cause harm. DHS technology experts research, test, and deploy technologies to protect against various AI-based threats, including biological and chemical threats to and from AI systems. DHS will also use its experience in technology evaluation and CISA’s cybersecurity expertise to run real-world tests and monitor high-risk AI systems used in critical infrastructure.
Combatting AI Intellectual Property Theft—The Department will develop guidance and other resources to help private-sector actors mitigate the risks of AI-related IP theft. DHS will also help update the IP Enforcement Coordinator Joint Strategic Plan on IP Enforcement to address AI-related issues (dhs.gov, 2023).
Hiring and Retaining AI Talent—DHS will need top talent for its AI initiatives, and candidates could come from anywhere in the world. As such, DHS is prioritizing the adjudication of visa petitions for applicants with skills in AI and other emerging technologies, and a related pilot program provides work authorization for the principal parole applicant and any dependents.
The Future of Responsible AI and DHS
DHS is making efforts to ensure that AI is used responsibly in protecting the nation. A combination of internal and external policies outlining the appropriate use of AI will help mitigate the risks of using AI for mission purposes. DHS's incorporation of these policies into its day-to-day operations will set a precedent for ethical AI use throughout the government. Industry will be affected as well: contractors providing AI services to DHS customers must abide by these policies and meet or exceed the responsible-use requirements they set forth. Overall, DHS's actions to promote responsible use of AI will benefit the security of our nation by improving mission capabilities while keeping AI usage accountable.