The Department of Defense Employs AI for Employee Screening, but Only After It Passes 'the Mother Test'
The Defense Counterintelligence and Security Agency, which grants security clearances to millions of American workers, uses AI to speed up its work. The agency's director prohibits the use of 'black boxes'.
Before any of the more than 13,000 Pentagon staff he oversees access information about an American citizen, Defense Counterintelligence and Security Agency (DCSA) Director David Cattler advises them to ask themselves: would my mother think it reasonable that the government can do this?
Cattler calls this the "mother test," a straightforward common-sense check on DCSA's operations. The sprawling agency, responsible for granting or denying security clearances to millions of American workers, uses AI to speed up that work while holding to this principle.
DCSA handles more than 95% of federal government employees' security clearances, which requires it to complete a vast number of investigations each year. In 2024, the agency began seeking AI tools to analyze and organize the enormous volume of private information it can access.
These aren't advanced generative AI models such as ChatGPT, Bard, or Claude. Instead, the agency relies on technology that big tech companies have used for years: systems that reveal their inner workings more clearly than most large language models do. Cattler identified the most promising application of these tools as prioritizing threats the agency already knows about.
These tools hold significant potential, but their misuse could put data security at risk and introduce bias into government systems. Nonetheless, Cattler remained hopeful that some of AI's less flashy functions could transform the agency, on the condition that they are not 'black boxes'.
"We must comprehend why it is credible and how it functions," Cattler explained. "We need to demonstrate that these tools deliver the claimed results objectively, in a highly compliant and consistent manner, when employed for the purposes I've described."
Many may not even think of these tools as AI. Cattler was enthusiastic about the idea of building a real-time heatmap of DCSA-secured facilities, with risks plotted across it and updated whenever other government agencies receive information about potential threats. Such a tool, he believed, could help DCSA "determine where to position the metaphorical firetrucks." It wouldn't reveal new information; it would simply present existing information in a more tactical way.
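As a rough illustration, the sketch below folds hypothetical threat reports into per-facility risk totals and ranks the results, the kind of ordering that tells you where the firetrucks go. The facility names, coordinates, report format, and severity scale are all invented for illustration, not drawn from any DCSA system.

```python
# A toy version of the heatmap idea: fold incoming threat reports into
# per-facility risk totals, then rank facilities for attention.
# Facility names, coordinates, and severities are invented examples.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ThreatReport:
    facility: str    # which secured site the report concerns
    severity: float  # 0.0 (noise) to 1.0 (urgent), an assumed scale

FACILITY_COORDS = {  # plotted positions for the heatmap
    "Site A": (38.87, -77.02),
    "Site B": (39.28, -76.61),
}

def rank_facilities(reports: list[ThreatReport]) -> list[tuple[str, float]]:
    """Sum severity per facility and sort hottest-first."""
    heat: dict[str, float] = defaultdict(float)
    for report in reports:
        heat[report.facility] += report.severity
    return sorted(heat.items(), key=lambda item: item[1], reverse=True)

reports = [ThreatReport("Site A", 0.2),
           ThreatReport("Site B", 0.9),
           ThreatReport("Site A", 0.4)]
for facility, heat in rank_facilities(reports):
    lat, lon = FACILITY_COORDS[facility]
    print(f"{facility} ({lat}, {lon}): heat {heat:.1f}")
```

Each new report from a partner agency would simply re-run the ranking: no new intelligence is created, only existing reports reorganized, which matches Cattler's framing of the tool.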
Matthew Scherer, a senior policy counsel at the Center for Democracy and Technology, said that AI can be useful for collating and organizing information that has already been gathered and authenticated. Letting it go further, though, into making consequential decisions such as flagging issues during a background check or gathering data from social media profiles, carries real hazards. AI systems, for instance, still struggle to distinguish between individuals with identical names, which can lead to misidentification.
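The name-collision hazard Scherer describes is easy to reproduce. The toy records below show how matching on a name alone attaches the wrong person's history to an applicant, and how requiring a second identifier such as date of birth removes the false positive. Real identity-resolution systems are far more elaborate; this only illustrates the failure mode.

```python
# Invented records showing why matching on name alone misidentifies people.
records = [
    {"name": "John Smith", "dob": "1970-03-14", "note": "adverse finding"},
    {"name": "John Smith", "dob": "1988-11-02", "note": "clean record"},
]

applicant = {"name": "John Smith", "dob": "1988-11-02"}

# Naive match: name only. Both records hit, including the wrong person's
# adverse finding, which a careless pipeline might attach to the applicant.
by_name = [r for r in records if r["name"] == applicant["name"]]
print(len(by_name), "matches by name alone")  # 2 matches

# Stricter match: name plus date of birth removes the false positive,
# though real systems need more signals still (aliases, typos, shared DOBs).
by_name_dob = [r for r in records
               if r["name"] == applicant["name"]
               and r["dob"] == applicant["dob"]]
print(len(by_name_dob), "matches with DOB check")  # 1 match
```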
"I would harbor concerns if the AI system was making recommendations or favoring particular applicants," Scherer said. "Then, you're moving into 'automated decision system' territory."
Cattler confirmed that even in its prioritization work, the agency has refrained from using AI to identify new risks. Even so, bias and privacy concerns can surface. Working with AI companies raises hard questions, such as which private data to feed into proprietary algorithms and what those companies may later do with it. Companies offering AI products to the public have already breached trust by inadvertently disclosing private data entrusted to them, a failure that would be catastrophic if it occurred with data in the Pentagon's custody.
AI might also introduce bias into Department of Defense systems. Algorithms mirror the limitations of their creators and the data they are trained on, and DCSA relies on oversight from the White House, Congress, and other administrative bodies to mitigate prejudices in its systems. A 2022 RAND Corporation report cautioned that AI could introduce biases into the security clearance vetting system, "potentially due to programmer biases or historical racial differences."
Cattler conceded that societal values shaping algorithms, including those within the Pentagon, evolve over time. Today, he claimed, the department is far less tolerant of extremist viewpoints than it once was but somewhat more accepting of individuals who recently overcame addiction to alcohol or drugs. "It was actually illegal in many parts of the United States to be gay, until not too long ago," he said. "That was a bias that the system may have needed to address."
- The Defense Counterintelligence and Security Agency (DCSA), led by Director David Cattler, prohibits 'black boxes' among the AI tools it uses to handle security clearance information.
- Cattler emphasizes the importance of understanding how the AI tools used across the Department of Defense (DOD), including at DCSA, work and why they are credible, so that they deliver objective and consistent results.
- The DCSA, responsible for managing over 95% of federal government employees' security clearances, sought AI tools in 2024 to analyze and organize their vast volume of private information, advocating for the use of transparent systems instead of 'black boxes'.
- Director Cattler envisions using AI to create a real-time heatmap of DCSA-secured facilities, allowing for a more tactical presentation of existing information and helping to prioritize resources for potential threats.
- DCSA, under Cattler's leadership, faces challenges in collaborating with AI companies, particularly over which private data to feed into proprietary algorithms and how those companies may subsequently use that data.