Latest From the Blog

  • Perils of Public AI from a Counterintelligence Perspective: The Madhu Gottumukkala Case

    The Perils of Public AI from a Counterintelligence Operator’s View: A Case Study on Madhu Gottumukkala’s Reckless Use of ChatGPT

    In the clandestine world of national security, the line between operational success and catastrophic failure is often measured in millimeters of discretion. The recent revelation that Madhu Gottumukkala, the acting director of the Cybersecurity and Infrastructure Security Agency (CISA), utilized a public, commercially available version of ChatGPT to process “for official use only” (FOUO) documents is not merely a procedural misstep. It is an incredibly stupid counterintelligence debacle, one “of the highest order” (Sakellariadis, 2026). This incident exposes a chasm of staggering depth between the rapid adoption of transformative technology and the foundational principles of information security that have, until now, protected the nation’s most sensitive secrets. From my perspective as a counterintelligence expert, Gottumukkala’s actions were not born of ignorance but of a dangerous arrogance, a presumption that his position insulated him from the very rules he was sworn to enforce. That presumption is a gift to adversarial foreign intelligence services (FIS) and a nightmare for those tasked with defending the integrity of our intelligence apparatus.

    The Inherent Treachery of Public Large Language Models

    To understand the gravity of Gottumukkala’s error, one must first dissect the fundamental architecture and data policies of public Large Language Models (LLMs) like OpenAI’s ChatGPT. These models are not inert tools; they are dynamic, cloud-hosted systems designed to learn and evolve from user interactions. OpenAI’s policy, while occasionally nuanced, has consistently maintained that submitted data may be retained and used to train and refine its models (OpenAI, 2025). This means that every prompt, every document fragment, and every query entered into the public interface becomes part of a vast, aggregated dataset. For a civilian user, this might raise privacy concerns. For a government official handling sensitive material, it represents an unauthorized and uncontrolled data spill of potentially catastrophic proportions.
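
    To make those mechanics concrete, here is a minimal sketch in Python, with a hypothetical `PUBLIC_LLM_ENDPOINT` and response shape standing in for any commercially hosted chat service, of what happens the moment a document is pasted into a public model: the full text leaves the user’s network in a request body that the provider is then free to log, retain, and fold into training data under its own policies.

    ```python
    import json
    import urllib.request

    # Hypothetical endpoint standing in for any public, commercially hosted LLM.
    PUBLIC_LLM_ENDPOINT = "https://api.example-llm.com/v1/chat"

    def ask_public_model(prompt: str, document_text: str) -> str:
        """Send a prompt plus a pasted document to a public LLM.

        Note what actually happens here: the FULL text of the document is
        serialized into the request body and transmitted to servers the
        user does not control. Whatever the provider's retention and
        training policy says now governs that copy, not DHS policy.
        """
        payload = json.dumps({
            "messages": [
                {"role": "user", "content": f"{prompt}\n\n{document_text}"},
            ]
        }).encode("utf-8")

        request = urllib.request.Request(
            PUBLIC_LLM_ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["reply"]  # "reply" is illustrative

    # One call like this, with a FOUO document in `document_text`, is the
    # entire "data spill." No exploit, no breach -- just an HTTPS POST.
    ```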

    The data itself is only half the problem. The metadata generated by the interaction (the user’s IP address, device fingerprint, session timing, and the very nature of the queries) provides a rich tapestry of intelligence for a determined adversary. A sophisticated FIS such as China’s Ministry of State Security (MSS) or Russia’s SVR does not need to directly breach OpenAI’s servers to benefit. It can analyze the model’s outputs over time to infer the types of questions being asked by government entities. If an official uploads a contracting document related to a critical infrastructure project, the model’s subsequent, more knowledgeable answers about that specific topic could signal a point of interest. This is a form of signals intelligence (SIGINT) by proxy, where the adversary learns not what we know, but what we are focused on, thereby revealing strategic priorities and operational vulnerabilities.
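
    To illustrate, the sketch below models the kind of service-side record that can exist for every prompt. The specific fields are assumptions about typical logging practice and the values are fictitious, but none of them require access to the prompt’s content to be damaging in aggregate.

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class QueryMetadata:
        """Illustrative fields a provider (or anyone who obtains its logs)
        could hold for every prompt, independent of the prompt's content."""
        source_ip: str       # geolocates the user; a .gov egress range is itself a flag
        user_agent: str      # contributes to device fingerprinting
        timestamp: datetime  # session timing reveals work patterns and urgency
        account_email: str   # ties queries to an identifiable person
        prompt_topic: str    # even a coarse topic label reveals focus

    record = QueryMetadata(
        source_ip="203.0.113.7",  # RFC 5737 documentation address
        user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        timestamp=datetime(2026, 1, 27, 14, 3, tzinfo=timezone.utc),
        account_email="official@example.gov",  # fictitious
        prompt_topic="critical infrastructure contracting",
    )

    # No field above contains the document itself, yet a run of such records
    # answers the adversary's real question: what is this agency working on,
    # when, and who is doing the work?
    ```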

    Furthermore, the security of these public platforms is a moving target. While no direct evidence of a major breach of OpenAI’s training data is publicly available, the possibility cannot be discounted. The U.S. intelligence community operates on the principles of need-to-know and compartmentalization precisely because no system is impenetrable. Deliberately placing sensitive data into a system with an opaque security posture, governed by a private company with its own corporate interests and potential vulnerabilities, is an abdication of the most basic tenets of information security. The 2023 breach of MOVEit Transfer, a widely used file-transfer product, which impacted hundreds of organizations, including government agencies, serves as a stark reminder that even trusted third-party systems can be compromised (CISA, 2023). Gottumukkala’s actions effectively created a similar third-party vulnerability, except his was entirely by choice.

    The Anatomy of an Insider Threat: Arrogance as a Vector

    Counterintelligence professionals spend their careers identifying and mitigating insider threats, which are often categorized as malicious, coerced, or unintentional. Gottumukkala’s case falls into a particularly insidious subcategory: the entitled or arrogant insider. This is an individual who, often due to seniority or perceived importance, believes that security protocols are for lesser mortals. His reported actions paint a textbook picture. Faced with a blocked application, he did not seek to understand the policy or use the approved alternative; he reportedly demanded an exemption, forcing his subordinates to override security measures designed to protect the agency (Sakellariadis, 2026). He assumed that the rules simply did not apply to him.

    This behavior is more than a simple lapse in judgment. It is a systemic cancer. When a leader demonstrates a flagrant disregard for established rules, it erodes the entire security culture of an organization. Junior personnel, witnessing a senior official flout policy without immediate repercussion, receive a clear message: the rules are flexible, especially for the powerful. This creates an environment ripe for exploitation, where other employees may feel justified in ignoring whatever rules they find inconvenient, dramatically expanding the agency’s attack surface. Adversarial FIS are adept at exploiting this kind of cultural rot. They understand that a demoralized workforce with a cynical view of leadership is more susceptible to coercion, recruitment, or simple negligence.

    Gottumukkala’s reported professional history amplifies these concerns. His documented failure to pass a counterintelligence-scope polygraph examination is a monumental red flag that should have precluded any role involving access to sensitive operational or intelligence information (Sakellariadis, 2026). A polygraph is not a perfect lie detector, but in the counterintelligence context it is a critical counterespionage tool for assessing an individual’s trustworthiness, susceptibility to coercion, and potential for undeclared foreign contacts. A failure in this screening is a definitive signal of elevated risk. Making matters worse, he sought to remove CISA’s Chief Information Officer (CIO), the very official responsible for maintaining the agency’s cybersecurity posture (Sakellariadis, 2026). This pattern suggests a hostility toward institutional oversight, and toward basic INFOSEC protocols, that is antithetical to the role of a cybersecurity leader.

    The Strategic Cost of a Single Data Point

    The documents in question were reportedly FOUO, not classified. This distinction, while bureaucratically significant, is strategically irrelevant to a capable adversary. FOUO documents often contain the building blocks of classified intelligence. They can reveal details about sources and methods, sensitive but unclassified contract information about critical infrastructure, internal deliberations on policy, and the identities and roles of key personnel involved in national security efforts.

    Consider a hypothetical but plausible scenario. A FOUO document details a DHS contract with a private firm to harden the cybersecurity of a specific sector of the electrical grid. Uploaded to a public AI, this data point is now part of a larger model. An adversary, through persistent querying of the public AI, could potentially coax the model into revealing more about this sector’s vulnerabilities than it otherwise would. Even if the model does not explicitly reveal the document, the adversary’s knowledge of the type of work being done allows them to focus their espionage, cyberattacks, or influence operations on that specific firm or sector. The FOUO document becomes the breadcrumb that leads the adversary to the feast. The Office of the Director of National Intelligence (ODNI) has repeatedly warned in its annual threat assessments that adversaries prioritize unclassified data collection to build a mosaic of intelligence (ODNI, 2025). Each piece is harmless on its own, but together they form a clear and actionable picture.
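
    A toy illustration of that mosaic effect, using fabricated records: each fragment below is unclassified and harmless in isolation, and the only “analysis” the adversary needs is correlation.

    ```python
    # Fabricated, individually innocuous unclassified fragments.
    fouo_fragments = [
        {"source": "contract notice", "fact": "Firm X hired to harden grid sector Y"},
        {"source": "staff directory", "fact": "J. Doe leads Firm X's ICS practice"},
        {"source": "conference bio",  "fact": "J. Doe to present on sector Y SCADA gaps"},
    ]

    # The mosaic step is nothing more than correlation.
    picture = "; ".join(fragment["fact"] for fragment in fouo_fragments)
    print(f"Actionable picture: {picture}")
    # The output points a targeting cell at one firm, one person, and one
    # sector, assembled entirely from material that was never classified.
    ```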

    The existence of secure, government-controlled alternatives makes this incident all the more infuriating. The Department of Homeland Security has developed and deployed its own AI-powered tool, DHSChat, specifically designed to operate within a secured federal network, ensuring that sensitive data does not leave the government’s digital ecosystem (DHS, 2024). Gottumukkala’s insistence on using the public, less secure option over the purpose-built, secure one is the action of someone who either lacks a fundamental understanding of the threat landscape or simply doesn’t give a shit. In either case, the result is the same: an unnecessary, unforced error and a self-inflicted wound on national security.

    The Imperative of Accountability and a Zero-Tolerance Mandate

    The response to this incident should be unequivocal and severe. The Department of Homeland Security’s own Management Directive 11042.1 mandates that any unauthorized disclosure of FOUO information be investigated as a security incident, potentially resulting in “reprimand, suspension, removal, or other disciplinary action” (DHS, 2023). Anything less than a full counterintelligence investigation, coupled with Gottumukkala’s immediate removal from any position of trust, signals a tacit acceptance of reckless behavior.

    This case should catalyze a broader policy shift across the entire Intelligence Community, an institution already visibly altered by current leadership. A zero-tolerance policy for the use of public AI tools with any government data, let alone sensitive information, must be implemented and enforced without exception. This requires more than a memo. It requires robust technical controls, including network-level blocks to prevent such data exfiltration and continuous monitoring for policy violations, one form of which is sketched below. It also demands a cultural reset led from the very top, where security is not seen as a bureaucratic hurdle but as an integral component of every mission.
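
    As one concrete example of the monitoring half of that mandate, here is a minimal sketch. It assumes an egress proxy log in a simple `timestamp user CONNECT host:port` format and uses an illustrative domain blocklist; a real deployment would enforce the denial at the proxy itself and feed alerts into the agency’s SIEM.

    ```python
    import re
    from pathlib import Path

    # Illustrative blocklist; a real deployment would pull a maintained
    # list of public AI endpoints from the agency's CIO shop.
    BLOCKED_AI_DOMAINS = {
        "chat.openai.com",
        "chatgpt.com",
        "gemini.google.com",
        "claude.ai",
    }

    # Assumed log format: "<timestamp> <user> CONNECT <host>:443"
    LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<user>\S+) CONNECT (?P<host>[\w.-]+):443")

    def flag_violations(proxy_log: Path) -> list[str]:
        """Scan an egress proxy log and flag connections to public AI services."""
        violations = []
        for line in proxy_log.read_text().splitlines():
            match = LOG_LINE.match(line)
            if match and match.group("host") in BLOCKED_AI_DOMAINS:
                violations.append(
                    f"{match.group('ts')} {match.group('user')} -> {match.group('host')}"
                )
        return violations
    ```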

    The arrogance displayed by Madhu Gottumukkala is a counterintelligence nightmare; the hubris is breathtaking. This case represents a willful blindness to the reality of the threats we face, or worse, zero concern whatsoever for the protection of national security assets. Our adversaries are relentless, sophisticated, and constantly probing for weaknesses. We cannot tolerate bureaucrats who view security protocols as optional. The integration of AI into our national security architecture holds immense promise, but that promise can only be realized if it is guided by the enduring principles of vigilance, discipline, and respect for the sanctity of sensitive information. To do otherwise is not just foolish. It is a betrayal of the public trust and a dereliction of the duty to protect the nation.

    ~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

    Bibliography

    • Cybersecurity and Infrastructure Security Agency (CISA). (2023, June 1). AA23-165A: MOVEit Transfer Vulnerability Exploit. Retrieved from CISA.gov
    • Department of Homeland Security. (2023). Management Directive 11042.1: Safeguarding Sensitive But Unclassified (For Official Use Only) Information. Retrieved from DHS.gov
    • Department of Homeland Security. (2024). DHS’s Responsible Use of Generative AI Tools. Retrieved from DHS.gov
    • National Counterintelligence and Security Center. (2025). Annual Threat Assessment: Adversary Exploitation of Leaked Data. Washington, D.C.: Office of the Director of National Intelligence.
    • OpenAI. (2025). ChatGPT Data Usage Policy. Retrieved from OpenAI.com
    • Sakellariadis, J. (2026, January 27). Trump’s Acting Cyber Chief Uploaded Sensitive Files into a Public Version of ChatGPT. POLITICO. Retrieved from Politico.com
