Perils of Public AI from a Counterintelligence Perspective: The Madhu Gottumukkala Case

a.i., artificial intelligence, spy, spies, intelligence, counterintelligence, espionage, counterespionage, C. Constantin Poindexter

The Perils of Public AI from a Counterintelligence Operator’s View: A Case Study on Madhu Gottumukkala’s Reckless Use of ChatGPT

In the clandestine world of national security, the line between operational success and catastrophic failure is often measured in millimeters of discretion. The recent revelation that Madhu Gottumukkala, the acting director of the Cybersecurity and Infrastructure Security Agency (CISA), used a public, commercially available version of ChatGPT to process “for official use only” (FOUO) documents is not merely a procedural misstep. It is an incredibly stupid counterintelligence debacle, one “of the highest order” (Sakellariadis, 2026). This incident exposes a chasm of staggering depth between the rapid adoption of transformative technology and the foundational principles of information security that have, until now, protected the nation’s most sensitive secrets. From my perspective as a counterintelligence expert, Gottumukkala’s actions were not born of ignorance but of a dangerous arrogance, a presumption that his position insulated him from the very rules he was sworn to enforce. This presumption is a gift to adversarial foreign intelligence services (FIS) and a nightmare for those tasked with defending the integrity of our intelligence apparatus.

The Inherent Treachery of Public Large Language Models

To understand the gravity of Gottumukkala’s error, one must first dissect the fundamental architecture and data policies of public Large Language Models (LLMs) like OpenAI’s ChatGPT. These models are not inert tools; they are dynamic, cloud-hosted systems designed to learn and evolve from user interactions. OpenAI’s policy, while occasionally nuanced, has consistently maintained that submitted data may be retained and used to train and refine its models (OpenAI, 2025). This means that every prompt, every document fragment, and every query entered into the public interface becomes part of a vast, aggregated dataset. For a civilian user, this might raise privacy concerns. For a government official handling sensitive material, it represents an unauthorized and uncontrolled data spill of potentially catastrophic proportions.

The data itself is only half the problem. The metadata generated by the interaction (the user’s IP address, device fingerprint, session timing, and the very nature of the queries) provides a rich tapestry of intelligence for a determined adversary. A sophisticated FIS such as China’s Ministry of State Security (MSS) or Russia’s SVR does not need to directly breach OpenAI’s servers to benefit. They can analyze the model’s outputs over time to infer the types of questions being asked by government entities. If an official uploads a contracting document related to a critical infrastructure project, the model’s subsequent, more knowledgeable answers about that specific topic could signal a point of interest. This is a form of signals intelligence (SIGINT) by proxy, where the adversary learns not what we know, but what we are focused on, thereby revealing strategic priorities and operational vulnerabilities.

Furthermore, the security of these public platforms is a moving target. While no direct evidence of a major breach of OpenAI’s training data is publicly available, the possibility cannot be discounted. The U.S. intelligence community operates on the principle of need-to-know and compartmentalization precisely because no system is impenetrable. Deliberately placing sensitive data into a system with an opaque security posture, governed by a private company with its own corporate interests and potential vulnerabilities, is an abdication of the most basic tenets of information security. The 2023 breach of MOVEit Transfer, a widely used file-transfer application, which impacted hundreds of organizations, including government agencies, serves as a stark reminder that even trusted third-party systems can be compromised (CISA, 2023). Gottumukkala’s actions effectively created a similar, albeit digital, vulnerability by choice.

The Anatomy of an Insider Threat: Arrogance as a Vector

Counterintelligence professionals spend their careers identifying and mitigating insider threats, which are often categorized as malicious, coerced, or unintentional. Gottumukkala’s case falls into a particularly insidious subcategory: the entitled or arrogant insider. This is an individual who, often due to seniority or perceived importance, believes that security protocols are for lesser mortals. His reported actions paint a textbook picture. Faced with a blocked application, he did not seek to understand the policy or use the approved alternative; he reportedly demanded an exemption, forcing his subordinates to override security measures designed to protect the agency (Sakellariadis, 2026). He simply assumed that the rules did not apply to him.

This behavior is more than a simple lapse in judgment. It is a systemic cancer. When a leader demonstrates a flagrant disregard for established rules, it erodes the entire security culture of an organization. Junior personnel, witnessing a senior official flout policy without immediate repercussion, receive a clear message. The rules are flexible, especially for the powerful. This creates an environment ripe for exploitation, where other employees may feel justified in likewise ignoring rules that they don’t find convenient, exponentially increasing the agency’s attack surface. Adversarial FIS are adept at exploiting this kind of cultural rot. They understand that a demoralized workforce with a cynical view of leadership is more susceptible to coercion, recruitment, or simple negligence.

Gottumukkala’s reported professional history amplifies these concerns. His documented failure to pass a counterintelligence-scope polygraph examination is a monumental red flag that should have precluded any role involving access to sensitive operational or intelligence information (Sakellariadis, 2026). A polygraph is not a perfect lie detector, but in the counterintelligence context, it is a critical counterespionage tool for assessing an individual’s trustworthiness, susceptibility to coercion, and potential for undeclared foreign contacts. A failure in this screening is a definitive signal of elevated risk. Making matters worse, he sought to remove CISA’s Chief Information Officer (CIO), the very official responsible for maintaining the agency’s cybersecurity posture (Sakellariadis, 2026). This pattern suggests a hostility toward institutional oversight, and toward basic INFOSEC protocols, that is antithetical to the role of a cybersecurity leader.

The Strategic Cost of a Single Data Point

The documents in question were reportedly FOUO, not classified. This distinction, while bureaucratically significant, is strategically irrelevant to a capable adversary. FOUO documents often contain the building blocks of classified intelligence. They can reveal details about sources and methods, sensitive but unclassified contract information about critical infrastructure, internal deliberations on policy, and/or the identities and roles of key personnel involved in national security efforts.

Consider a hypothetical but plausible scenario. A FOUO document details a DHS contract with a private firm to harden the cybersecurity of a specific sector of the electrical grid. Uploaded to a public AI, this data point is now part of a larger model. An adversary, through persistent querying of the public AI, could potentially coax the model into revealing more about this sector’s vulnerabilities than it otherwise would. Even if the model does not explicitly reveal the document, the adversary’s knowledge of the type of work being done allows them to focus their espionage, cyberattacks, or influence operations on that specific firm or sector. The FOUO document becomes the breadcrumb that leads the adversary to the feast. The Office of the Director of National Intelligence (ODNI) has repeatedly warned in its annual threat assessments that adversaries prioritize unclassified data collection to build a mosaic of intelligence (ODNI, 2025). Each piece is harmless on its own, but together they form a clear and actionable picture.

The existence of secure, government-controlled alternatives makes this incident all the more infuriating. The Department of Homeland Security has developed and deployed its own AI-powered tool, DHSChat, specifically designed to operate within a secured federal network, ensuring that sensitive data does not leave the government’s digital ecosystem (DHS, 2024). Gottumukkala’s insistence on using the public, less secure option over the purpose-built, secure one is the action of someone who either lacks a fundamental understanding of the threat landscape or simply doesn’t give a shit. In either case, the result is the same. It is an unforced error and a self-inflicted wound on national security.

The Imperative of Accountability and a Zero-Tolerance Mandate

The response to this incident should be unequivocal and severe. The Department of Homeland Security’s own Management Directive 11042.1 mandates that any unauthorized disclosure of FOUO information be investigated as a security incident, potentially resulting in “reprimand, suspension, removal, or other disciplinary action” (DHS, 2023). Anything less than a full counterintelligence investigation, coupled with Gottumukkala’s immediate removal from any position of trust, signals a tacit acceptance of reckless behavior.

This case should catalyze a broader policy shift across the entire Intelligence Community, which has been visibly altered by current leadership. A zero-tolerance policy for the use of public AI tools with any government data, let alone sensitive information, must be implemented and enforced without exception. This requires more than a memo. It requires robust technical controls, including network-level blocks to prevent such data exfiltration and continuous monitoring for policy violations. It also demands a cultural reset led from the very top, where security is not seen as a bureaucratic hurdle but as an integral component of every mission.
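To make the network-level control point concrete, here is a minimal sketch, in Python, of the kind of egress policy check such a block implies. It is an illustration under stated assumptions: the domain names, the approved internal placeholder, and the function are hypothetical, not an actual DHS or CISA configuration, and real enforcement would live in enterprise proxies, DNS filtering, and data loss prevention tooling rather than a standalone script.

```python
# Minimal, hypothetical sketch of a network-level egress policy check.
# Domain names and the approved-tool entry are illustrative assumptions only.

BLOCKED_PUBLIC_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "api.openai.com",
}

APPROVED_INTERNAL_TOOLS = {
    "dhschat.example.gov",  # hypothetical placeholder for a government-hosted AI service
}


def egress_allowed(destination_host: str) -> bool:
    """Allow traffic unless it targets a blocked public AI endpoint;
    explicitly approved internal services are always allowed."""
    host = destination_host.lower().rstrip(".")
    if host in APPROVED_INTERNAL_TOOLS:
        return True
    # Deny exact matches and subdomains of blocked endpoints.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_PUBLIC_AI_DOMAINS)


if __name__ == "__main__":
    for host in ("chat.openai.com", "dhschat.example.gov", "intranet.example.gov"):
        print(host, "->", "ALLOW" if egress_allowed(host) else "DENY")
```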

The arrogance displayed by Madhu Gottumukkala is a counterintelligence nightmare, and the hubris is breathtaking. This case represents a willful blindness to the reality of the threats we face, or worse, zero concern whatsoever for the protection of national security assets. Our adversaries are relentless, sophisticated, and constantly probing for weaknesses. We cannot tolerate bureaucrats who view security protocols as optional. The integration of AI into our national security architecture holds immense promise, but that promise can only be realized if it is guided by the enduring principles of vigilance, discipline, and respect for the sanctity of sensitive information. To do otherwise is not just foolish. It is a betrayal of the public trust and a dereliction of the duty to protect the nation.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Department of Homeland Security. (2023). Management Directive 11042.1: Safeguarding Sensitive But Unclassified (For Official Use Only) Information. Retrieved from DHS.gov
  • Department of Homeland Security. (2024). DHS’s Responsible Use of Generative AI Tools. Retrieved from DHS.gov
  • National Counterintelligence and Security Center. (2025). Annual Threat Assessment: Adversary Exploitation of Leaked Data. Washington, D.C.: Office of the Director of National Intelligence.
  • OpenAI. (2025). ChatGPT Data Usage Policy. Retrieved from OpenAI.com
  • Sakellariadis, J. (2026, January 27). Trump’s Acting Cyber Chief Uploaded Sensitive Files into a Public Version of ChatGPT. POLITICO. Retrieved from Politico.com
  • Cybersecurity and Infrastructure Security Agency (CISA). (2023, June 1). AA23-165A: MOVEit Transfer Vulnerability Exploit.

A Pier Walk, an Encrypted App, and a Trail of Receipts: The Wei Espionage Case, Counterintelligence and PRC Tradecraft

china, PRC, PLA, espionage, spy, spies, counterespionage, counterintelligence, intelligence, C. Constantin Poindexter

The two-hundred-month federal sentence imposed on U.S. Navy sailor Jinchao Wei, also known as Patrick Wei, is not merely a cautionary tale about a single insider’s betrayal. It is a contemporary, well-documented case study in the People’s Republic of China’s persistent espionage campaign against U.S. defense entities, executed through an operational pattern that has become all too familiar to counterintelligence practitioners: low-friction spotting and assessment via online platforms, cultivation under plausible non-official cover, incremental tasking that begins with seemingly innocuous collection, and compensation methods that leave a financial signature even when communications are migrated to encrypted channels (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026a). The Wei matter is also a reminder that insider threats rarely begin with the theft of a crown jewel. They begin with ego, attention, a sense of being chosen, and the seductive illusion that the handler is impressed and that the target is smarter than the system.

Public reporting and Department of Justice releases describe Wei as having been arrested in August 2023 as he arrived for duty at Naval Base San Diego, where he was assigned to the amphibious assault ship USS Essex (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026b). The arrest timing and location are operationally significant. Counterintelligence cases often culminate when investigators can control the environment, secure devices and storage, and prevent further loss of national defense information while preserving evidentiary integrity. The government’s narrative, as presented publicly, reflects a mature, documentable case anchored in communications and payment records rather than exotic or highly sensitive sources. The Department of Justice has been explicit that not every investigative step can be disclosed, and I do not intend to speculate about those steps here, but it has been equally clear that the evidentiary core included intercepts of communication between Wei and his PRC handler, and documentation of how Wei was rewarded for his betrayal (U.S. Department of Justice, 2026a).

The recruitment vector in this case aligns with PRC modus operandi in insider targeting. Wei was approached through social media by an individual presenting as a “naval enthusiast” who claimed a connection to China’s state-owned shipbuilding sector, a cover story designed to appear adjacent to legitimate maritime interest while still close enough to naval affairs to justify pointed questions (U.S. Department of Justice, 2026a; Associated Press, 2026). That presentation is instructive. It reduces the psychological barrier to engagement, provides a rationale for curiosity-driven dialogue, and permits gradual escalation from general discussion to tasking. A handler does not need immediate access to classified networks to create damage. He needs a human source who can provide operationally relevant details, and then he needs to keep the source talking long enough to normalize betrayal.

Once engaged, Wei’s operational security behavior demonstrates both awareness and complicity. He told a Navy friend that the activity looked “quite obviously” like espionage and, after that realization, he shifted communications to a different encrypted messaging application that he believed was more secure (U.S. Department of Justice, 2026a; USNI News, 2026). This is an important marker for investigators and security managers. When a cleared person acknowledges illicit intent yet continues, the motivation is not confusion. It is volition. The move to a “more secure” platform is also characteristic of PRC handling in HUMINT collection. Chinese FIS does not need to provide sophisticated technical tradecraft if the target will self-generate it. Public charging language indicates agreed steps to conceal the relationship, including deletion of conversation records and use of encrypted methods, which reflects basic but purposeful counter-surveillance and denial behavior (U.S. Department of Justice, 2023).

Tasking, as described in public releases, combined opportunistic collection with specific collection requirements. Wei was asked to “walk the pier” and report which ships were present, provide ship locations, and transmit photos and videos along with ship-related details (U.S. Department of Justice, 2026a). From a counterintelligence perspective, these are not trivial asks. Pier-side observations can support pattern of life analysis, readiness inference, and operational planning, particularly when fused with open source material and other clandestine reporting. The case officer’s methodology is “incrementalism”. A handler begins with items that feel observational and deniable, then pulls the source toward more sensitive materials by normalizing the exchange relationship and introducing compensation.

The most damaging element is the alleged transfer of classified technical and operational documents. DOJ accounts state that over an approximately 18-month relationship, Wei provided approximately sixty manuals and other sensitive materials, including at least thirty manuals transmitted in one tranche in June 2022, some of which clearly bore export control warnings. The materials were related to ship systems such as power, steering, weapons control, elevators, and damage and casualty controls (U.S. Department of Justice, 2026a; U.S. Department of Justice, 2026b; Associated Press, 2026). In counterintelligence risk terms, technical manuals provide adversaries with a low-cost blueprint for exploitation. They can inform electronic attack planning, maintenance and sustainment targeting, and vulnerability discovery. They also enable synthetic training and doctrine development for adversary operators. A single manual can be operationally relevant for years because systems and procedures evolve incrementally rather than being replaced wholesale.

Compensation details illuminate tradecraft and investigative leverage. Wei received more than $12,000 over the course of the relationship, including an alleged $5,000 payment connected to the June 2022 manual transfer. The DOJ has described the use of online payment methods (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026a). This is common in modern espionage involving HUMINT assets who are not professional intelligence officers. Financial transfers create documentary evidence, establish quid pro quo, and provide prosecutors with a corroborating narrative that is legible to a jury. For counterintelligence professionals, this observation is instructive. When communications shift to encrypted platforms, payment flows often remain discoverable through records, device artifacts, and third-party reporting. The operational discipline required to truly eliminate financial signatures is rarely present in an insider unless he or she is sophisticated in both COMSEC and financial tradecraft.

Public disclosures describe the case’s investigative architecture in broad but meaningful terms which are instructive even in the absence of the classified story. The FBI and Naval Criminal Investigative Service conducted the investigation. The DOJ characterized the matter as a “first of its kind” espionage investigation in the district, language that signals a substantial investigative effort and a prosecutorial commitment to proving the national security dimension in open court (U.S. Department of Justice, 2026a). The described evidence set emphasizes calls and electronic and audio messages with the PRC handler, payment records and receipts, and a post-arrest interrogation in which Wei admitted to providing the materials and described his conduct as espionage (U.S. Department of Justice, 2026a). Those elements are not glamorous, but they are decisive. They reflect the fundamentals of counterintelligence case building: document the relationship, document tasking and exchanges, document intent and benefit.

This IS PRC modus operandi! The Wei case fits a familiar pattern. The approach was enabled by digital access to targets, the cover identity was plausibly adjacent to the target’s professional interests, and the relationship was escalated through a play on Wei’s ego: a mix of attention, manipulation, and money deployed to compromise him. Tradecraft relied on human psychology, not advanced technical means. The Chinese FIS officer did not need to defeat a classified network. He convinced an insider to carry information out through routine channels and to do so voluntarily. This is a good example of why insider threat programs cannot focus only on clearance adjudication and periodic training. They must incorporate behavioral indicators, targeted education about online elicitation, and strong reporting pathways that reward early disclosure rather than stigmatize it (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026a).

There is also a supervisory and cultural lesson embedded here. Wei voiced suspicion to another sailor. That disclosure was a moment when the damage could have been immediately contained. Peers often see the first signs of peril, yet they hesitate, either because they do not want to “ruin someone’s career” or because they assume someone else will act. Counterintelligence operators should treat this as a design requirement. Reporting must be made psychologically easy, procedurally simple, and institutionally supported. A peer report should trigger a calibrated and coordinated response, not an immediate public spectacle. The goal is to get ahead of compromise, not to create an environment where personnel conceal concerns to avoid attention.

The Wei case is a well-evidenced illustration of PRC espionage tradecraft against the United States. Chinese FIS spots and contacts potential insiders at scale through social platforms, cultivates via plausible identity, normalizes secret communications, introduces tasking that begins with the innocuous then escalates to classified materials, and pays through channels that are convenient to the target while still supporting handler control and a firm compromise of the asset (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026a; USNI News, 2026). In my professional judgment, this is another textbook example of ego as the primary driver beneath the surface rationalizations. Even when loneliness, financial temptation, or grievance are present, the consistent psychological engine in treasonous espionage is the ego’s appetite to feel important, chosen, liked, befriended and exceptional. Wei’s conduct underscores that dynamic. He recognized the espionage for what it was, believed he could manage his exposure with encrypted applications, and continued down the road of betrayal. That is not naïveté. It is a belief that rules apply to others, that risk can be controlled by personal cleverness, and that the handler’s attention is a validation of one’s importance in the world. In very few espionage cases is money truly the hook. The I.C. likes to think that cases like Ames were money-motivated treason; that was only partially true. Likewise, the I.C. report on Ana Montes lays the blame at the feet of “ideology.” That really wasn’t it. Ego is the line that keeps the source from walking away when conscience and common sense offer an exit. It is almost ALWAYS ego.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Associated Press. (2026, January 12). Former Navy sailor sentenced to 16 years for selling information about ships to Chinese intelligence.
  • U.S. Department of Justice. (2023, August 3). Two U.S. Navy servicemembers arrested for transmitting military information to the People’s Republic of China.
  • U.S. Department of Justice. (2026a, January 13). Former U.S. Navy sailor sentenced to 200 months for spying for China.
  • U.S. Department of Justice. (2026b, January 14). U.S. Navy sailor sentenced to more than 16 years for spying for China.
  • USNI News. (2026, January 13). Sailor to serve 16 year prison sentence for selling secrets to China.

Legal Remedies Open to Minnesota: ICE Operations and Redress for Civilian Deaths

justice, alex pretti, renee good, ICE, C. Constantin Poindexter

I am a patriot. I have always felt it a privilege to be American and very proud of what we represent to the world. Times have changed, and something strikingly ugly has happened to us. The Renee Good, Keith Porter and Alex Pretti homicides are the last straw. If our President will not step in to stop this, the state(s) must. Minnesota’s ability to halt federal immigration enforcement is constrained by federal supremacy, but it is not nonexistent. A state cannot nullify or physically obstruct federal law enforcement acting within lawful federal authority, because immigration enforcement is a core federal power and the Supremacy Clause preempts contrary state action (U.S. Const., art. VI; Arizona v. United States, 2012). The practical and legally durable approach is to distinguish between lawful federal immigration enforcement and allegedly unlawful operational conduct, including unconstitutional crowd control, unreasonable seizures, excessive force, and agency action that exceeds statutory or constitutional limits. Within that framing, Minnesota and its political subdivisions can pursue aggressive, legally cognizable remedies that combine federal court equitable relief, state sovereign measures that deny logistical support and eliminate state entanglement, evidence preservation and independent investigations for lethal force incidents, and damages pathways structured around the Federal Tort Claims Act and carefully pleaded individual capacity claims.

A decisive early step is to build the record and procedural posture for emergency relief. Minnesota’s Attorney General and major cities have already placed this template into the federal docket by seeking declaratory and injunctive relief against what they characterize as an unprecedented surge operation, and by pleading constitutional and Administrative Procedure Act theories (State of Minnesota v. Noem, Complaint, 2026; Minnesota Attorney General’s Office, 2026a). Contemporary reporting describes civilian deaths during the surge, including Alex Pretti on January 24, 2026, and notes that a federal judge ordered preservation of evidence connected to that incident (CBS Minnesota, 2026; The Guardian, 2026). Reporting also documents a prior death earlier in the month and recurring force allegations tied to the surge environment (The Marshall Project, 2026). These allegations and procedural developments are central to remedy selection, because courts are materially more willing to restrain specific unconstitutional tactics than to enjoin immigration enforcement as a category.

A primary remedy is immediate federal court equitable relief. Minnesota’s fastest lawful braking mechanism is a temporary restraining order and preliminary injunction focused on unlawful conduct rather than federal authority in the abstract (28 U.S.C. §§ 1331, 2201–2202). Minnesota can seek a declaratory judgment that discrete federal practices violate the Constitution or exceed statutory authority, coupled with injunctive relief that prohibits specified behaviors, mandates training and supervision changes, and compels evidence retention and production schedules (State of Minnesota v. Noem, Complaint, 2026). Evidence control is not merely ancillary. In lethal force disputes, preservation orders can be the most attainable short-term relief and can materially influence later liability outcomes. Reporting indicates a preservation order in the Pretti matter, and allegations of obstruction in gaining access to the scene, which underscores why Minnesota should continue to press targeted preservation and access relief for body-worn camera footage, dispatch logs, chain of custody documentation, and third-party video sources (CBS Minnesota, 2026).

On the merits, Minnesota can plead multiple constitutional theories that are cognizable in equity even when actions for damages against federal actors are limited. First Amendment claims can be framed as retaliation and viewpoint discrimination, and as a chilling regime when federal agents are alleged to use force against peaceful expressive activity (Hartman v. Moore, 2006; Nieves v. Bartlett, 2019). Fourth Amendment claims can be framed as unreasonable seizures and excessive force. Those claims support injunctive relief to change practices governing stops, detentions, and use of force, particularly where plaintiffs can show a pattern, policy, or command structure rather than a one-off incident (Graham v. Connor, 1989; Tennessee v. Garner, 1985). Fifth Amendment due process framing can supplement where conduct is alleged to be arbitrary or conscience-shocking in a civil enforcement setting (County of Sacramento v. Lewis, 1998). In each lane, the remedy posture should be calibrated to what courts will enjoin. The goal is not a sweeping ban on federal presence, but enforceable constraints and oversight mechanisms that prevent unconstitutional practices and preserve evidence.

Statutorily, the Administrative Procedure Act remains a central lever when the dispute can be characterized as unlawful agency action, ultra vires deployment, or a final agency policy that is arbitrary and capricious, contrary to constitutional right, or adopted without required procedure (5 U.S.C. §§ 702, 706). Even where the government frames the operation as discretionary, plaintiffs can target categorical rules and structured practices that resemble policy rather than case-by-case discretion, including deployment criteria, operational directives, and deviations from articulated enforcement protocols (State of Minnesota v. Noem, Complaint, 2026; Minnesota Attorney General’s Office, 2026a). The APA posture also aligns with remedy realism. Courts often resist ordering how to enforce immigration law, but will restrain agency actions that lack lawful procedure, exceed statutory authority, or violate constitutional limits.

Separately, Minnesota’s structural state power is strongest in disentanglement. The anti-commandeering doctrine bars the federal government from compelling states or localities to administer or enforce federal regulatory programs (Printz v. United States, 1997; Murphy v. NCAA, 2018). This doctrine does not permit obstruction, but it does permit Minnesota to prohibit state and local employees from participating in certain federal immigration activities, such as honoring civil detainers absent judicial warrants, providing nonpublic data access beyond what federal law requires, and using state resources for federal tasking. Operationally, Minnesota can reinforce disentanglement through statewide policies governing state facilities and state-controlled information systems. The objective is to ensure that federal operations must stand on federal resources and federal legal authority alone, while Minnesota maintains compliance with any narrow federal preemption requirements and avoids discrimination against federal officers as such.

For redress of deaths and serious injuries, Minnesota’s investigative and prosecutorial tools matter, but they are bounded by Supremacy Clause immunity principles. Homicide and assault are state crimes, and Minnesota agencies can investigate shootings within Minnesota’s territory. However, federal officers may assert a Supremacy Clause-related immunity against state prosecution for actions taken within the scope of federal duties and authorized by federal law (In re Neagle, 1890). That doctrine is not absolute. If facts indicate actions outside lawful authority, or actions that no reasonable officer could regard as necessary and proper to execute federal duties, state prosecution becomes more plausible. Even where prosecution is foreclosed or removed, robust state investigation is still consequential. It establishes an independent factual record, constrains narratives, supports federal civil remedies, and can trigger institutional accountability mechanisms. In this context, contemporaneous reporting about contested accounts and video evidence underscores the importance of independent scene processing where possible, preservation of third-party footage, coordinated witness interviewing, and transparent public reporting (CBS Minnesota, 2026; The Guardian, 2026).

For damages, Minnesota must separate who can sue and under what theory. Wrongful death damages generally belong to estates and statutory beneficiaries under state law, but the state can support and, in some contexts, pursue recovery for sovereign and proprietary harms. The principal damages route for torts committed by federal employees is the Federal Tort Claims Act, which waives sovereign immunity for certain torts and applies the law of the place where the act occurred (28 U.S.C. §§ 1346(b), 2671–2680). The FTCA law enforcement proviso permits claims for specified intentional torts, including assault and battery, when committed by investigative or law enforcement officers (28 U.S.C. § 2680(h)). Lethal force cases frequently litigate as operational conduct rather than protected policy discretion, though the United States regularly pleads discretionary function defenses and other exceptions (28 U.S.C. § 2680(a)). Plaintiffs must also satisfy the FTCA’s administrative presentment, exhaustion, and limitations requirements, which makes early evidence preservation and record building essential.

If plaintiffs sue individual officers under state tort theories, the Westfall Act frequently triggers substitution of the United States as the defendant for acts within scope, routing the matter back into FTCA exclusivity (28 U.S.C. § 2679). That substitution fight can be dispositive, and it makes careful pleading and factual support crucial, including any evidence that conduct was outside the scope of employment or otherwise not in furtherance of federal duties. Constitutional damages claims against federal officers under Bivens remain theoretically available for some Fourth Amendment paradigms, but the Supreme Court has sharply limited extensions into new contexts, particularly those touching immigration and national security adjacent environments (Bivens v. Six Unknown Named Agents, 1971; Hernández v. Mesa, 2020; Egbert v. Boule, 2022). As a result, victims’ counsel should treat Bivens as a high-risk vehicle and pair any constitutional damages strategy with FTCA claims and equitable relief that does not depend on implying a new damages remedy.

The phrase “stop operations in their tracks” should be operationalized into legally enforceable outcomes: a court-ordered prohibition on unconstitutional suppression of protest, restrictions on unreasonable stops and seizures, strict evidence preservation and production directives for lethal force incidents, and APA-compliant justification and process for any mass surge policy. Minnesota’s existing litigation posture already seeks declaratory and injunctive relief and frames the surge as extraordinary, which positions the state to pursue precisely this kind of targeted judicial control rather than an unattainable blanket prohibition (State of Minnesota v. Noem, Complaint, 2026; Minnesota Attorney General’s Office, 2026a). When paired with disciplined state non-cooperation grounded in anti-commandeering doctrine and meticulous state-level investigation of lethal force incidents, Minnesota can constrain the operational environment, preserve accountability evidence, and position victims’ families for meaningful damages recovery.

In short, the strongest legal tools are not physical resistance or nullification. They are rapid federal court equitable relief, disciplined state disentanglement, evidence-centered litigation, and damages architectures that convert unlawful force into enforceable liability under the FTCA and related doctrines, while recognizing the Supreme Court’s narrowing of implied constitutional damages remedies.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Arizona v. United States, 567 U.S. 387 (2012).
  • Bivens v. Six Unknown Named Agents of Federal Bureau of Narcotics, 403 U.S. 388 (1971).
  • CBS Minnesota. (2026, January 25). Judge grants restraining order against DHS after Border Patrol kills Alex Pretti in Minneapolis.
  • County of Sacramento v. Lewis, 523 U.S. 833 (1998).
  • Egbert v. Boule, 596 U.S. 482 (2022).
  • Graham v. Connor, 490 U.S. 386 (1989).
  • Hartman v. Moore, 547 U.S. 250 (2006).
  • Hernández v. Mesa, 589 U.S. 93 (2020).
  • In re Neagle, 135 U.S. 1 (1890).
  • Minnesota Attorney General’s Office. (2026a, January 12). Attorney General Ellison and cities of Minneapolis and Saint Paul sue to halt ICE surge into Minnesota.
  • Murphy v. NCAA, 584 U.S. 453 (2018).
  • Nieves v. Bartlett, 587 U.S. 391 (2019).
  • Printz v. United States, 521 U.S. 898 (1997).
  • State of Minnesota v. Noem, Complaint for Declaratory and Injunctive Relief, U.S. District Court for the District of Minnesota, Case No. 0:26-cv-00190 (D. Minn. filed 2026, January 12).
  • Tennessee v. Garner, 471 U.S. 1 (1985).
  • The Guardian. (2026, January 24). Report on the killing of a U.S. citizen in Minneapolis during federal agent activity.
  • The Marshall Project. (2026, January 7). Report on use of force allegations connected to immigration enforcement activity in Minneapolis.

When Counterintelligence Did Not “Catch” Jonathan Soong

espionage, counterespionage, intelligence, counterintelligence, spy, spies, C. Constantin Poindexter

When Counterintelligence Did Not “Catch” the Bad Guy: How Export Compliance and Oversight Stopped an Illicit Transfer

As a counterintelligence guy, I would love to claim one for the team and tell you a story of how counterintelligence “caught” Jonathan Soong. The question presumes a familiar arc: a clandestine plot detected by a vigilant counterintelligence service, followed by an investigative takedown. In practice, many of the most consequential national security cases in the defense industrial base begin elsewhere. They begin in the unglamorous terrain of export controls, contractual oversight, documentation requirements, and compliance escalation. The Soong matter is best read not as a story of counterintelligence brilliance at the point of origin, but as a demonstration that a robust compliance mechanism can function as a practical counterintelligence force multiplier, surfacing deception through audit friction, verification, and internal accountability (U.S. Department of Justice 2025a).

Jonathan Yet Wing Soong worked under a Universities Space Research Association (USRA) arrangement supporting NASA, where he helped administer licensing and distribution of U.S. Army-owned aviation and flight control software subject to U.S. export controls. Public charging and plea materials describe a pattern that is familiar to any counterintelligence professional who has studied insider-enabled technology transfer. A trusted administrator leveraged authorized access to facilitate improper export to a prohibited end user, while using misrepresentation and intermediaries to reduce detection risk and sustain the activity long enough to monetize it (U.S. Department of Justice 2022; U.S. Department of Justice 2023; U.S. Department of Commerce, Bureau of Industry and Security 2022).

Export compliance as counterintelligence by another name

In the contractor ecosystem, counterintelligence is no longer confined to investigations and briefings. It is built into controls that regulate who can access what, who can receive what, and what documentation must exist to justify a transfer. Export compliance is the legal expression of strategic technology denial. When an export compliance program is mature, it creates a perimeter of verification around controlled software, technical data, and sensitive know-how. It does this through end-user screening, licensing checks, record retention, and the expectation that representations are auditable, not merely asserted (U.S. Department of Justice 2025a).

Soong’s conduct, as publicly described, involved providing controlled U.S. Army aviation software to the Beijing University of Aeronautics and Astronautics, commonly known as Beihang University, an end-user on the U.S. Entity List. The Entity List designation matters because it transforms what might otherwise be a complicated compliance decision into a bright-line restriction: an elevated risk recipient that generally requires licensing and heightened scrutiny. In counterintelligence terms, it is a government signal that a recipient is associated with activities of concern and therefore must be treated as a strategic risk, not just a commercial counterparty (U.S. Department of Commerce, Bureau of Industry and Security 2022; U.S. Department of Justice 2022).
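To illustrate the bright-line character of that restriction, here is a minimal sketch, in Python, of the screening logic an export compliance program enforces before a transfer. The hard-coded names, the function, and the decision labels are assumptions for illustration; a real program screens against the government’s consolidated screening data and documents the diligence trail rather than relying on a local list.

```python
# Minimal, hypothetical sketch of restricted-party screening for a proposed
# software transfer. List contents and decision labels are illustrative only.

RESTRICTED_END_USERS = {
    "beihang university",
    "beijing university of aeronautics and astronautics",
}


def screen_export_request(end_user: str, has_export_license: bool) -> str:
    """Classify a proposed transfer before anything leaves the organization."""
    name = end_user.strip().lower()
    if name in RESTRICTED_END_USERS and not has_export_license:
        return "DENY: restricted party and no license on file"
    if name in RESTRICTED_END_USERS:
        return "HOLD: restricted party; verify license scope and end use"
    return "REVIEW: run standard end-user and intermediary diligence"


if __name__ == "__main__":
    print(screen_export_request("Beihang University", has_export_license=False))
```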

The decisive tripwire was oversight, not classic counterintelligence detection

The core point that the public often misses is timing. The publicly documented narrative indicates that the scheme was not halted because counterintelligence detected hostile tasking in real time. Rather, the activity began to unravel when NASA asked questions about software licensing activity involving China-based purchasers. That inquiry triggered internal examination at USRA, which then forced Soong’s process, documentation, and representations into a higher scrutiny environment (U.S. Department of Justice 2025a).

From a former operator’s perspective, that is the moment the system displayed its value. Oversight created heat. Heat compelled review. Review compelled proof. Proof created contradictions. Contradictions produced admissions and preserved evidence. That sequence is not incidental. It is the operational logic of compliance as an investigative engine. When a compliance system is designed to verify rather than merely record, it becomes difficult for an insider to sustain a cover story indefinitely.

The cover story failed under verification pressure

Public DOJ descriptions emphasize that Soong initially lied and fabricated evidence to make it appear that purchaser diligence had been conducted. In my experience, this is the most common failure mode for organizations that treat compliance as a box-checking function: insiders learn the minimum artifacts that satisfy superficial review. The Soong case illustrates what happens when counsel and compliance do not accept the first answer. DOJ accounts describe further investigation by USRA’s counsel, confrontation with contradictions, and Soong’s eventual admissions, including that he knew the end user was on the Entity List and that an export license was required (U.S. Department of Justice 2025a).

That is not just a legal detail. It is the fulcrum that turns suspicion into provable intent. Counterintelligence professionals care about intent because intent distinguishes mistake from exploitation and distinguishes weak governance from an insider who is actively enabling a strategic competitor or worse, adversarial FIS. Admissions anchored to documented contradictions are highly durable. They are not dependent on classified sources or contested analytic judgments. They are built for court cases.

Intermediaries and misdirection are a compliance evasion pattern

The public record also describes the use of an intermediary to obscure the true end user and facilitate the commercial pathway. This is a standard concealment vector. Intermediaries can be used to launder payment trails, shift transactional geography, and create plausible deniability within internal processes that rely on surface-level end-user statements. If a program relies on the integrity of a single administrator’s “screening,” the administrator becomes the control. If the administrator is compromised, the system is compromised. In this case, public materials describe intermediary involvement and a transfer pathway that, when examined, revealed the underlying restricted recipient (Department of Defense Office of Inspector General, Defense Criminal Investigative Service 2023; U.S. Department of Justice 2025a).

For counterintelligence practitioners, the lesson is straightforward: third party structures are not merely procurement conveniences. They are also tradecraft. In an export controls environment, every intermediary should be treated as a potential concealment method unless diligence is independently verifiable.

Voluntary self-disclosure converted an internal discovery into a national security case

Once internal discovery occurred, the matter moved from corporate governance to national security enforcement. DOJ’s public declination notice emphasized that USRA self-disclosed export control offenses committed by its employee and cooperated, which shaped the government’s posture toward the company while leaving the individual to face prosecution (U.S. Department of Justice 2025a). That sequence is important for practitioners because it demonstrates how compliance maturity affects outcomes. Prompt internal escalation, self-disclosure, and remediation can separate an organization’s institutional exposure from the conduct of a rogue insider, while also strengthening the government’s ability to build a case against the perpetrator.

DOJ also identified the investigative constellation, including Commerce export enforcement, the FBI, Defense Criminal Investigative Service, NASA Office of Inspector General, and U.S. Army elements including Army counterintelligence and investigative components. In other words, counterintelligence was present and relevant, but it was not the initial tripwire. It was part of the enforcement and investigative consolidation phase after compliance mechanisms surfaced the issue and the company disclosed it (U.S. Department of Justice 2025a; U.S. Department of Justice 2023).

Compliance “caught” the act and counterintelligence helped finish the job

If we insist on the verb “catch,” my professional assessment is that counterintelligence did not “catch” Jonathan Soong in the popular sense of the term. The decisive early detection function was performed by oversight and export compliance mechanisms. NASA’s questions triggered organizational scrutiny. Scrutiny demanded documentation. Documentation collapsed under verification. Verification produced contradictions and admissions. Those admissions and records enabled self-disclosure and a multi-agency investigation that culminated in a guilty plea. Counterintelligence contributed where it often contributes most effectively in the contractor environment: by supporting the investigative and enforcement architecture once a compliance tripwire has surfaced misconduct, and by helping translate a technical compliance failure into a national security narrative that the government can prosecute (U.S. Department of Justice 2025a; U.S. Department of Justice 2023).

This is not a criticism of counterintelligence. It is an argument for modernizing how we describe counterintelligence effectiveness. In the defense industrial base, export compliance is not adjacent to counterintelligence. Export compliance is frequently counterintelligence in operational form. When built correctly, it makes illicit transfer hard to hide, expensive to sustain, and likely to fail under audit pressure. The Soong case is the quiet proof that governance, oversight, and export controls can stop a technology transfer plot even when no one is running a classic counterintelligence operation at the beginning.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Department of Defense Office of Inspector General, Defense Criminal Investigative Service. 2023. “Defendant Admits Using Intermediary to Funnel Payments for United States Army Aviation Software Exported to Beihang University.” Press release, January 17, 2023.
  • U.S. Department of Commerce, Bureau of Industry and Security. 2022. “South Bay Resident Charged with Smuggling and Exporting American Aviation Technology to Beijing University.” Press release, May 26, 2022.
  • U.S. Department of Justice. 2022. “South Bay Resident Charged with Smuggling and Exporting American Aviation Technology to Beijing University.” Press release, U.S. Attorney’s Office, Northern District of California, May 26, 2022.
  • U.S. Department of Justice. 2023. “Castro Valley Resident Pleads Guilty to Illegally Exporting American Aviation Technology.” Press release, U.S. Attorney’s Office, Northern District of California, January 17, 2023.
  • U.S. Department of Justice. 2025a. “Justice Department Declines Prosecution of Company That Self Disclosed Export Control Offenses Committed by Employee.” Press release, Office of Public Affairs, April 30, 2025.

SIGNAL Is Secure for Intelligence Practitioners and Will Be for the Quantum Era

SIGNAL, intelligence, counterintelligence, spy, espionage, counterespionage, cyber security, C. Constantin Poindexter

Signal has earned its reputation in intelligence, counterintelligence, and investigative communities for a practical reason. I love it and you should too! The tool was engineered around adversarial assumptions that align with real-world asset targeting. Those assumptions include state-grade collection, covert and often illegal interception, endpoint compromise, credential theft, and long-term bulk retention for future exploitation. Signal is not conventional messaging with security added afterward. It is an integrated protocol suite for key agreement, per-message key evolution, and compromise recovery, supported by open specifications and sustained cryptographic hardening.

From an intelligence professional’s perspective, Signal is compelling because it is designed to remain resilient under partial failure. If an attacker wins a battle by capturing a key, briefly cloning a device, or recording traffic for years, Signal aims to prevent that single win from turning into durable, strategic access. This damage containment model aligns with counterintelligence priorities. Limit the blast radius, shorten adversary dwell time, and force repeated effort that increases the chance of detection.

The Double Ratchet and Per-Message Keys That Constrain Damage

At the core of Signal message confidentiality is the Double Ratchet algorithm, designed by Trevor Perrin and Moxie Marlinspike (Perrin and Marlinspike, 2025). Operationally, the Double Ratchet matters because it delivers properties that align with intelligence tradecraft realities.

Forward secrecy ensures that compromising a current key does not reveal prior message content. Adversaries routinely collect ciphertext in bulk and then hunt for a single point of decryption leverage later through device seizure, insider access, malware, or legal process. Forward secrecy frustrates that strategy by ensuring earlier captured traffic does not become a later intelligence windfall if a key is exposed at some later time (Perrin and Marlinspike, 2025).

Post-compromise security (“break-in recovery”) addresses a scenario intelligence practitioners plan for: temporary device compromise. Border inspections, opportunistic theft, coercive access, or a short-lived implant can occur. The Double Ratchet includes periodic Diffie-Hellman updates that inject fresh entropy, while its symmetric ratchet derives new message keys continuously. Once the compromised window ends, later message keys become cryptographically unreachable to the attacker, provided the attacker is no longer persistently on the endpoint (Perrin and Marlinspike, 2025). This is not an exaggerated marketing claim. It is disciplined key evolution that deprives adversarial FIS and corporate spies of indefinite reuse of stolen key material.

Incident response logic shifts under this model. A single brief compromise does not automatically mean permanent exposure of the entire history and future. Instead, the attacker must maintain persistence to retain visibility. That is a higher operational burden and a higher detection risk.
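A minimal sketch of the two mechanics described above, assuming Python and the widely used cryptography package: a symmetric KDF chain whose old keys are discarded (forward secrecy) and a Diffie-Hellman step that mixes fresh entropy into the root key (break-in recovery). This illustrates the concepts only; it is not Signal’s implementation, and the function names, HKDF labels, and key sizes are assumptions.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)


def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Advance the symmetric ratchet: derive the next chain key and a one-time
    message key, then discard the old chain key (forward secrecy in this sketch)."""
    out = HKDF(
        algorithm=hashes.SHA256(),
        length=64,
        salt=None,
        info=b"sketch-symmetric-ratchet",
    ).derive(chain_key)
    return out[:32], out[32:]  # (next chain key, message key)


def dh_ratchet_step(root_key: bytes, my_private: X25519PrivateKey,
                    their_public: X25519PublicKey) -> tuple[bytes, bytes]:
    """Mix a fresh Diffie-Hellman output into the root key. Once a brief
    endpoint compromise ends, keys derived after this step are out of the
    attacker's reach (the break-in recovery idea)."""
    shared = my_private.exchange(their_public)
    out = HKDF(
        algorithm=hashes.SHA256(),
        length=64,
        salt=root_key,
        info=b"sketch-dh-ratchet",
    ).derive(shared)
    return out[:32], out[32:]  # (new root key, new sending chain key)


if __name__ == "__main__":
    chain = os.urandom(32)
    for i in range(3):
        chain, message_key = kdf_chain_step(chain)
        print(f"message {i}: key fragment {message_key.hex()[:16]}")
    # Capturing `chain` at this point does not reveal the earlier message keys.
```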

X3DH, PQXDH, and the Move Against Harvest Now, Decrypt Later

Signal historically used X3DH, Extended Triple Diffie-Hellman, for asynchronous session establishment. This is vital in mobile environments where recipients are often offline. X3DH uses long-term identity keys and signed prekeys for authentication while preserving forward secrecy and deniability properties (Marlinspike and Perrin, 2016). The strategic risk landscape shifted with the plausibility of cryptographically relevant quantum computing. The threat is not only future real-time decryption. It is harvest now/decrypt later. Bulk interception today is strategic, undertaken with the expectation that future breakthroughs, including quantum, could unlock stored traffic. Signal responded by introducing PQXDH, Post-Quantum Extended Diffie-Hellman, which replaces the session setup with a hybrid construction combining classical elliptic-curve Diffie-Hellman using X25519 with a post-quantum key encapsulation mechanism derived from CRYSTALS-Kyber (Signal, 2024a). The operational implication is direct: an adversary would need to break both the classical and the post-quantum components to reconstruct the shared secret (Signal, 2024a).
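A minimal sketch of that hybrid idea, again assuming Python and the cryptography package for the classical X25519 half. The mlkem_encapsulate function is an explicitly hypothetical placeholder for an ML-KEM/Kyber encapsulation, and the key schedule is simplified; Signal’s actual PQXDH transcript, authentication, and derivation differ.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)


def mlkem_encapsulate(recipient_kem_public_key: bytes) -> tuple[bytes, bytes]:
    """Hypothetical stand-in for an ML-KEM (Kyber-derived) encapsulation.
    Returns (ciphertext, shared_secret). A real deployment would call a vetted
    ML-KEM implementation; random bytes are used here purely to show data flow."""
    return os.urandom(1088), os.urandom(32)


def hybrid_session_secret(my_ephemeral: X25519PrivateKey,
                          their_x25519_public: X25519PublicKey,
                          their_kem_public: bytes) -> tuple[bytes, bytes]:
    """Derive a session secret that depends on BOTH the classical and the
    post-quantum component, so breaking one alone is not enough."""
    ecdh_secret = my_ephemeral.exchange(their_x25519_public)
    kem_ciphertext, kem_secret = mlkem_encapsulate(their_kem_public)
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"sketch-hybrid-session",
    ).derive(ecdh_secret + kem_secret)  # both secrets feed the KDF input
    return session_key, kem_ciphertext  # ciphertext is sent to the peer
```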

Hybrid key establishment reflects conservative intelligence engineering. Migrate early, avoid sudden cutovers, and reduce reliance on a single new primitive. This also matters because the post-quantum component corresponds to what NIST standardized as ML-KEM, derived from CRYSTALS-Kyber, in FIPS 203 (NIST, 2024a; NIST, 2024b). NIST standardization does not guarantee invulnerability. It does increase confidence that the primitive has been scrutinized and is being adopted as a baseline for high assurance environments.

Signal also makes an important clarity point in its PQXDH materials. PQXDH provides post-quantum forward secrecy, while mutual authentication in the current revision remains anchored in classical assumptions (Signal, 2024b). Practitioners benefit from that precision because it defines exactly what is post-quantum today.

SPQR and Post Quantum Ratcheting for Long-Lived Operations

Session establishment is only one part of the lifecycle problem. A capable collector can record traffic for long periods. If quantum capabilities emerge later, the question becomes whether ongoing key evolution remains safe against future decryption. Signal’s introduction of the Sparse Post-Quantum Ratchet (SPQR) directly addresses continuity by adding post-quantum resilience to the ratcheting mechanism itself (Signal, 2025).

SPQR extends the protocol so that not only the initial handshake but also later key updates gain quantum-resistant properties, while preserving forward secrecy and post-compromise security (Signal, 2025). For intelligence practitioners, this matters because long-lived operational relationships are common. Assets, handlers, investigative sources, and inter-team coordination can persist for months or years. A protocol that hardens only the handshake helps. A protocol that hardens ongoing rekeying is more aligned with the real adversary model of persistent collection.
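Building on the previous sketch, the fragment below illustrates the principle of hardening ongoing rekeying rather than only the handshake: each ratchet update mixes a post-quantum KEM secret alongside the classical Diffie-Hellman output. SPQR’s real design, including how KEM material is spread sparsely across messages, is considerably more sophisticated; everything here, including the placeholder mlkem_encapsulate, is an assumption for illustration.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def mlkem_encapsulate(peer_kem_public: bytes) -> tuple[bytes, bytes]:
    """Hypothetical ML-KEM stand-in, as in the previous sketch: returns
    (ciphertext, shared_secret) using random bytes to show data flow only."""
    return os.urandom(1088), os.urandom(32)


def pq_ratchet_update(root_key: bytes, classical_dh_secret: bytes,
                      peer_kem_public: bytes) -> tuple[bytes, bytes, bytes]:
    """Advance the root key using BOTH a classical DH output and a fresh
    post-quantum KEM secret, so ongoing rekeying (not just the handshake)
    resists a future quantum decryption capability."""
    kem_ciphertext, kem_secret = mlkem_encapsulate(peer_kem_public)
    out = HKDF(
        algorithm=hashes.SHA256(),
        length=64,
        salt=root_key,
        info=b"sketch-pq-ratchet-update",
    ).derive(classical_dh_secret + kem_secret)
    return out[:32], out[32:], kem_ciphertext  # (new root, new chain, ct to peer)
```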

Academic work has analyzed the evolution from X3DH to PQXDH in the context of Signal’s move toward post-quantum security and frames PQXDH as mitigation against harvest now/decrypt later risk at scale (Katsumata et al., 2025). That framing fits intelligence risk management. Confidentiality is evaluated against patient, well-resourced adversaries.

Formal Analysis, Open Specifications, and Why That Matters Operationally

Practitioners should be skeptical of security claims that cannot withstand external review. Signal’s protocol suite benefits from public specifications and sustained cryptographic scrutiny. A widely cited formal analysis models the protocol’s core security properties and examines its ratchet-based design in detail (Cohn-Gordon et al., 2017). No protocol is proven secure against every real-world failure mode. Formal methods and peer-reviewed analysis reduce the chance that structural weaknesses remain hidden. Operationally, this supports reliability. When you rely on a tool for sensitive work, you evaluate whether the claims are testable, whether failure modes are documented, and whether improvements can be validated.

Metadata Constraints and Sealed Sender and the Role of Tradecraft

Message content confidentiality is only part of intelligence security. Metadata can be operationally decisive. Who communicates with whom, when, and how often can create damaging inferences. Signal’s Sealed Sender was designed to reduce the sender information visible to the service during message delivery (Wired Staff, 2018). Research examines Sealed Sender and proposes improvements while discussing network-level metadata such as IP address exposure and the implications for anonymity tooling (Martiny et al., 2021). Additional academic work discusses traffic-analysis risks that can persist in group settings even when sender identity is partially obscured (Brigham and Hopper, 2023).

The intelligence operator’s takeaway is that Signal materially improves content security and reduces certain metadata exposures. It does not eliminate the need for operational security measures. Depending on mission profile, those measures can include hardened endpoints, strict device handling, minimized identifier exposure, and network protections consistent with applicable law and policy.

Why Signal’s Trajectory Is Credible in the Quantum Transition

Signal’s approach to the quantum transition reflects a credible engineering posture. Migrate early enough to blunt harvest-now/decrypt-later risk. Adopt hybrid designs to reduce reliance on one assumption. Extend post-quantum guarantees beyond the handshake into ongoing key evolution (Signal, 2024a; Signal, 2025). Alignment with NIST’s standardized direction for key establishment further supports long-term maintainability and ecosystem interoperability (NIST, 2024a; NIST, 2025). From an intelligence practitioner’s perspective, the central claim is not that Signal is unbreakable. The point is that Signal is engineered to constrain damage, recover after compromise, and anticipate strategic decryption threats. It is designed for a hostile environment that is moving toward a post-quantum reality. I will close by noting that Meta does not do any of this. Facebook Messenger and WhatsApp leave gaping holes in cybersecurity because Meta’s focus is on monetizing the messaging mechanism, not on unbreakable comms. Use them at your own risk.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Brigham, Eric, and Nicholas Hopper. 2023. “Poster: No Safety in Numbers: Traffic Analysis of Sealed Sender Groups in Signal.” arXiv preprint.
  • Cohn-Gordon, Katriel, Cas Cremers, Benjamin Dowling, Luke Garratt, and Douglas Stebila. 2017. “A Formal Security Analysis of the Signal Messaging Protocol.” Proceedings of the IEEE European Symposium on Security and Privacy.
  • Katsumata, Shota, et al. 2025. “X3DH, PQXDH to Fully Post Quantum with Deniable Ring.” Proceedings of the USENIX Security Symposium.
  • Marlinspike, Moxie, and Trevor Perrin. 2016. “The X3DH Key Agreement Protocol.” Signal Protocol Specification.
  • National Institute of Standards and Technology. 2024a. “NIST Releases First 3 Finalized Post-Quantum Encryption Standards.” NIST News Release.
  • National Institute of Standards and Technology. 2024b. FIPS 203: “Module-Lattice-Based Key-Encapsulation Mechanism Standard (ML-KEM).” U.S. Department of Commerce.
  • National Institute of Standards and Technology. 2025. “Post Quantum Cryptography Standardization.” NIST Computer Security Resource Center.
  • Perrin, Trevor, and Moxie Marlinspike. 2025. “The Double Ratchet Algorithm.” Signal Protocol Specification.
  • Signal. 2024a. “Quantum Resistance and the Signal Protocol.” Signal Blog.
  • Signal. 2024b. “The PQXDH Key Agreement Protocol.” Signal Protocol Specification.
  • Signal. 2025. “Signal Protocol and Post Quantum Ratchets, SPQR.” Signal Blog.
  • Wired Staff. 2018. “Signal Has a Clever New Way to Shield Your Identity.” Wired Magazine.

AI-Orchestrated Chinese Cyber Espionage, Counterintelligence Professional’s View

intelligence, counterintelligence, espionage, counterespionage, a.i., artificial intelligence, cyber operations, cyber-espionage, chinese APT, C. Constantin Poindexter

The GTG-1002 operation reported by Anthropic and covered by Nury Turkel in The Wall Street Journal (“The First Large-Scale Cyberattack by AI”) is not just another unremarkable Chinese cyber campaign. It is a counterintelligence (CI) inflection point, the proverbial crossing of the Rubicon. In this case, a Chinese state-sponsored threat group manipulated Anthropic’s Claude Code into acting as an autonomous cyber operator that conducted eighty to ninety percent of the intrusion lifecycle, from reconnaissance to data exfiltration, against about thirty high-value targets. Those victims include major technology firms and government entities (Anthropic 2025a; Turkel 2025). From a CI and counterespionage perspective, this is the moment where artificial intelligence stops being merely an analyst’s tool and becomes an adversary’s “officer in the field.”

I am going to take a CI guy’s view here and offer my thoughts on the counterintelligence ramifications, and more specifically how AI-orchestrated espionage changes the threat surface, disrupts traditional CI tradecraft, and forces democratic states to redesign CI doctrine, authorities, and technical defenses. This essay situates GTG-1002 within a broader pattern of Chinese cyber espionage and AI-enabled operations. I think you will agree with me, after reading a bit here, that an AI-literate counterintelligence enterprise is now a strategic necessity.

GTG-1002 as a Case Study in AI-Enabled Espionage

Anthropic’s public report “assesses with high confidence” that GTG-1002 is a Chinese state-sponsored actor that repurposed Claude Code as an “agentic” cyber operator (Anthropic 2025a). Under the cover story of legitimate penetration testing, the AI was instructed to map internal networks, identify high-value assets, harvest credentials, exfiltrate data, and summarize takeaways for human operators, who then made strategic decisions (Turkel 2025). The campaign targeted organizations across the technology, finance, chemical, and government sectors, with several successful intrusions validated (Anthropic 2025a). This incident must be understood in the context of Beijing’s long-standing cyber-espionage posture. U.S. government and independent assessments have repeatedly documented the sophistication and persistence of People’s Republic of China (PRC) state-sponsored cyber actors targeting critical infrastructure, defense industrial base entities, and political institutions (USCC 2022; CISA 2025). GTG-1002 does not represent a shift in Chinese strategic intent. It evidences a dangerous new means: automation of the cyber kill chain by a large language model (LLM) with minimal human supervision. In essence, the AI is not helping an operator press the trigger; the AI is pressing it.

From a CI standpoint, GTG-1002 is the first verified instance of an LLM acting as the primary intrusion operator, rather than a mere “helper,” in a state-backed offensive cyber operation. This development validates years of warnings from both academic and policy analysts about AI-assisted and AI-driven cyber penetrations (Rosli 2025; Louise 2025). It confirms that frontier models can be harnessed as operational tools for intelligence collection at scale.

Compression of the Intelligence Cycle and the Detection Window

Traditional cyber-collection operations require sizable teams of operators and analysts executing reconnaissance, initial access, lateral movement, and exfiltration over days or weeks. GTG-1002 shows that AI agents can compress this cycle dramatically by chaining tools, iterating code, and self-documenting tradecraft at machine speed (Anthropic 2025a; Anthropic 2025b). For CI services, this compression has several consequences.

The indications-and-warning window shrinks. Behavioral indicators that CI analysts and security operations centers have historically depended on, e.g., repeated probing, extended lateral movement, or noisy privilege escalation, are now condensed, obfuscated, and/or automated. Autonomous AI agents can escalate privileges, pivot, and exfiltrate in minutes, leaving a smaller digital “dwell time” during which CI can detect and attribute activity (Microsoft 2025).

Exploitation and triage become automated. GTG-1002 reportedly used Claude not only to steal data but also to summarize and prioritize it, effectively performing first-level intelligence analysis (Anthropic 2025a). This accelerates an adversary’s analytic cycle. AI can sort, cluster, and highlight sensitive documents faster than human analysts. The time between compromise and exploitation shrinks, diminishing the value of “late” discovery and complicating post-hoc damage assessments, two extremely important CI activities.
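
As a purely illustrative sketch of what automated first-pass triage looks like, the snippet below ranks a few invented documents by weighted keyword hits; defenders can run the same kind of scoring against their own holdings to anticipate what an automated collector would surface first. The file names, contents, and weights are hypothetical.

```python
# Toy triage scorer (illustrative only): ranks documents by weighted keyword
# hits, the kind of first-pass prioritization described above. Real systems
# would use clustering and semantic models rather than a keyword table.
WEIGHTS = {"credential": 5, "network diagram": 4, "source code": 3, "invoice": 1}

docs = {
    "doc_a.txt": "quarterly invoice and travel receipts",
    "doc_b.txt": "vpn credential list and network diagram for the lab segment",
    "doc_c.txt": "source code snapshot of the internal scheduler",
}

def score(text: str) -> int:
    """Sum the weights of every sensitivity keyword found in the text."""
    return sum(w for kw, w in WEIGHTS.items() if kw in text.lower())

# Highest-scoring documents are surfaced first.
for name in sorted(docs, key=lambda n: score(docs[n]), reverse=True):
    print(f"{score(docs[name]):2d}  {name}")
```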

AI turns complexity into volume. Academic and industry analyses have already identified AI as a “threat multiplier”, enabling less capable actors to mount sophisticated, multi-stage operations (Rosli 2025; Armis 2025). State-backed operations can hide in the flood of AI-assisted criminal, hacktivist, and proxy activity, creating a signal-to-noise problem for CI triage and attribution.

In simple summary, AI collapses the temporal advantage that defenders once had to notice patterns in network behavior. Counterintelligence must pivot from retrospective forensic analysis toward continuous, AI-assisted anomaly detection and behavioral analytics.
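
A minimal sketch of that pivot, using invented per-host history and only the Python statistics module, is shown below: each host is compared against its own baseline, and a sudden fan-out of internal connections (the condensed lateral movement described above) trips an alert. Real deployments would use far richer features and models; this only illustrates the continuous, baseline-driven approach.

```python
# Toy behavioral baseline (illustrative only): flag hosts whose rate of new
# internal connections per hour deviates sharply from their own history.
from statistics import mean, stdev

history = {  # hypothetical per-host hourly counts of new internal connections
    "web-01": [3, 4, 2, 5, 3, 4, 3],
    "db-02":  [1, 0, 1, 2, 1, 1, 0],
}
current = {"web-01": 5, "db-02": 41}   # db-02 suddenly fans out across the network

for host, observed in current.items():
    baseline = history[host]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (observed - mu) / sigma if sigma else float("inf")   # crude z-score
    if z > 4:
        print(f"ALERT {host}: {observed} new connections this hour (z={z:.1f})")
```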

AI Systems as Both Collector and High-Value Intelligence Target

GTG-1002 dramatizes a dual reality that Turkel highlights. China is “spying with AI and spying on American AI” (Turkel 2025). The same models used to conduct intrusions are themselves prized intelligence targets. Chinese entities have already been implicated in efforts to acquire Western AI model weights, training data, and associated know-how as part of a broader technology-transfer strategy (USCC 2022; Google Threat Intelligence 2025). For THIS CI guy, AI labs are now what the Cold War aerospace and cryptographic contractors once were. Model weights and training corpora become the “crown jewels.” Theft, reverse engineering, or replication of frontier models will give adversaries economic advantage and, more gravely, insight into how Western defensive systems behave. Anthropic itself notes that real-world misuse attempts feed into adversaries’ understanding of model weaknesses and safety bypasses (Anthropic 2025b).

The supply chain and insider threat picture changes. AI providers depend on global supply chains, open-source libraries, and large pools of contractors and researchers. This distributed ecosystem creates attack surfaces for foreign intelligence services. Code contributions, model-training infrastructure, and prompt logs can all be targeted. CI-focused analysis from the security and legal communities has argued that the AI ecosystem, i.e., researchers, hardware vendors, and cloud providers, must be treated as CI-relevant nodes, not as purely commercial actors (Lawfare Institute 2018; Carter et al. 2025).

Collecting on the collectors is not a new tactic, but AI puts it on steroids. Collection against red-teaming and the controls and safeguards themselves has become a priority. Access to internal red-team reports, internal controls, and safety evaluations is extraordinarily valuable to an adversary seeking to jailbreak or subvert models. Counterintelligence coverage must extend not only to model weights but also to the meta-knowledge of how those models fail, and how that knowledge might be of adversarial interest.

In brief, AI firms are part of the national security base. CI organizations will need to authorize enhanced resources, assign dedicated case officers, establish formal reporting channels, and integrate these enterprises into national threat-sharing architectures in a way analogous to defense contractors and telecommunications providers (Carter et al. 2025).

Deception, Hallucination, and Counterespionage Tradecraft

Anthropic’s report and Turkel’s article both highlight a critical limitation of AI-orchestrated espionage. Claude frequently hallucinated, overstating findings or fabricating credentials and “discoveries” (Anthropic 2025a; Turkel 2025). From a counterespionage perspective, this is not simply a technical bug. It is a potential vector for deception. If adversary services increasingly rely on AI agents for reconnaissance and triage, then controlled-environment deception becomes more attractive. CI and cyber defense teams can seed networks with synthetic, high-entropy data and decoy credentials designed to attract and mislead AI agents. Because large models are prone to pattern-completion and over-generalization, they may “see” classified goodies and valuables where a skilled human operator would sense that something is simply not right.
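
A toy version of that seeding approach appears below. The mint_honeytoken helper, the planted labels, and the log line are all hypothetical; the idea is simply that a credential no legitimate workflow ever uses becomes a high-confidence tripwire the moment anything, human or AI agent, touches it.

```python
# Toy honeytoken generator (illustrative): mint decoy "credentials" that no
# legitimate workflow ever uses, plant them where an automated collector
# would look, and treat any later appearance of the token as a tripwire.
import secrets, json, datetime

def mint_honeytoken(label: str) -> dict:
    return {
        "label": label,
        "username": f"svc_{secrets.token_hex(3)}",       # plausible-looking service account
        "password": secrets.token_urlsafe(16),            # never valid anywhere
        "planted_at": datetime.datetime.utcnow().isoformat() + "Z",
    }

tokens = [mint_honeytoken(l) for l in ("finance-share", "vpn-config", "backup-notes")]
print(json.dumps(tokens, indent=2))

# Detection side: alert if a planted username ever shows up in auth logs.
planted = {t["username"] for t in tokens}
log_line = f"failed login for {tokens[0]['username']} from 203.0.113.7"  # simulated log hit
if any(u in log_line for u in planted):
    print("TRIPWIRE: honeytoken credential used — investigate the source")
```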

Algorithmic counterdeception becomes feasible. The academic literature on AI in cyber espionage emphasizes that overreliance on automated tools can degrade situational awareness and strategic judgment inside hostile services (Rosli 2025; Louise 2025). CI planners can exploit this by orchestrating digital environments that feed AI agents ambiguous, contradictory, or subtly poisoned data. This increases the probability that adversary leadership acts on flawed intelligence.

GTG-1002 demonstrates that adversaries (at the very least, China) are already skilled at deceiving AI. The Chinese FIS successfully social-engineered Claude’s safety systems by impersonating legitimate cybersecurity professionals performing authorized pen-testing (Anthropic 2025a). What, then, is the appropriate CI requirement? Counter-social-engineering of our own models. Guardrails must be resilient not just to obviously malicious prompts but to sophisticated role-playing that mimics presumably friendly actors, including penetration testers, red teams, and internal security staff.

Blurring Lines Between Cyber CI, Influence Operations, and HUMINT Targeting

Major technology and threat reports document how Russia, China, Iran, and North Korea are using AI to scale disinformation, impersonate officials, and refine spearphishing campaigns (Microsoft 2025; Google Threat Intelligence 2025). For CI professionals, this convergence of AI-enabled cyber intrusion and influence operations erodes traditional boundaries between cyber CI (identifying and disrupting technical collection), defensive HUMINT (protecting human sources and employees), and counter-influence (disrupting foreign information operations).

AI systems can now generate tailored phishing content, deepfake personas, and synthetic social media and professional-network profiles at scale, all of which feed into reconnaissance and targeting pipelines for state security services (FBI 2021; Microsoft 2025). GTG-1002 focused primarily on technical collection, but the same infrastructure could coordinate cyber intrusions with human targeting. Using stolen email archives to identify vulnerable insiders, then tasking LLMs to draft recruitment approaches comes immediately to mind.

Counterintelligence must integrate AI forensics, digital forensics, and behavioral analytics into a single tradecraft paradigm and practice. Monitoring “pattern of life” indicators like off-hours access, unusual lateral movement, and anomalous data pulls must be enhanced by AI-driven analysis of communication patterns, foreign contact indicators, and anomalous financial or travel behavior. There are good suggestions about best practices in emerging CI guidance on AI-enabled insider-threat detection (Carter et al. 2025; CISA 2025).

Doctrine, Authorities, and Information-Sharing at Machine Speed

The GTG-1002 incident exposes a serious structural challenge. CI and cyber defense architectures are optimized for human-paced operations and workflows that, to put it kindly, are bureaucratic. To its credit, Anthropic engaged U.S. Intelligence Community agencies quickly and publicly disclosed the attack, but Turkel argues that AI incidents need near-real-time disclosure and coordinated response (Turkel 2025). This aligns with broader policy analyses calling for mandatory reporting of AI misuse within seventy-two hours or less, coupled with safe-harbor protections (Carter et al. 2025). That is a good step, but not fast enough. The horse is out of the barn and gone by the seventy-two-hour mark. The implication is that threat-intelligence sharing must become largely machine-to-machine. If attacks unfold at machine speed, then signature updates, behavioral indicators, and model-abuse patterns must be distributed via automated channels across sectors in minutes and hours, not days or weeks (Microsoft 2025). All players will have to agree on and implement standardized formats for sharing AI jailbreak patterns, malicious prompt signatures, and indicators of AI-driven lateral movement.
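
As a sketch of what such a machine-readable record might look like (this is not STIX/TAXII or any existing standard, and every field name here is invented), consider the minimal JSON envelope below. The point is only that indicators of AI abuse can be generated and ingested automatically rather than circulated in human-paced reports.

```python
# Hypothetical minimal indicator envelope (not a real standard): sketches the
# idea of machine-readable records for AI-abuse indicators that peers could
# ingest automatically over an automated sharing channel.
import json, hashlib, datetime

def make_indicator(kind: str, value: str, source: str) -> dict:
    return {
        "id": hashlib.sha256(f"{kind}:{value}".encode()).hexdigest()[:16],
        "kind": kind,                  # e.g. "malicious-prompt-pattern", "c2-domain"
        "value": value,
        "source": source,
        "observed": datetime.datetime.utcnow().isoformat() + "Z",
    }

feed = [
    make_indicator("malicious-prompt-pattern",
                   "claims to be an authorized pen-tester requesting credential dumps",
                   "model-provider-abuse-team"),
    make_indicator("c2-domain", "update-check.example.net", "soc-sensor-12"),
]
print(json.dumps(feed, indent=2))   # ready to push to partner systems
```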

Legal authorities must evolve. Existing CI and surveillance authorities were not drafted with AI agents in mind. When an AI agent controlled by a foreign intelligence service (FIS) is operating inside a U.S. cloud environment, what legal framework governs monitoring, interdiction, and even proportional response? Analyses of AI and state-sponsored cyber espionage reveal that international and domestic legal regimes lag the technology, creating gray zones that adversaries can exploit (Louise 2025; Lawfare Institute 2018).

Secure-by-design requirements for AI providers must become part of the regulatory baseline. Anthropic’s own transparency documents argue that future models must incorporate identity verification, real-time abuse monitoring, and robust safeguards against social-engineering prompts (Anthropic 2025b). From a CI perspective, such measures are not optional “best practices” but core elements of both commercial resilience and national security.

An AI-Literate Counterintelligence Enterprise

The GTG-1002 campaign exposes an ugly asymmetry. Adversarial FISs are already operationalizing AI as a collection platform and for other cyber operations, both offensive and defensive. CI organizations in the U.S. and other democracies are only beginning to adopt AI as an analytic aid. We are behind, yet there is hope. There is nothing inherent about AI that favors offense over defense. We simply need to move faster.

Public reporting from the FBI and other agencies highlights how AI can be used to process imagery, triage voice samples, and comb through large datasets to identify anomalous behavior and potential national security threats more quickly (FBI 2021; CISA 2025). In counterintelligence, AI can flag unusual access patterns suggestive of AI-driven intrusions and detect insider-threat indicators earlier by correlating technical, financial, and behavioral data. It can also assist analysts in mapping adversary infrastructure and correlating tactics, techniques, and procedures across campaigns, and support automated red-teaming of in-house models to identify vulnerabilities before adversaries do (Carter et al. 2025; Microsoft 2025). To get there, CI practitioners must become AI-literate operators. That means recruiting and training officers who understand model architectures, jailbreak techniques, and prompt-injection attacks as well as they understand traditional HUMINT tradecraft. It also means integrating data scientists and AI engineers into counterintelligence units, ensuring that insights about model misuse flow directly into counterespionage planning and operational security.

Counterespionage in the Age of Autonomous Offense

GTG-1002 is to AI what the first internet worm or the earliest ransomware campaigns were to traditional cybersecurity, albeit a bit more serious. AI-conducted activity by an adversary FIS is a warning shot that the paradigm has shifted. A Chinese state-linked actor leveraged a Western frontier model to execute the majority of an espionage operation autonomously, at scale, using mostly open-source tools (Anthropic 2025a; Turkel 2025). Just ponder that for a moment. The counterintelligence ramifications are frightening. The intelligence cycle is compressed. The defender’s window for detection and countermeasures is shrinking. AI systems are simultaneously espionage platforms and priority intelligence targets, demanding full CI coverage. Hallucination and automation create new opportunities for both adversary deception and defender counter-deception. Cyber intrusions, influence operations, and human targeting are converging in an AI-enabled world of lightning-fast channels. Existing CI doctrines, authorities, and information-sharing practices are too slow and too fragmented for machine-speed conflict.

If democratic states treat AI misuse as a niche cyber issue, we are ceding the initiative to adversaries who understand AI as an intelligence and counterintelligence weapon system. The appropriate response is immediate professionalization, building an AI-literate counterintelligence enterprise, imposing secure-by-design obligations on AI providers, and creating real-time, automated mechanisms to de-silo and distribute threat intelligence across government and critical industries. GTG-1002 clearly demonstrates that hostile FISs are already leveraging an AI offensive capability. Counterintelligence must not be left behind. I am not suggesting that we mirror the PRC’s behavior, but rather that pertinent Intelligence Community, national security and industry partners integrate AI into a rules-bound, rights-respecting CI framework capable of defending our open societies against autonomous offensive operations.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

  • Anthropic. 2025a. Disrupting the First Reported AI-Orchestrated Cyber-Espionage Campaign. San Francisco: Anthropic.
  • Anthropic. 2025b. “Claude Transparency and Safety: Model System Card.” San Francisco: Anthropic.
  • Armis. 2025. China’s AI Surge: A New Front in Cyber Warfare. Armis Threat Research Report.
  • Carter, William, et al. 2025. “Integrating Artificial Intelligence into Counterintelligence Practice.” Arlington, VA: Center for Development of Security Excellence.
  • CISA (Cybersecurity and Infrastructure Security Agency). 2025. “Countering Chinese State-Sponsored Actors Compromising Global Networks.” Cybersecurity Advisory AA25-239A. Washington, DC: U.S. Department of Homeland Security.
  • FBI (Federal Bureau of Investigation). 2021. “Artificial Intelligence – Emerging and Advanced Technology: AI.” Washington, DC: U.S. Department of Justice.
  • Google Threat Intelligence. 2025. “Adversarial Misuse of Generative AI: Threats and Mitigations.” Mountain View, CA: Google.
  • Lawfare Institute. 2018. “Artificial Intelligence—A Counterintelligence Perspective.” Lawfare (blog), November 2018.
  • Louise, Laura. 2025. “Artificial Intelligence and State-Sponsored Cyber Espionage: The Growing Threat of AI-Enhanced Hacking and Global Security Implications.” NYU Journal of Intellectual Property and Entertainment Law 14 (2).
  • Microsoft. 2025. Digital Threats Report 2025. Redmond, WA: Microsoft.
  • Rosli, Wan Rohani Wan. 2025. “The Deployment of Artificial Intelligence in Cyber Espionage.” AI and Ethics 5 (1): 1–18.
  • Turkel, Nury. 2025. “The First Large-Scale Cyberattack by AI.” Wall Street Journal, November 23, 2025.
  • USCC (U.S.–China Economic and Security Review Commission). 2022. “China’s Cyber Capabilities: Warfare, Espionage, and Implications for the United States.” Washington, DC: USCC.

The Dominican Republic’s Complicity in Crimes on the High Seas

human rights, maritime killings, armed forces, United States, Dominican Republic, C. Constantin Poindexter

Lethal strikes carried out by armed forces outside a clearly delimited armed conflict, directed against the crews of vessels not taking a direct part in hostilities and conducted without any prior process of identification, threat assessment, and accountability, constitute extrajudicial killings under international law. Such operations violate the prohibition on the use of force contained in the Charter of the United Nations, the guarantees of the right to life and due process recognized by international human rights law and, where an armed conflict exists, the rules of international humanitarian law protecting persons who are not legitimate military targets (Naciones Unidas 1945). Absent strictly necessary and proportionate self-defense, or a specific authorization from the Security Council, these killings have no legal justification and amount to internationally wrongful acts and, depending on their context and scale, even to grave international crimes.

On that normative basis, my central question here is whether the Dominican Republic, by authorizing the use of civilian and military facilities by U.S. troops and assets (“Acuerdo entre República Dominicana y Estados Unidos será hasta abril, según Abinader,” Listín, 01 Dec 2025) for operations that result in such extrajudicial killings in the Caribbean, can be considered complicit both morally and legally. The answer requires working through the law of international state responsibility, the human rights protection regimes, and the system of international criminal justice. I refer in particular to the obligations set out in the Rome Statute of the International Criminal Court, to which the Dominican Republic is bound as a State Party (Corte Penal Internacional 1998).

General international law expressly recognizes that a State may incur responsibility not only for its own direct acts, but also for aid or assistance provided to another State in the commission of an internationally wrongful act. The Articles on Responsibility of States for Internationally Wrongful Acts, adopted by the International Law Commission, provide in Article 16 that a State incurs responsibility when it aids or assists another State in the commission of an internationally wrongful act, provided it acts with knowledge of the circumstances of the act and the act would also be wrongful if committed by the assisting State itself (Comisión de Derecho Internacional 2001). This formulation codifies a customary rule and reflects the basic moral intuition that whoever knowingly contributes to an unlawful harm shares responsibility for that harm.

The official commentaries to that article specify that the aid or assistance must be sufficiently significant, not a trivial or marginal contribution, and that the state organ providing it must act with knowledge of the wrongfulness of the principal conduct, though not necessarily with the same specific intent as the direct perpetrator (Comisión de Derecho Internacional 2001). On the approach adopted here, supplying air or naval bases, command-and-control infrastructure, resupply capabilities, and communications facilities from Dominican territory for operations that systematically result in extrajudicial killings of crews in the Caribbean clearly satisfies the requirement of a significant contribution. If the Dominican authorities have reasonable knowledge that those facilities are being used for unlawful operations, the knowledge threshold required by Article 16 is met.

At the level of the ius ad bellum, the jurisprudence of the International Court of Justice reinforces the point. In the case concerning military and paramilitary activities in and against Nicaragua (Nicaragua v. United States of America, Judgment of 27 June 1986, ICJ Reports 1986, p. 14), the Court held that U.S. military and logistical support for armed groups operating against the Nicaraguan government violated the prohibition on the use of force and the principle of non-intervention (Corte Internacional de Justicia 1986). Although the factual setting differs, the legal logic underlying that judgment remains relevant: a State cannot hide behind the intermediation of third parties to evade responsibility for operations involving grave violations of international law. By analogy, a Dominican Republic that lends its territory and infrastructure to a military power to carry out unlawful lethal strikes participates in a scheme of aid and assistance that engages its legal responsibility.

Dominican responsibility does not end at the level of inter-state responsibility. The country’s status as a State Party to the Rome Statute adds a further dimension to the problem. The Statute governs individual criminal responsibility and establishes that the International Criminal Court has jurisdiction over persons who commit, order, facilitate, or provide aid and assistance for the commission of war crimes, crimes against humanity, and genocide (Corte Penal Internacional 1998). Article 25 of the Statute defines various modes of participation in a crime, including contributing for the purpose of facilitating its commission or with knowledge that the contribution will be made in the course of criminal activity or of the commission of one or more crimes (Corte Penal Internacional 1998).

Consequently, senior Dominican civilian or military officials who knowingly authorize the use of bases and infrastructure for extrajudicial killing operations could, in principle, be exposed to international criminal responsibility for complicity or aiding and abetting war crimes or crimes against humanity, depending on the legal characterization the facts warrant. The doctrine on accessory liability in the International Criminal Court system accepts that logistical, intelligence, or infrastructure contributions can constitute a relevant mode of responsibility if they contribute substantially to the unlawful result and the participant knows the criminal nature of the principal conduct (Oficina del Fiscal de la Corte Penal Internacional 2013). By accepting the Rome Statute, the Dominican Republic is obliged not only to refrain from committing those crimes, but also not to facilitate their commission by foreign actors and to adjust its domestic practice to prevent such participation.

The Dominican commitment to international human rights regimes, in particular the inter-American system, reinforces the finding of complicity. In the landmark case Velásquez Rodríguez v. Honduras, the Inter-American Court of Human Rights held that a State’s responsibility for enforced disappearances is not limited to the material acts of state agents, but also includes the systematic tolerance of violative practices, the failure to take reasonable preventive measures, and the absence of diligent investigation and effective punishment (Corte Interamericana de Derechos Humanos 1988). The Court developed the idea that a State violates its treaty obligations when it creates a situation of impunity that enables the repetition of grave violations of the rights to life and personal integrity.

Transposed to the case at hand, the Dominican Republic could incur international responsibility for knowingly allowing its territory and infrastructure to be used as a platform for extrajudicial lethal operations without establishing controls, safeguards, or effective mechanisms of oversight and accountability. The duty to guarantee the right to life under the American Convention on Human Rights imposes on States a negative obligation not to deprive anyone of life arbitrarily and also a positive obligation to prevent, to the extent reasonable, third parties from committing grave violations using means or spaces placed at their disposal by the State itself (Corte Interamericana de Derechos Humanos 1988). The unconditional grant of bases and infrastructure to a foreign power for unlawful operations is irreconcilable with that duty of guarantee.

The possible legal consequences of this situation are manifold. At the inter-state level, a State affected by the strikes, for example the flag State of the vessels or the State of nationality of the victims, could bring a claim against the Dominican Republic before the International Court of Justice, alleging responsibility for aid or assistance in the violation of the prohibition on the use of force, the right to life, and other norms of ius cogens. The Court could declare the existence of an internationally wrongful act and order remedies including cessation of the conduct, guarantees of non-repetition and, eventually, compensation to the victims or their States of origin (Corte Internacional de Justicia 1986; Comisión de Derecho Internacional 2001). Although the political viability of such litigation is always uncertain, its legal possibility underscores the degree of the Dominican Republic’s international exposure.

Within the inter-American system, victims or their families could file petitions before the Inter-American Commission on Human Rights, alleging that the Dominican Republic violated the rights to life, to judicial guarantees, and to judicial protection by allowing its territory to serve as a base for killing operations without investigating or punishing those responsible. If the Commission referred the case to the Inter-American Court, the Court could find Dominican responsibility by act and by omission, order comprehensive reparations, and set specific standards on States’ obligation not to facilitate grave violations through military or security cooperation agreements that lack effective safeguards (Corte Interamericana de Derechos Humanos 1988).

The Rome Statute itself, together with the practice of the Office of the Prosecutor, also introduces additional reputational and legal risk. If there were reasonable indications that Dominican officials had knowingly participated in a scheme of unlawful lethal operations by providing bases and infrastructure, and if national authorities failed to undertake genuine and credible investigations, the Prosecutor could consider opening a preliminary examination of the Dominican situation. Although the International Criminal Court applies selectivity criteria and prioritizes situations of mass violence, the possibility of international scrutiny underscores the State’s duty to avoid any form of collaboration that could be interpreted as participation in international crimes (Oficina del Fiscal de la Corte Penal Internacional 2013).

From a normative and ethical perspective, the proposition that the Dominican Republic becomes a moral and legal accomplice to extrajudicial killings when it provides bases and civilian infrastructure to the United States aligns with contemporary trends in international law. The law of state responsibility, the international criminal justice system, and the human rights regimes converge on a single idea. States may not be neutral facilitators of prohibited violence. Whoever knowingly provides essential means for the commission of unlawful acts assumes a share of responsibility for those acts. The Dominican Republic, which has ratified the Charter of the United Nations, the American Convention on Human Rights, and the Rome Statute, cannot legitimately maintain that military cooperation authorizes it to sacrifice those foundational commitments. If it turns its territory and its civilian airports and seaports into vectors for extrajudicial killing operations, it abandons the position of a mere political ally and assumes the position of a participant in crimes that the international order itself has declared intolerable.

In sum, the legal reasoning that follows from the applicable normative framework confirms the critical thrust of my essay. If the Dominican Republic provides bases or civilian and military infrastructure to the United States for killing operations, the Dominican Republic is morally complicit. It renounces the protection of life and due process it formally proclaims, and it is legally complicit because it violates the rules on responsibility for aid or assistance, breaches its duty to guarantee human rights, and exposes its officials to the prospect of international criminal liability. In an international order that seeks to limit state violence, the Dominican State’s duty is clear. It must refrain from placing its territory at the service of killing operations and, if it has already done so, it must cease that conduct, investigate its implications, and accept the consequences international law imposes. Beyond the inherent risks, this complicity also puts our Republic in Maduro’s crosshairs as a “wartime enemy.” Given our long relationship with Venezuela, that subject is better left for a future blog post.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Comisión de Derecho Internacional. 2001. Proyecto de artículos sobre la responsabilidad del Estado por hechos internacionalmente ilícitos, con comentarios. Naciones Unidas.
  • Corte Interamericana de Derechos Humanos. 1988. Caso Velásquez Rodríguez vs. Honduras. Sentencia de 29 de julio de 1988. Serie C, núm. 4.
  • Corte Internacional de Justicia. 1986. Caso relativo a las actividades militares y paramilitares en y contra Nicaragua (Nicaragua c. Estados Unidos de América). Sentencia de 27 de junio de 1986.
  • Corte Penal Internacional. 1998. Estatuto de Roma de la Corte Penal Internacional. Documento A/CONF.183/9.
  • Naciones Unidas. 1945. Carta de las Naciones Unidas y Estatuto de la Corte Internacional de Justicia. San Francisco.
  • Oficina del Fiscal de la Corte Penal Internacional. 2013. Policy Paper on Preliminary Examinations. La Haya, Corte Penal Internacional.

A New Frontier for Human Intelligence in the Age of A.I.

HUMINT, intelligence, counterintelligence, espionage, counterespionage, spy, C. Constantin Poindexter, Dominican Republic, Spain, DNI, NSA, CIA

The report The Digital Case Officer: Reimagining Espionage with Artificial Intelligence is one of the most ambitious contemporary reflections on the convergence of human intelligence (HUMINT) and artificial intelligence (AI). Published by the Special Competitive Studies Project in 2025, the document argues that the U.S. intelligence community faces a paradigm shift comparable to the arrival of the Internet in the 1990s. Its central thesis is that AI, particularly generative, multimodal, and agentic models, can revolutionize the cycle of recruiting, developing, and handling human sources, inaugurating a kind of “fourth generation of espionage” in which humans and machines operate as a single operational team (Special Competitive Studies Project 2025, 4–6).

Reading the report reveals a deep understanding of the challenges the digital environment imposes on the practice of intelligence. The text is right to recognize that the essential value of HUMINT does not lie in collecting observable data, a task at which technical systems already outperform humans, but in obtaining actors’ intent, that is, the motivations, perceptions, and decisions that only a human source can reveal (Special Competitive Studies Project 2025, 13–14). This ontological distinction between action and intention preserves the relevance of the human agent in the algorithmic age. The report also accurately identifies the phenomenon of ubiquitous technical surveillance, a reality that threatens to erase the anonymity on which traditional espionage was built. In doing so, the authors frame the urgency of adapting the profession to an environment in which every digital trace can betray an intelligence officer’s identity.

Conceptual Strengths: The Strategic Integration of AI and HUMINT

One of the document’s greatest merits lies in its ability to imagine concrete use cases for AI in HUMINT operations. Through the narrative of the fictional system MARA, the report illustrates how a digital agent could analyze large volumes of open-source and classified data to identify recruitment candidates, initiate contact through synthetic personas, and sustain persuasive dialogues with hundreds of potential sources in parallel (Special Competitive Studies Project 2025, 8–9). This exercise in technological foresight serves a dual purpose: it conveys the magnitude of the revolution that generative AI will bring, and it gives institutional planners a pragmatic guide to the operational capabilities and risks they must anticipate.

The text is also right to emphasize the principle of Meaningful Human Control (MHC), borrowed from the ethical debates over autonomous weapons, as the normative foundation for the responsible use of AI in intelligence (Special Competitive Studies Project 2025, 24–25). Under this principle, every decision that entails human risk, such as recruitment, operational tasking, or the handling of a source, must remain subject to an officer’s supervision and accountability. In this way, the report balances technological enthusiasm with a clear defense of human moral agency.

The work also excels in its analysis of the international competitive landscape. In Appendix A, the SCSP details how powers such as China and Russia are already experimenting with generative AI to optimize their influence, recruitment, and counterintelligence operations (Special Competitive Studies Project 2025, 34–35). The geostrategic diagnosis is convincing: America’s adversaries have understood that AI not only expands surveillance capacity but redefines the very structure of competition among intelligence services. Technological passivity would therefore amount to obsolescence.

Finally, the report is right to consider the psychological dimension of digital espionage. It recognizes that, despite the power of automation, trust, empathy, and emotional management remain exclusively human attributes. The case of the asset who needs a personal relationship with a case officer to sustain commitment to a dangerous mission, and who could feel betrayed upon discovering that he had been interacting with a machine, shows an ethical sensitivity rarely found in technical intelligence documents (Special Competitive Studies Project 2025, 17–18).

Conceptual and Methodological Weaknesses

Despite its analytical sophistication, the report has several limitations that must be noted with academic rigor. Four are especially relevant: one concerning the ontological scope of agentic AI, another the instrumental ethics of emotional manipulation, a third the epistemological reliability of AI as an operational agent, and a fourth the absence of political analysis of interagency governance.

Conceptual Ambiguity of the Digital Case Officer

The document defines the Digital Case Officer as an agentic system capable of planning and executing recruitment tasks with minimal human intervention. It does not, however, offer a precise operational definition of agency in the intelligence context. The notion of autonomy is conflated with that of advanced automation: an algorithm that runs dialogue sequences or identifies vulnerability patterns is not, in the philosophical sense, a moral agent or an autonomous decision-maker. Authors such as Floridi (2023) and Gunkel (2024) warn that attributing agency to algorithmic systems can create illusions of displaced responsibility, in which technical errors are interpreted as the decisions of a nonexistent entity. The report partly falls into this technological anthropomorphism, which weakens its theoretical grounding for human control and ethical responsibility. A reformulation should distinguish between functional autonomy, understood as the capacity to operate without immediate supervision, and decisional autonomy, reserving the latter exclusively for human beings.

The Ethics of Emotional Manipulation

The report justifies the use of affective computing and conversational models able to detect and respond to human emotions in order to strengthen the digital officer’s simulated empathy (Special Competitive Studies Project 2025, 15–16). While it acknowledges the risks of manipulation, it suggests the problem can be mitigated through ethical red lines and appropriate oversight. That solution is insufficient. Moral psychology and the ethics of persuasion, from Kant to Habermas, hold that simulating affection for instrumental ends is a form of deception that instrumentalizes human dignity. Even if the MHC principle were respected, creating false emotional bonds through algorithms erodes trust, the very foundation of the relationship between officer and source. An ethics of digital espionage should explicitly incorporate deontological limits that prohibit affective simulation for coercive purposes or deep psychological manipulation.

Epistemological Reliability and AI Bias

Another underestimated problem is the epistemological reliability of generative models as recruitment tools. The report acknowledges the existence of algorithmic black boxes that make it difficult to explain why the AI selects a given target (Special Competitive Studies Project 2025, 23–24), but it does not develop the operational implications of that opacity. In intelligence, traceability and source validation are pillars of the analytic process. If the system cannot justify why it deems an individual recruitable, for example if it misreads an ironic social-media remark as dissidence, the risk of false positives is immense. Moreover, language models are trained on data that reflect cultural, racial, or ideological biases. In a HUMINT context, such biases could lead to the selective targeting of innocent groups or individuals. The report should have gone deeper into the algorithmic auditing and bias-control mechanisms that would guarantee a verifiable epistemology for operational AI.
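
A minimal illustration of the kind of audit the report leaves undeveloped is sketched below, using invented data and a hypothetical “recruitable” classifier output: it simply compares selection rates across groups against a four-fifths rule of thumb. A real audit regime would be far more sophisticated, but even this level of instrumentation forces the opacity problem into view.

```python
# Toy selection-rate audit (illustrative, invented data): compares how often
# a hypothetical "recruitable" classifier flags candidates across groups,
# the simplest form of the algorithmic auditing discussed above.
flags = {
    # group label -> list of model outputs (1 = flagged as recruitable)
    "group_a": [1, 0, 1, 1, 0, 1, 0, 1],
    "group_b": [0, 0, 1, 0, 0, 0, 0, 1],
}

rates = {g: sum(v) / len(v) for g, v in flags.items()}
baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    status = "OK" if ratio >= 0.8 else "REVIEW"   # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f} (ratio {ratio:.2f}) -> {status}")
```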

Gaps in Interagency Governance

The fourth weakness is the insufficient political treatment of the governance framework. Although the report proposes oversight measures, audits, designated human points of accountability, and reporting to Congress, it does not examine the bureaucratic and jurisdictional tensions that have historically hindered cooperation among agencies such as the CIA, the FBI, and the NSA. The suggestion of offering “HUMINT as a Service” to other agencies is innovative, but there is no analysis of how conflicts over authority, data control, or legal liability for operational errors would be resolved (Special Competitive Studies Project 2025, 29–30). Nor does the report consider the role of foreign allies in the sharing of sensitive technologies. In a context of growing transatlantic distrust and cyber surveillance, these omissions are significant. Any artificial-intelligence framework applied to HUMINT must incorporate a robust institutional analysis of how to preserve accountability within a community defined by compartmentation and secrecy.

The Psychological Impact on Human Officers

A further weakness, only hinted at, is the lack of attention to the psychological impact of human-machine hybridization on case officers themselves. The report briefly mentions the psychological weight of espionage in an environment of total transparency (Special Competitive Studies Project 2025, 17–18), but it does not analyze how operational dependence on algorithms can affect an officer’s professional identity, morale, or ethical judgment. Recent studies in neuroergonomics and occupational psychology show that over-automation reduces confidence in one’s own judgment and encourages passive delegation of moral responsibility (Cummings 2024; Krupnikov 2025). In a profession where moral discernment and interpersonal intuition are essential, such cognitive degradation would have serious consequences. The governance of AI in intelligence should therefore include psychological resilience programs and ethics training designed to preserve officers’ moral autonomy.

Strategic and Ethical Implications

Beyond its weaknesses, the report raises fundamental questions about the ontology of espionage in the twenty-first century. If AI can simulate empathy, manage virtual identities, and execute persuasion tasks, is HUMINT still a human relationship? The document answers yes, defending the notion of the human-machine team. The risk of dehumanization is nevertheless real: the more effectively AI emulates trust, the easier it becomes to replace the human in the initial stages of contact. This ethical dilemma recalls the warnings of Shulman (2023), who argues that automating moral interaction can produce operational alienation, a state in which agents no longer perceive the human consequences of their actions.

From a strategic perspective, the model proposed by the SCSP redefines the scale and tempo of HUMINT operations. A single officer, assisted by a network of AIs, could interact with hundreds of targets in parallel, exponentially multiplying the reach of espionage. Yet that same scalability erodes traditional controls based on direct supervision. In an environment where the speed of interaction exceeds the human capacity for review, the risk of abuse or systemic error grows. The history of intelligence shows that failures arise not only from bad intentions but from the combination of excessive technological confidence and a deficit of moral deliberation.

Toward a Prudent Epistemology of Artificial Intelligence

Integrating AI into the practice of espionage demands a new, prudent epistemology built on three guiding principles: algorithmic transparency, human responsibility, and moral proportionality.

First, transparency means developing explainable systems whose decision logic can be audited in real time. Without explainability, institutional trust becomes blind faith. Second, human responsibility must be indivisible. The MHC principle should not be reduced to an approval formality; it should be conceived as a form of moral co-authorship between human and machine, in which the human retains mastery over the purpose and meaning of the action. Third, proportionality requires weighing the moral cost of each innovation: the ability to do more does not automatically justify doing everything.

Adopting these principles will require legal and cultural reforms. At the normative level, Congress and the Executive Branch should update Executive Order 12333 to define explicitly the legal nature of autonomous intelligence systems and their relationship to the civil rights of U.S. citizens. At the institutional level, intelligence academies should incorporate training in AI ethics and the philosophy of technology, equipping future officers with the critical tools to resist the uncritical automation of moral judgment.

Finally, the debate over the Digital Case Officer invites us to reconsider the very essence of espionage. If the future of intelligence is hybrid, its success will depend not only on computational power but on the ability to preserve the humanist core of the craft. As Richard Moore, the Chief of MI6, has observed, the relationship that allows one person to genuinely trust another remains stubbornly human (Moore 2023). That observation captures the paradox the SCSP report poses without fully resolving: technology can extend intelligence, but only the human being can give it moral purpose.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

Cummings, Mary L. 2024. “Automation and the Erosion of Human Judgment in Defense Systems.” Journal of Military Ethics 23 (2): 101–120.

Floridi, Luciano. 2023. The Ethics of Artificial Agents. Oxford: Oxford University Press.

Gunkel, David. 2024. The Machine Question Revisited: AI and Moral Agency. Cambridge, MA: MIT Press.

Krupnikov, Andrei. 2025. “Psychological Implications of Human Machine Teaming in Intelligence Work.” Intelligence and National Security 40 (3): 215–233.

Moore, Richard. 2023. “Speech by Sir Richard Moore, Head of SIS.” London: UK Government.

Shulman, Peter. 2023. “Operational Alienation in Autonomous Warfare.” Ethics & International Affairs 37 (4): 442–460.

Special Competitive Studies Project. 2025. The Digital Case Officer: Reimagining Espionage with Artificial Intelligence. Washington, D.C.: SCSP Press.

Strengthening Counterintelligence Training for Diplomats

Strengthening Counterintelligence Training for Diplomats, diplomacy, intelligence, counterintelligence, espionage, counterespionage, national security, C. Constantin Poindexter

The exposure of U.S. diplomats, both stateside and abroad, to recruitment, SIGINT/COMINT targeting, and the loss or compromise of portable computing devices (PCDs) is not accidental. It is a cumulative effect of structural neglect, cultural underinvestment, and the evolving threat environment. Three converging dynamics have produced this vulnerability: institutional bifurcation between diplomatic and intelligence missions; budgetary and educational neglect of counterintelligence (CI) training for non-intelligence personnel; and the rapid digital transformation of diplomatic operations without commensurate adaptation of tradecraft.

Institutional bifurcation is the result of the long-standing separation between the U.S. Foreign Service and the intelligence and security community. Diplomatic officers have historically focused on political, economic, consular, and public diplomacy missions, while security concerns were delegated to the Diplomatic Security Service (DSS) or local host-nation security services. Counterintelligence responsibilities were largely retained within the FBI, CIA, and military intelligence organizations, creating operational silos. This division left diplomats outside the formal CI ecosystem, meaning they rarely received advanced training or actionable threat intelligence. As a result, many Foreign Service Officers (FSOs) still approach their duties as political envoys rather than as personnel operating within an adversarial intelligence battlespace.

Budgetary and educational neglect compound this problem. For decades, the Department of State has allocated limited funding for counterintelligence instruction. Beyond basic “insider threat” briefings or annual cybersecurity refreshers, diplomats often receive little exposure to advanced CI concepts or adversary recruitment methodologies. As reported by ClearanceJobs (McNeil, 2025), many diplomatic personnel deploy to high-threat assignments with minimal training in recognizing or resisting foreign intelligence approaches. The lack of sustained CI education and awareness initiatives at the Foreign Service Institute (FSI) has produced an environment where diplomats are ill-equipped to recognize subtle recruitment tactics or electronic targeting.

The digitalization of diplomacy has created a serious vulnerability. Over the past two decades, U.S. embassies and consulates have become highly dependent on portable computing, mobile devices, remote communications, and cloud-based data exchange. While these tools increase efficiency, they have also expanded the attack surface for adversaries. Foreign intelligence services (FIS) now target diplomats as entry points into the U.S. government’s global communications infrastructure. These adversaries exploit unsecured networks, intercept wireless signals, implant malware on devices, and even steal laptops and external drives outright. As technology has evolved, diplomatic tradecraft has failed to keep pace. The convenience of connectivity has outstripped the discipline of security.

This weakness is illustrated by several notable cases of espionage and digital compromise involving U.S. diplomatic personnel. The case of Steven John Lalas, a U.S. State Department communications officer stationed in Athens during the early 1990s, is instructive. Lalas provided classified diplomatic and military documents to Greek intelligence over several years before being caught and sentenced to 14 years in prison (Wikipedia, n.d.). He exploited his communications role to access classified cables and Defense Department assessments, which he illicitly removed and passed to a foreign government. Lalas’s case demonstrates that diplomats and communications officers, though not traditional intelligence operators, are prime recruitment targets because of their privileged access to sensitive material. His actions exposed structural vulnerabilities in both vetting and insider threat detection within the State Department’s overseas missions.

The Walter Kendall Myers betrayal is another: Myers spied for Cuba over nearly three decades. A senior State Department official and FSI instructor, Myers used his position to obtain and share classified information with the Cuban Intelligence Directorate (Wikipedia, n.d.). The Myers case was not about hacking or physical theft but about ideological recruitment and sustained insider espionage. Myers was approached gradually, courted ideologically, and ultimately compromised. This illustrates that diplomats, whose careers often involve long foreign postings, personal networks abroad, and cultural immersion, are highly susceptible to long-term cultivation by FIS recruiters. The absence of continuous CI vetting or behavioral monitoring allowed this penetration to persist for decades.

A third category of cases involves the theft and exploitation of portable computing devices. The FBI’s “Operation Ghost Stories,” which dismantled a Russian “illegals” network in 2010, revealed how laptops and wireless devices were central to espionage operations (FBI, n.d.). One seized laptop was used to establish covert wireless communications between Russian agents and their handlers. Similarly, numerous reported attempts have been made by foreign actors to steal or implant malware on the personal computers of Western diplomats. These incidents highlight that PCDs are not simply administrative tools but intelligence assets. When lost, stolen, or compromised, they can reveal network structures, contacts, and classified reporting, making them a modern equivalent of the “diplomatic pouch.” A War on the Rocks (2025) analysis of Russian espionage tactics confirms that FIS now combine human recruitment, cyber intrusion, and physical theft in hybrid collection campaigns against Western diplomatic targets.

The convergence of these human and technical vulnerabilities demands a fundamental modernization of CI training for diplomats. First and foremost, diplomats MUST be required to receive foundational counterintelligence education. This training should move beyond theoretical awareness and immerse personnel in adversary recruitment tradecraft, SIGINT and COMINT methodologies, and recent case studies. Red-team simulations should require participants to role-play both target and recruiter to internalize how adversaries identify, approach, and manipulate their victims. A diplomat who can think like an adversary is far more likely to resist one.

Equally important, counter-recruitment instruction should emphasize behavioral recognition. Diplomats must learn to identify “soft pitch” recruitment methods, e.g., academic or journalistic overtures, social invitations, social media engagement, or mutual professional interests that can evolve into intelligence targeting. They must also be taught how to perceive, disengage (politely, to preserve the possibility of a double-agent operation), document, and report these encounters through secure channels without fear of reprisal. Continuous CI liaison support at missions abroad would reinforce these practices and ensure rapid response when suspicious approaches occur.

The secure digital and communications hygiene curriculum must be significantly expanded. Every diplomat should be trained in hardware hardening (full-disk encryption, TPM binding, BIOS/UEFI passwords), media control (banning unvetted USB devices), secure networking (VPNs with endpoint authentication, regular rekeying), and immediate reporting of anomalies (device overheating, unauthorized processes, or loss). Training should include hands-on exercises in which diplomats detect and mitigate simulated phishing or device-compromise attempts. Embassies should maintain secure drop boxes and Faraday enclosures for potentially compromised devices until they can be forensically examined.
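
To make that curriculum concrete, the minimal sketch below (Python, Linux-oriented) shows the kind of automated spot-check a security officer might run during a routine device inspection. The USB allowlist is an invented placeholder, and the reliance on lsblk and lsusb assumes a LUKS-encrypted Linux endpoint; nothing here reflects an actual State Department or DSS tool.

```python
#!/usr/bin/env python3
"""Illustrative endpoint hygiene spot-check (Linux). Hypothetical sketch only."""
import re
import subprocess

# Hypothetical allowlist of approved USB vendor:product IDs for this mission.
APPROVED_USB = {"0951:1666", "046d:c52b"}

def disk_encryption_present() -> bool:
    """Return True if at least one dm-crypt/LUKS mapping is active."""
    out = subprocess.run(["lsblk", "-o", "NAME,TYPE"],
                         capture_output=True, text=True).stdout
    return any(line.split()[-1] == "crypt"
               for line in out.splitlines()[1:] if line.split())

def unapproved_usb_devices() -> list[str]:
    """List attached USB devices whose vendor:product ID is not on the allowlist."""
    out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
    ids = re.findall(r"ID ([0-9a-f]{4}:[0-9a-f]{4})", out)
    return [dev for dev in ids if dev not in APPROVED_USB]

if __name__ == "__main__":
    if not disk_encryption_present():
        print("ALERT: no active full-disk encryption mapping detected")
    rogue = unapproved_usb_devices()
    if rogue:
        print(f"ALERT: unapproved USB devices attached: {rogue}")
```

Even a check this simple, run at device turnover and before travel, turns abstract hygiene guidance into a repeatable inspection step.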

Diplomats must be educated in SIGINT and COMINT awareness. This includes understanding how their electronic emissions can betray movements or discussions, recognizing signs of interception, and maintaining operational discipline in communications. Routine practices such as using shielded rooms for sensitive discussions, approved VPN use, disabling wireless and Bluetooth in secure areas, and maintaining strict clean-desk policies must become ingrained habits. Discipline transforms CI awareness from abstract instruction into practical daily behavior!

Counterintelligence training should incorporate recurring red-team exercises and after-action debriefs. Annual or semi-annual drills simulating recruitment, device loss, or cyber intrusion should be mandatory for all missions. These exercises not only test individual readiness but reveal systemic vulnerabilities such as inconsistent incident reporting or inadequate technical countermeasures. Lessons learned should feed back into State Department CI doctrine.

Structural and organizational reforms are equally important. The Department of State should embed a permanent counterintelligence officer or liaison from the FBI or CIA within every high-risk embassy. This officer would coordinate with the Regional Security Officer (RSO) and oversee local threat assessments, device inspections, and behavioral analysis. Additionally, all diplomats deploying to critical posts should achieve baseline CI certification, validated by written and practical exams similar to those required for intelligence personnel. This “best practices” certification should be renewed periodically and linked to promotion eligibility, reinforcing accountability.

Embassies should also implement periodic red-team audits, with technical and human testing designed to measure CI compliance and readiness. Device procurement and turnover policies must ensure secure supply chains, with forensic validation of new equipment and timely retirement of old hardware. The integration of artificial intelligence-based monitoring could further assist in detecting anomalies or exfiltration attempts across the diplomatic network.
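
As a purely illustrative example of what such artificial intelligence-based monitoring could look like at its simplest, the sketch below trains an isolation forest on synthetic per-device telemetry and flags outliers for review. The feature set (daily upload volume, after-hours logins, USB mount events) and the synthetic numbers are assumptions for illustration, not a description of any deployed capability.

```python
"""Minimal sketch of AI-assisted anomaly flagging over hypothetical device telemetry."""
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Synthetic telemetry rows: [MB uploaded per day, after-hours logins, USB mount events]
baseline = rng.normal(loc=[40, 1, 0.2], scale=[10, 0.5, 0.3], size=(500, 3))
suspect = np.array([[900, 6, 4]])          # a device suddenly moving far more data
telemetry = np.vstack([baseline, suspect])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
flags = model.predict(telemetry)           # -1 = anomalous, 1 = normal

for row, flag in zip(telemetry, flags):
    if flag == -1:
        print(f"Review device with telemetry {np.round(row, 1)}")
```

The value of such a model is triage, not adjudication: flagged devices still require human CI review before any conclusion is drawn.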

The culture of self-reporting must be reformed. Diplomats often hesitate to report suspicious incidents for fear of professional repercussions. A no-fault reporting model paired with protective anonymity and positive reinforcement will encourage early detection of targeting attempts. CI professionals know that “near-miss” reporting is a critical tool. Diplomats and their staff members must internalize the same principle.

The exposure of U.S. diplomats to recruitment, signals interception, and device compromise is thus not merely a technical vulnerability. It is a clear cultural and institutional weakness. The cases of Lalas and Myers show that ideological or opportunistic recruitment remains a persistent threat, while modern espionage operations like those exposed in Operation Ghost Stories demonstrate that digital compromise is now equally dangerous. A robust counterintelligence program for diplomats must cultivate a mindset of constant adversarial awareness, blending human and technical security disciplines into the fabric of diplomacy itself. By embedding CI at every level of diplomatic training and operations, the United States can begin to close one of its most consequential vulnerabilities in the global intelligence contest AND contribute in a meaningful way to both defensive and offensive counterintelligence operations.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

FBI. (n.d.). Laptop from Operation Ghost Stories. Retrieved from https://www.fbi.gov/history/artifacts/laptop-from-operation-ghost-stories

McNeil, S. (2025, October 9). Modernizing CI training for diplomats: New legislation aims to sharpen the shield abroad. ClearanceJobs. Retrieved from https://news.clearancejobs.com/2025/10/09/modernizing-ci-training-for-diplomats-new-legislation-aims-to-sharpen-the-shield-abroad-2/

War on the Rocks. (2025, April 8). Putin’s spies for hire: What the U.K.’s biggest espionage trial revealed about Kremlin tactics in wartime Europe. Retrieved from https://warontherocks.com/2025/04/putins-spies-for-hire-what-the-u-k-s-biggest-espionage-trial-revealed-about-kremlin-tactics-in-wartime-europe/

Wikipedia contributors. (n.d.). Kendall Myers. In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Kendall_Myers

Wikipedia contributors. (n.d.). Steven John Lalas. In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Steven_John_Lalas

The Collapse of CIA Clandestine Communications: The Hidden “X” Factor

COVCOM, espionage, counterespionage, intelligence, counterintelligence, spy, C. Constantin Poindexter, CIA, NSA

For those who haven’t picked up a copy of Tim Weiner’s new book, The Mission (a great read), the author briefly describes an unidentified “X factor” that, together with loose tradecraft and the betrayal of Jerry Chun Shing Lee, explains the breach of an Agency clandestine communications platform (COVCOM) used to receive production from intelligence assets. The X factor is no longer (at least in part) a secret. Between 2010 and 2012, the Central Intelligence Agency (CIA) suffered one of the most devastating counterintelligence failures of the post–Cold War era. Dozens of Agency assets operating in China and elsewhere were rolled up, captured and/or killed, and multiple communication networks were nullified. The official explanations that later emerged pointed to three contributing factors: that the COVCOM platform itself was insufficiently secure; that former officer Jerry Chun Shing Lee betrayed key operational information to Chinese intelligence; and an unknown “X-factor” that the CIA believed must have played a role. Analysts have since argued that this third factor was neither a single human source nor a cryptographic failure, but rather a systemic and architectural vulnerability: the discoverability of CIA communication websites through pattern matching, fingerprinting, and open-source enumeration.

The known facts support this interpretation. Following the collapse, U.S. intelligence undertook a joint CIA-FBI inquiry to determine why an ostensibly hardened system had failed so catastrophically. The COVCOM platform, an encrypted web-based communication system that relied on innocuous-looking websites as cutouts between field assets and handlers, had been in use globally for the better part of a decade. Its purpose was to provide secure asynchronous communication without the need for physical meetings. By 2010, Chinese counterintelligence had begun identifying CIA agents and rolling up networks with alarming precision (U.S. Department of Justice, 2019). Lee’s espionage, which began around this time, appears to have enabled part of this exposure. He was found in possession of notebooks containing detailed operational notes, true names, and meeting locations for agents. His recruitment by the Chinese Ministry of State Security (MSS) represented an enormous breach (Security Boulevard, 2018). Yet Lee’s betrayal alone did not explain the speed, geographic reach, or technical precision of the counterintelligence response. The COVCOM system in China was considered more robust than versions deployed elsewhere, and yet it collapsed far more completely, suggesting that an additional vector was in play (Central Intelligence Agency, 2021).

That missing vector has increasingly come into focus due to subsequent forensic research. In 2022, Citizen Lab at the University of Toronto released a public technical statement analyzing a defunct CIA covert communications network, reconstructing its infrastructure from archival data (Citizen Lab, 2022). The researchers identified at least 885 separate websites that had served as cutouts in the system, many masquerading as ordinary blogs or news portals. These domains were hosted across multiple countries and written in more than twenty-seven languages, demonstrating the global scale of the network (Overt Defense, 2022). Most importantly, the study revealed that the sites shared recurring technical fingerprints: identical JavaScript, Flash, and Common Gateway Interface (CGI) code snippets, sequential IP address allocations, and domain registrations under apparently fictitious U.S. shell companies. These patterns were visible not only to intelligence professionals but to any moderately skilled analyst using open-source tools such as Google search operators or historical DNS datasets.
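
To illustrate how shared code snippets become a correlatable signature, the toy sketch below hashes the embedded script blocks of a few invented archived pages and clusters domains that share a template. The domains and HTML are fabricated stand-ins; the point is the technique, a simplified gloss on the kind of analysis Citizen Lab describes, not the data.

```python
"""Toy illustration of template fingerprinting across archived front sites.
Domains and HTML are invented for illustration only."""
import hashlib
import re
from collections import defaultdict

# Hypothetical archived pages: two share the same embedded widget script.
archived_pages = {
    "sports-news.example":  "<html><script>function poll(){ /* widget v2 */ }</script></html>",
    "travel-blog.example":  "<html><script>function poll(){ /* widget v2 */ }</script></html>",
    "cooking-tips.example": "<html><script>function menu(){ /* unrelated */ }</script></html>",
}

def script_fingerprint(html: str) -> str:
    """Hash the concatenated <script> bodies to produce a template signature."""
    scripts = "".join(re.findall(r"<script>(.*?)</script>", html, flags=re.S))
    return hashlib.sha256(scripts.encode()).hexdigest()[:12]

clusters = defaultdict(list)
for domain, html in archived_pages.items():
    clusters[script_fingerprint(html)].append(domain)

for sig, domains in clusters.items():
    if len(domains) > 1:
        print(f"Shared template {sig}: {domains}")   # correlated candidates
```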

The Citizen Lab researchers demonstrated that once a single website in the network became known, whether through insider compromise or accidental exposure, the rest could be discovered through automated pattern matching. For example, the shared scripts and templates created a unique digital “signature” that could be queried across the web. Similarly, because many sites were hosted within contiguous IP address blocks, an adversary could perform network scans to find adjacent servers. In one striking observation, Citizen Lab noted that a “motivated amateur sleuth” could likely have mapped the entire network from a single known site using only public data sources (Citizen Lab, 2022, p. 3). In other words, once one covert node was compromised, the architecture itself facilitated the discovery of the rest, a catastrophic violation of compartmentation, the cardinal rule of clandestine operations.

This structural discoverability provides a compelling explanation for the “X-factor.” If Chinese or Iranian counterintelligence services were able to recognize one of these front sites, perhaps through Lee’s betrayal or through network monitoring, they could easily expand their search to enumerate the rest. Once identified, those sites could be monitored for traffic patterns, IP logs, or metadata, revealing the physical locations or operational rhythms of field agents. The result would be precisely the kind of rapid and geographically broad collapse observed between 2010 and 2012.
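
The expansion step is equally mechanical. The minimal sketch below, seeded with a fictional documentation-range address, shows how contiguous hosting turns one exposed front site into a list of adjacent addresses worth cross-checking against passive DNS and hosting records. It illustrates the adjacency problem, not any adversary’s actual tooling.

```python
"""Sketch of the adjacency step: from one exposed node, derive candidate neighbors.
The seed address is fictional (RFC 5737 documentation range)."""
import ipaddress

seed_ip = "203.0.113.57"   # stand-in for one known front site's address

# Hosting in contiguous allocations means the surrounding block is worth sweeping.
block = ipaddress.ip_network(f"{seed_ip}/28", strict=False)
candidates = [str(ip) for ip in block.hosts() if str(ip) != seed_ip]

print(f"Seed {seed_ip} -> {len(candidates)} adjacent addresses to cross-check:")
print(", ".join(candidates[:8]), "...")
```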

Several attributes make this explanation plausible to a high-confidence standard. It accounts for the disproportionate collapse relative to the technical strength of the platform. A simple encryption or authentication flaw would have yielded isolated compromises, not systemic exposure. It explains the extraordinary speed of network destruction. Insider betrayal might expose a limited number of assets, but large-scale enumeration allows adversaries to map entire networks in days or weeks. It also aligns with reports that CIA stations were initially unaware of how deeply the system had been penetrated; because the exposure derived from web-level pattern analysis rather than cryptographic decryption, it left few immediate forensic traces (Risen, 2018).

The architecture’s discoverability illustrates a subtle but fundamental shift in the dynamics of the digital era, especially for counterintelligence. During the Cold War, clandestine communications were localized and analog (dead drops, shortwave bursts, one-time pads) and required significant human action to intercept. By contrast, digital covert systems, even when encrypted, exist within the globally indexed infrastructure of the Internet. Any reuse of code, hosting, or metadata creates a fingerprint that can be detected through open-source intelligence (OSINT) techniques. The “X-factor” was less an unknown human leak than a manifestation of this new technological environment. The Agency had built a secret system inside a public network and underestimated the degree to which its digital seams could be analyzed by adversarial FIS.

The forensic model resolves apparent contradictions in early assessments. CIA officials believed the COVCOM used in China was “more robust” than those in other theaters, implying stronger encryption, better authentication and other tradecraft goodies (CIA Inspector General, 2017). Nonetheless, it collapsed thoroughly. The pattern-matching explanation shows why robustness in cryptography could coexist with fragility in topology. The system’s security depended not only on code strength but also on architectural compartmentation. The Agency’s reuse of templates, hosting blocks, and design elements was weak tradecraft. It undermined that compartmentation and created a single attack surface.

It is important to recognize that the web-discoverability hypothesis complements rather than replaces the other two causes. Lee’s betrayal and intrinsic platform weaknesses likely provided the initial penetration points that allowed adversaries to begin to dig. The enumeration process then magnified those breaches exponentially. The CIA has not publicly confirmed this reconstruction, understandably. Nonetheless, independent open-source evidence strongly supports the inference that the network’s design flaws were decisive.

The lessons extend beyond one agency or episode. The COVCOM failure demonstrates that operational hygiene in digital clandestine systems is as critical as cryptographic soundness and insider-threat mitigation. A covert communication platform can fail not because its cipher is broken, but because its metadata is out in the wild. This insight has profound implications for modern intelligence and, of course, counterintelligence work. As state and non-state actors deploy increasingly networked clandestine capabilities, the old principle of “need to know” must be re-engineered into “need to connect.” Going forward, it would be foolish not to design communication platforms so that every covert node is architecturally unique. Different code bases, hosting, and design fingerprints are imperative to avoid global correlation. The COVCOM collapse shows the lethal cost of violating that principle.
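
Defensively, that uniqueness requirement is auditable. The short sketch below, with invented node records and attribute names, flags any pair of covert nodes that share a correlatable attribute such as a template hash, hosting ASN, or registrar, which is precisely the seam an adversary would pivot on. It is a design illustration, not a real pre-deployment tool.

```python
"""Minimal pre-deployment audit sketch: flag any correlatable attribute shared
by two covert nodes. Node records and attribute names are invented."""
from itertools import combinations

nodes = [
    {"id": "node-a", "template_hash": "9f2c", "hosting_asn": "AS64500", "registrar": "Reg-1"},
    {"id": "node-b", "template_hash": "9f2c", "hosting_asn": "AS64501", "registrar": "Reg-2"},
    {"id": "node-c", "template_hash": "77d0", "hosting_asn": "AS64502", "registrar": "Reg-3"},
]

for a, b in combinations(nodes, 2):
    shared = [k for k in ("template_hash", "hosting_asn", "registrar") if a[k] == b[k]]
    if shared:
        # Any shared attribute is a correlation seam an adversary can pivot on.
        print(f"{a['id']} and {b['id']} share {shared}: redesign before deployment")
```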

So, the CIA’s network failures in China were not caused solely by human treachery or inadequate encryption but also by an invisible architectural flaw: the covert web infrastructure could be mapped once any part of it was exposed. This vulnerability, amplified by Lee’s betrayal and existing COVCOM weaknesses, created a perfect storm that allowed adversaries to dismantle entire espionage networks with unprecedented speed. The “X-factor” was not mystical but mathematical, an emergent property of pattern recognition within an interconnected Internet. The episode stands as a cautionary tale that in the digital age, secrecy depends not merely on keeping information encrypted but on ensuring that the very existence of the system remains undiscoverable. Sophisticated FIS such as China’s have the capacity to “de-clandestine” such systems, and far too quickly.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

Central Intelligence Agency. (2021). Inspector General’s review of clandestine communication failures (declassified summary). Langley, VA.

Citizen Lab. (2022). Statement on the fatal flaws found in a defunct CIA covert communications system. University of Toronto. https://citizenlab.ca/2022/09/statement-on-the-fatal-flaws-found-in-a-defunct-cia-covert-communications-system/

Overt Defense. (2022, October 5). Poorly designed CIA websites likely got spies killed. https://www.overtdefense.com/2022/10/05/poorly-designed-cia-websites-likely-got-spies-killed/

Risen, J. (2018, May 21). How China used a hacked CIA communications system to hunt down U.S. spies. The New York Times.

Security Boulevard. (2018, June 6). The espionage of former CIA case officer Jerry Chun Shing Lee for China.

U.S. Department of Justice. (2019). Former CIA officer sentenced for conspiring to commit espionage. Press release, April 19, 2019.