Operation Merlin: A D&D Failure by Strategic Compromise

Operation Merlin, denial and deception, D&D, intelligence, counterintelligence, espionage, counterespionage, HUMINT, C. Constantin Poindexter, CIA, NSA, DIA

Operation Merlin: A Denial and Deception Case Study in Covert Sabotage and the Anatomy of a Strategic Blunder of Enormous Proportions

Operation Merlin was a clandestine CIA program designed to undermine Iran’s nuclear weapons development program by inserting deliberately sabotaged warhead component blueprints through a recruited human asset. Executed from approximately 1998 through the early 2000s, the operation was an ambitious attempt at deception against a state-level nuclear proliferator. Here I examine Operation Merlin through the lens of Denial and Deception (D&D) doctrine, evaluating its design, execution, and compromise against accepted deception planning frameworks. Drawing on trial exhibits from United States v. Sterling (2015), investigative reports, and foundational D&D literature, I conclude that Operation Merlin, while possessing a sound deception concept, suffered from catastrophic failures in channel selection, feedback architecture, operational security, and post-compromise institutional decision-making that collectively rendered it not merely ineffective but potentially counterproductive to the national security interests it was designed to serve.

I. Introduction: Deception as Counterproliferation

The use of deception as a counterproliferation tool occupies an uncomfortable space in American intelligence history. Unlike tactical battlefield deception or strategic wartime misdirection, domains in which the United States and its allies developed sophisticated doctrinal frameworks during the Second World War, deception operations targeting foreign weapons programs operate in a gray zone where the consequences of failure are measured not in lost engagements but in accelerated existential threats. Operation Merlin sits at the center of this tension: an operation whose architects understood the strategic imperative but whose execution betrayed a fundamental misapprehension of the doctrinal requirements for successful material deception against a sophisticated state adversary.

To offer a robust evaluation of Merlin, we need to move beyond the narrative of its public exposure (the prosecution of CIA case officer Jeffrey Sterling, the journalism of James Risen, the spectacle of a federal trial in which CIA operatives testified behind seven-foot partitions) and instead subject the operation to the same analytical framework that professional deception planners apply to their own work. This essay applies the six-element D&D planning framework derived from Barton Whaley’s foundational taxonomy in Stratagem: Deception and Surprise in War (Whaley, 2007), Richards Heuer’s cognitive analytical model from Psychology of Intelligence Analysis (Heuer, 1999), and the operational principles codified in Joint Publication 3-13.4, Military Deception (Joint Chiefs of Staff, 2012), supplemented by the historical precedent of the XX Committee’s Double Cross System as the benchmark for successful material deception at scale.

II. Strategic Context and the Deception Concept

By the late 1990s, the U.S. Intelligence Community assessed with growing confidence that Iran was pursuing nuclear weapons capability, though the evidentiary basis for this assessment remained contested internally. The 2001 National Intelligence Estimate, the first to formally conclude that Iran was working toward a nuclear weapon, was later characterized by Paul Pillar, then the CIA’s National Intelligence Officer for the Near East and South Asia, as resting on “a matter of inference” rather than direct evidence (Porter, 2014). Nevertheless, the policy imperative to disrupt Iran’s nuclear trajectory was acute, and the menu of available options was constrained by the absence of a viable military target set and by the absence of a workable diplomatic framework; the JCPOA would not materialize for another fifteen years.

Into this gap stepped the CIA’s Directorate of Operations with a proposal rooted in material deception: recruit a Russian nuclear scientist with legitimate technical credentials, provide him with doctored blueprints for a nuclear warhead firing set, and direct him to deliver these blueprints to Iranian officials under the legend of a mercenary walk-in seeking financial compensation for proliferation-grade technical intelligence (Risen, 2006).

Within Whaley’s taxonomy, this concept falls squarely under the category of “mimicking”: creating a false artifact that imitates a real one closely enough to be accepted as authentic by the target (Whaley, 2007). The doctored blueprints were not fabrications from whole cloth; they were based on genuine Russian weapons designs, modified to contain dozens of hidden engineering flaws that would cause any device constructed from them to fail. The deception’s success depended on the flaws being sufficiently subtle to evade detection by Iranian scientists while being sufficiently fundamental to render the resulting weapon inoperable.

The concept was sound. Material deception (the introduction of fabricated or corrupted physical artifacts into an adversary’s intelligence or procurement stream) has a long and occasionally successful history, from Operation Mincemeat’s fictitious invasion plans in 1943 to the CIA’s Cold War-era contamination of Soviet technical collection channels. The critical question was never whether the concept could work in principle, but whether the CIA possessed the operational infrastructure, tradecraft discipline, and institutional patience to execute it against a counterintelligence-aware adversary like Iran.

III. Operational Design and Execution

The operation’s centerpiece was a human asset — a Russian nuclear engineer recruited by the CIA and referred to at trial under the cryptonym “Merlin” (United States Department of Justice [USDOJ], 2015). Merlin possessed genuine scientific credentials, making him a plausible vector for the delivery of proliferation-grade material. His CIA handler from November 1998 through May 2000 was case officer Jeffrey Alexander Sterling, who managed the asset relationship and coordinated the operational logistics of the delivery (USDOJ, 2015).

The delivery was designed to exploit a known vulnerability in Iran’s procurement architecture: its reliance on intermediaries and walk-in sources for weapons-relevant technical intelligence. Merlin was directed to approach Iran’s mission to the International Atomic Energy Agency (IAEA) in Vienna, Austria, and provide an incomplete set of the doctored blueprints. The incompleteness was deliberate. It created an incentive structure requiring the Iranians to re-contact Merlin for the remaining schematics, thereby confirming acceptance of the bait and potentially opening a sustained intelligence collection channel into Iran’s nuclear procurement apparatus (Risen, 2006).

Former National Security Adviser Condoleezza Rice testified at Sterling’s trial that the program was “one of the only levers we had to try to disrupt Iran’s nuclear program” and characterized it as among the government’s “most closely held secrets” (Barakat, 2015). Rice further stated that she personally intervened with the New York Times to suppress publication of a story about the operation, arguing that exposure could result in catastrophic loss of life (Gerstein, 2015).

The execution in February 2000 deviated significantly from the operational plan. Merlin’s testimony at trial revealed that he had difficulty locating the Iranian mission in Vienna. When he found it, no one answered the door. He ultimately placed the envelope containing the blueprints in a mailbox and covered it with a newspaper (Solomon, 2015). Additionally, Merlin deviated from his handlers’ instructions regarding the contact mechanism: rather than providing an American mailing address as directed, he substituted an email address, reasoning that an American postal address would appear suspicious to Iranian counterintelligence and could be traced back to him (Solomon, 2015).

These deviations carry significant implications when evaluated against D&D doctrine. An asset who autonomously modifies operational parameters based on his own risk calculus (however rational that calculus may be) introduces uncontrolled variables into the deception architecture. More critically, Merlin’s technical competence, which made him a credible channel, simultaneously made him capable of evaluating the material he was tasked to deliver. According to Risen’s account, Merlin recognized the deliberate flaws in the schematics and transmitted his suspicions along with the delivery, signaling to the Iranians that the blueprints were manufactured by an intelligence service and allowing Iranian scientists to identify and discard the sabotaged elements while extracting legitimate technical data (Risen, 2006). Merlin denied these characterizations under oath, testifying that Risen’s depiction of him as reluctant was “completely untrue” (Solomon, 2015). The divergence itself is analytically significant: if Risen’s source was not Merlin, then whoever provided those details possessed the kind of intimate operational knowledge consistent with a case officer’s access.

IV. D&D Doctrinal Evaluation

A. Desired Perception

The foundational requirement of any deception operation is a clearly defined desired perception, i.e., the specific belief the operation is designed to induce in the target’s mind (Joint Chiefs of Staff, 2012). Operation Merlin’s desired perception was straightforward: that the blueprints were genuine proliferation material obtained through an illicit procurement channel (a disgruntled or mercenary Russian scientist selling weapons knowledge for financial gain).

This perception was plausible on its face. Russian nuclear scientists in the post-Soviet period were documented to be underpaid, underemployed, and in some cases actively solicited by proliferating states. The desired perception exploited a real phenomenon, which is doctrinally correct. The most effective deceptions are those anchored in patterns the target already recognizes and expects (Heuer, 1999). Assessment: Adequate.

B. The Deception Story

The constructed narrative, a Russian scientist approaching Iran’s IAEA mission as a walk-in, offering warhead-grade schematics for money, was coherent as a standalone legend. Walk-in approaches by foreign nationals offering technical intelligence were not unprecedented in proliferation networks.

However, there is no indication in the trial record that the CIA subjected this story to rigorous adversarial analysis, what practitioners call red-teaming. The planners failed to examine specifically how Iran’s Ministry of Intelligence and Security (VEVAK) would process and evaluate a cold-approach walk-in offering firing set blueprints. VEVAK had extensive institutional experience identifying Western intelligence provocations, and a walk-in of this nature (an unsolicited source offering the single most sensitive category of weapons data, with no prior relationship or established bona fides) would have triggered significant counterintelligence scrutiny. The absence of documented red-team analysis suggests the deception story was evaluated for internal plausibility rather than adversarial resilience. Assessment: Deficient.

C. Channel Selection

D&D doctrine, codified in lessons from the London Controlling Section’s World War II operations and subsequent CIA and DoD guidance, instructs that the credibility of the delivery channel is the single most critical variable in material deception. The channel must be one that the adversary already trusts or is predisposed to trust, typically because the source has previously provided verified intelligence, is embedded in a network the adversary already exploits, or mimics an approach pattern the adversary has successfully used before (Holt, 2004).

From the perspective of Iran’s intelligence services, Merlin possessed none of these attributes. He was an unknown entity conducting a cold approach, and his operational execution was amateurish: he was unable to locate the mission and left the material in an unattended mailbox. From an Iranian counterintelligence officer’s perspective, applying the analytical principles Heuer articulated, the approach contained no prior cognitive anchor that would predispose acceptance (Heuer, 1999). The channel was cold, unvetted from the target’s vantage point, and operationally clumsy.

The Double Cross System offers an instructive historical comparison. The XX Committee’s deception channels, turned German agents who fed disinformation to the Abwehr, were effective precisely because they were channels the adversary had already accepted and validated through prior intelligence exchanges. Double Cross built credibility over months and years of carefully calibrated true-false reporting mixtures before introducing critical strategic deceptions like FORTITUDE. Operation Merlin attempted to deliver the equivalent of FORTITUDE-grade material through a channel with zero established credibility. Assessment: Critically Deficient.

D. Feedback Architecture

The operation’s feedback mechanism was its most elegant design element: the deliberate incompleteness of the blueprints created a natural trigger requiring Iran to re-contact Merlin for the remaining schematics, thereby confirming acceptance.

The problem was singular and fatal: Iran never responded. This silence created an analytical void that the operation had no means to resolve. The CIA could not determine whether Iran had detected the deception and discarded it, had accepted the material but chose to develop it independently, had never routed the material to a competent analyst, or whether VEVAK had flagged the approach as a provocation and filed it as a counterintelligence reference.

Well-designed deception operations maintain redundant feedback mechanisms precisely to prevent this kind of interpretive paralysis. The Double Cross System’s feedback architecture, continuous monitoring of German assessments through ULTRA decrypts of Abwehr and OKW communications, allowed deception planners to observe in near-real-time whether their false intelligence was being accepted, rejected, or partially integrated, and to adjust their deception stories accordingly (Howard, 1995). Operation Merlin had a single feedback point, and when that point went silent, the operation was effectively blind. No secondary collection mechanism (SIGINT, HUMINT from other sources inside Iran’s nuclear apparatus, or technical surveillance of Iranian procurement activity) was established to provide independent confirmation of the operation’s effect. Assessment: Critically Deficient.

E. Adaptability

Nothing in the trial record indicates that the CIA developed contingency plans for the various failure modes the operation might encounter — Iranian detection, asset compromise, the asset’s autonomous deviation from instructions, or operational exposure through internal security breaches. The reassignment of Sterling in May 2000 without documented succession planning or compartmentation review further suggests that continuity of operations planning was inadequate (USDOJ, 2015). Sterling was the only officer with intimate knowledge of the asset, and when he subsequently entered an adversarial posture with the Agency, there was no adaptive mechanism to contain the resulting vulnerability. Assessment: Critically Deficient.

F. Operational Security

This is where Operation Merlin became a catastrophic failure. The universe of individuals with knowledge of the operation expanded steadily: the President, the National Security Adviser, senior CIA leadership, multiple case officers, the Russian asset and his wife, and, after Sterling raised concerns through ostensibly proper channels, staffers on the Senate Select Committee on Intelligence. Each additional read-in was a point of potential compromise.

The most fundamental security failure was personnel-related. Sterling possessed direct, intimate knowledge of the operation: the asset’s identity, the tradecraft, and the operational dynamics. He was reassigned and then, within three months, became an Agency adversary. Counterintelligence doctrine requires enhanced monitoring of personnel with access to sensitive compartmented information who demonstrate indicators of potential unreliability, and those indicators certainly include legal disputes with the employing agency. There is no indication that any such monitoring was implemented (Gerstein, 2015; Solomon, 2015). Assessment: Catastrophically Deficient.

V. The Vectors of Compromise

Operation Merlin was compromised through three distinct vectors, each representing a failure at a different level of the D&D security architecture.

The asset’s autonomous judgment constituted the first vector. Merlin’s technical competence, the very attribute that made him a credible channel, enabled him to evaluate and potentially undermine the material he was tasked to deliver. This is a structural paradox inherent in using technically sophisticated assets for material deception: the more credible the channel, the more capable it is of detecting and subverting the deception it carries.

The case officer’s grievance constituted the second vector. The prosecution established through communications metadata that Sterling and Risen were in contact during the periods preceding and following the publication of State of War, i.e., phone calls to Risen’s residence, emails containing articles related to Sterling’s former operational portfolio, and continued contact from December 2003 through November 2005 (USDOJ, 2015). Sterling’s defense argued that Senate Intelligence Committee staffers were a more plausible source and that the government’s evidence proved only communication, not the transmission of classified content (Wheeler, 2015). The jury found the circumstantial evidence sufficient, convicting Sterling on nine felony counts on January 26, 2015, and Judge Leonie Brinkema sentenced him to forty-two months (USDOJ, 2015).

The government’s self-compromise constituted the third and most strategically damaging vector. In prosecuting Sterling under the Espionage Act, the government introduced CIA operational cables, internal planning documents, and testimony from twenty-three CIA officers into the public record of a federal courtroom (Solomon, 2015). The trial revealed the operational concept, the asset’s role, the delivery methodology, the nature of the sabotaged blueprints, and the strategic rationale in far greater specificity than Risen’s book had disclosed. Bloomberg News reported from Vienna that the IAEA would “probably review intelligence they received about Iran as a result of the revelations,” with a former British envoy to the IAEA warning that the disclosures suggested “a possibility that hostile intelligence agencies could decide to plant a ‘smoking gun’ in Iran for the IAEA to find” (Solomon, 2015). Prosecutor James Trump acknowledged at sentencing that the exposure “ended the use of the nuclear-plans ruse against other countries” (Gerstein, 2015).

This third vector represents the most consequential D&D failure. In attempting to punish a compromise that had exposed a single operation, the government’s prosecution compromised an entire deception methodology. Any state with access to the public trial record (which now constitutes the most comprehensive open-source documentation of a CIA material deception program targeting a foreign nuclear capability) could retroactively audit its own procurement channels for similar operations and inoculate itself against future attempts. This is precisely why I characterize the failure as strategic rather than tactical or operational.

VI. The Anti-Double Cross

Evaluated in its totality against the D&D planning framework, Operation Merlin represents something approaching the inverse of the Double Cross System. Where Double Cross maintained dozens of simultaneous channels with established credibility, Merlin relied on a single cold channel with no prior validation. Where Double Cross monitored adversary acceptance in near-real-time through ULTRA, Merlin had a single feedback mechanism that produced silence. Where Double Cross adapted its deception narratives continuously based on observed adversary reactions, Merlin had no adaptive capability. Where Double Cross maintained ruthless operational security — including the execution of compromised agents — Merlin allowed a disaffected case officer with comprehensive operational knowledge to depart the agency in an adversarial posture without enhanced counterintelligence monitoring.

The strategic concept underlying Operation Merlin (using sabotaged technical intelligence to misdirect a proliferating state’s weapons development) was theoretically sound; in a different operational context, I believe it would have been entirely viable. The failure was not conceptual but executional: a series of compounding deficiencies in channel selection, feedback architecture, adaptability, and operational security that transformed an ambitious deception operation into what may ultimately have been a net intelligence gain for the very adversary it was designed to deceive.

For the counterintelligence professional, Operation Merlin’s most enduring lesson may be its final chapter. The institutional impulse to punish unauthorized disclosure, when pursued through the adversarial transparency of a federal prosecution, can inflict damage orders of magnitude greater than the original compromise. The prosecution of Jeffrey Sterling did not restore the secrecy of Operation Merlin. It annihilated it. With it went the viability of an entire category of covert action against nuclear proliferators for the foreseeable future.

Whichever vector one judges most damaging, the results were, and remain, severe. The operation is now a template. Any state with a competent intelligence service and access to the trial record (which is to say, virtually every state) can retroactively audit its own procurement channels for operations matching this pattern. The Agency has also inoculated the adversary set: every proliferating state now possesses a known reference case for how the U.S. Intelligence Community constructs material deception against nuclear programs. Add the diplomatic blowback with the IAEA and the lingering analytical poisoning in the Iran theater, and the picture becomes uglier still.

C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Barakat, M. (2015, January 16). CIA asset ‘Merlin’ testifies about mission at CIA leak trial. Associated Press.
  • Gerstein, J. (2015, May 11). Former CIA officer sentenced to 3-1/2 years for leaking Iran details. Politico.
  • Heuer, R. J. (1999). Psychology of intelligence analysis. Center for the Study of Intelligence, Central Intelligence Agency.
  • Holt, T. (2004). The deceivers: Allied military deception in the Second World War. Scribner.
  • Howard, M. (1995). Strategic deception in the Second World War: British intelligence operations against the German High Command. W. W. Norton.
  • Joint Chiefs of Staff. (2012). Joint Publication 3-13.4: Military deception. U.S. Department of Defense.
  • Porter, G. (2014). Manufactured crisis: The untold story of the Iran nuclear scare. Just World Books.
  • Risen, J. (2006). State of war: The secret history of the NSA and the Bush administration. Free Press.
  • Solomon, N. (2015, February 27). CIA evidence from whistleblower trial could tilt Iran nuclear talks. Guernica.
  • United States Department of Justice. (2015, May 11). Former CIA officer sentenced to 42 months in prison for leaking classified information and obstruction of justice [Press release].
  • United States of America v. Jeffrey Alexander Sterling, No. 1:11-cr-00005 (E.D. Va. 2015). Selected case files. Federation of American Scientists, Project on Government Secrecy.
  • Whaley, B. (2007). Stratagem: Deception and surprise in war. Artech House.
  • Wheeler, M. (2015, February 21). What was the CIA really doing with Merlin by 2003? EmptyWheel.

Partisan Crap Characterizes the 2026 I.C. Threat Assessment

national threat assessment, intelligence community, CIA, NSA, DIA, espionage, counterespionage, intelligence, counterintelligence, C. Constantin Poindexter

Unvarnished No More: The 2026 Annual Threat Assessment and the Politicization of American Intelligence, a Critical Analysis of Departures from Intelligence Community Analytical Traditions

On March 18, 2026, Director of National Intelligence Tulsi Gabbard presented the 2026 Annual Threat Assessment (ATA) to the Senate Select Committee on Intelligence, fulfilling the Intelligence Community’s statutory obligation under Section 617 of the FY21 Intelligence Authorization Act. The document’s own introduction pledges to deliver “nuanced, independent, and unvarnished intelligence” to policymakers (Office of the Director of National Intelligence [ODNI], 2026, p. 2). Yet a careful comparison of the 2026 ATA with its predecessors reveals systematic omissions, rhetorical softening, and political editorializing that collectively undermine the document’s claim to analytical independence. I argue that the 2026 ATA departs from Intelligence Community analytical traditions in ways that align with the administration’s political preferences, particularly regarding Russia, domestic extremism, and climate, and that these departures represent a failure of the DNI’s duty to provide unvarnished intelligence to Congress and the American people.

The significance of this argument cannot be overstated. The ATA exists precisely because democratic governance requires that elected officials receive honest assessments of threats, unfiltered by political convenience. Intelligence Community Directive 203, issued in 2007, codified the community’s formal tradecraft standards, mandating objectivity, transparency regarding sources and assumptions, and independence from political considerations (Just Security, 2025). The Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) further requires that the DNI ensure intelligence products are “timely, objective, independent of political considerations, based upon all sources of available intelligence, and employ the standards of proper analytic tradecraft” (Pub. L. No. 108-458, § 1019). When an ATA is shaped to avoid contradicting the sitting president’s preferred narratives, it ceases to function as intelligence and instead becomes an instrument of political communication.

The Softening of Russia as a Strategic Threat

The 2024 ATA, produced under DNI Avril Haines, described Russia’s aggression in Ukraine as underscoring that Moscow “remains a threat to the rules-based international order” (ODNI, 2024, p. 5). The 2026 ATA, by contrast, introduces conciliatory language throughout its Russia analysis that reads less like threat assessment and more like diplomatic aspiration. It states that “Russia’s aspirations for multipolarity could allow for selective collaboration with the U.S. if Moscow’s threat perceptions regarding Washington were to diminish” and suggests that “a durable settlement to the war in Ukraine could open the door for a thaw in U.S.–Russia relations and an improved bilateral geostrategic and commercial relationship” (ODNI, 2026, pp. 27–28). This framing mirrors the administration’s diplomatic posture toward Moscow rather than the IC’s traditional threat-focused analytical lens.

The document further characterizes the concept of adversary alignment among China, Russia, Iran, and North Korea as overstated, calling it “limited and primarily bilateral” and asserting that the notion “overstates the depth of cooperation that is currently occurring” (ODNI, 2026, p. 20). This downgrading arrives despite the IC’s own acknowledgment in the same document that North Korea deployed over 11,000 troops to support Russian combat operations in Ukraine (ODNI, 2026, p. 24). The analytical minimization of adversary cooperation is consistent with President Trump’s longstanding reluctance to characterize Russia as an adversary, a posture that dates to his public siding with Vladimir Putin over U.S. intelligence findings at the 2018 Helsinki summit (Foreign Policy Research Institute [FPRI], 2019), as well as with views Gabbard expressed publicly even before she joined the Intelligence Community.

The Disappearance of Foreign Election Interference

Perhaps the most conspicuous omission in the 2026 ATA is the near-total absence of any discussion of foreign interference in U.S. elections. As Defense One reported, this marks the first time in nearly a decade that foreign threats to U.S. elections have been omitted from the annual threat assessment (Defense One, 2026). The 2024 ATA explicitly warned that China, Russia, and Iran would attempt to interfere in U.S. elections using generative AI and other means (ODNI, 2024). The 2025 DHS Homeland Threat Assessment similarly identified the 2024 election cycle as “an attractive target for many adversaries” and warned that nation-state-aligned actors would “continue to target democratic processes” (DHS, 2024, p. 4). The ODNI itself published a separate report titled “Foreign Threats to US Elections After Voting Ends in 2024” (ODNI, 2024b). That this entire threat category has vanished from the 2026 ATA is analytically inexplicable absent political motivation.

When Senator Mark Warner, the panel’s top Democrat, pressed Gabbard on this omission at the March 18 hearing, asking whether there was “no foreign threat to our elections in the midterms this year,” Gabbard’s response was evasive, stating only that the IC “has been and continues to remain focused on any collection and intelligence that show a potential foreign threat” (Defense One, 2026). This non-answer is consistent with DNI Gabbard’s broader pattern of minimizing Russian interference in American democracy. In July 2025, Gabbard declassified documents she claimed exposed a “treasonous conspiracy” by Obama-era officials regarding the 2016 Russian interference findings—allegations that multiple investigations, including the Republican-led Senate Intelligence Committee’s own probe, had already examined and found unsubstantiated (CNN, 2025; Lawfare, 2025). As the Council on Foreign Relations assessed, Gabbard’s actions have “deprived her of any pretension to analytical judgment independent of the president” (Betts, 2025).

The Erasure of Domestic Violent Extremism

The 2026 ATA’s terrorism section is focused almost exclusively on Islamist terrorism. Domestic violent extremism (DVE)—a category that encompasses racially or ethnically motivated extremism, anti-government militias, and other ideologically motivated domestic threats—receives no dedicated treatment. This stands in stark contrast to years of IC and DHS assessments that identified DVE as among the most persistent threats to the homeland. The DHS’s 2024 Homeland Threat Assessment warned that domestic violent extremists “driven by various anti-government, racial, or gender-related motivations” had conducted multiple attacks and that law enforcement had disrupted additional plots (DHS, 2024). The FBI reported over 1,700 domestic terrorism investigations underway as of late 2024 (House Homeland Security Committee, 2025). The Government Accountability Office released a comprehensive report in 2025 documenting the federal government’s ongoing domestic terrorism strategies and the persistent nature of the threat (GAO, 2025).

The omission of DVE from the 2026 ATA aligns with the Trump administration’s broader effort to reframe the terrorism discourse around Islamist ideology while downplaying threats from domestic actors whose motivations often overlap with right-wing political movements. The 2026 ATA’s extended discussion of the Muslim Brotherhood and its characterization of Islamist ideology as a “fundamental threat to freedom and foundational principles that underpin Western Civilization” (ODNI, 2026, p. 8) represents an analytical emphasis not seen in prior ATAs, which treated the terrorism landscape as ideologically diverse. This selective emphasis serves the administration’s political narrative while leaving Congress and the public without the IC’s assessment of a threat category that the FBI’s own data indicates remains active and lethal. It also gives cover to a not-insignificant group of Trump supporters, an effect that appears purposeful.

The Removal of Climate Change as a Security Threat

The 2024 ATA treated climate change as a significant threat multiplier, stating that “the accelerating effects of climate change are placing more of the world’s population, particularly in low- and middle-income countries, at greater risk from extreme weather, food and water insecurity, and humanitarian disasters, fueling migration flows and increasing the risks of future pandemics” (ODNI, 2024, p. 5). Climate change appeared throughout that document as a driver of instability across multiple regions, including in assessments of Iran’s water scarcity challenges. The 2026 ATA eliminates climate change entirely as a named threat category. The term does not appear once. A single passing reference to “extreme weather events” in the migration section (ODNI, 2026, p. 7) is the only remnant of what had been a substantial analytical thread across multiple prior assessments.

This excision is not analytically defensible. The physical phenomena that made climate change a security concern in 2024 have not abated in 2026; if anything, the scientific consensus has strengthened. The removal reflects the Trump administration’s hostility toward climate science as a policy matter—a political preference that has no legitimate bearing on an intelligence community’s assessment of how environmental change affects geopolitical stability, food security, migration patterns, and conflict risk. The DNI’s role is to present the IC’s best assessment of reality, not to curate that reality to avoid topics the White House considers ideologically inconvenient.

Political Editorializing in an Intelligence Product

The 2026 ATA’s Foreword contains language that would have been unthinkable in prior assessments. It credits “President Trump sealing the U.S.–Mexico border” for enforcement successes and notes that “fentanyl seizures by weight have decreased 56 percent at the U.S.–Mexico border since President Trump took office” (ODNI, 2026, pp. 4–5). Annual threat assessments have traditionally employed dry, institutional prose that avoids attributing policy outcomes to individual political leaders by name. The function of an ATA is to assess threats, not to validate a president’s policy record. This departure transforms portions of what should be an analytical document into something resembling a political communication.

The editorializing extends beyond border policy. The Foreword adopts the administration’s rhetorical framework wholesale, stating that “we should be cautious about thinking that every problem in the world directly threatens us” (ODNI, 2026, p. 4)—a statement that, while perhaps reasonable in isolation, mirrors the administration’s America First foreign policy framing rather than reflecting IC analytical tradition. As scholars at the Foreign Policy Research Institute have warned, when political appointees shape intelligence products to serve the president’s messaging priorities, the core mission of the intelligence community—to provide independent analysis that may contradict leadership preferences—is fundamentally compromised (FPRI, 2019). The AEI documented how Gabbard fired the acting chair of the National Intelligence Council and his deputy after they produced assessments that contradicted administration positions, then physically relocated the NIC to her office to prevent what she characterized as “politicization” (American Enterprise Institute, 2025).

My Thoughts

In my view, the cumulative effect of these five departures, namely the softening of Russia’s threat profile, the erasure of foreign election interference, the omission of domestic violent extremism, the elimination of climate change as a security concern, and the introduction of political editorializing, is an Annual Threat Assessment that fails its statutory and institutional purpose. Each omission or distortion aligns with known political preferences of the Trump administration, and each contradicts the IC’s own recent analytical record. The IRTPA requires the DNI to ensure that intelligence is “independent of political considerations.” Intelligence Community Directive 203 mandates “objectivity, transparency regarding sources and assumptions, and independence from political considerations” (Just Security, 2025). The 2026 ATA, by its own internal evidence, fails both standards.

The consequences of this failure extend beyond the document itself. When intelligence products become vehicles for political messaging, policymakers lose the independent analytical baseline they need to make informed decisions. Congressional oversight is undermined when the IC’s primary public-facing threat assessment omits entire threat categories for political reasons. And public trust in the intelligence community, already strained by decades of controversy, erodes further when citizens can compare successive ATAs and observe that threats appear and disappear not because the world has changed but because the White House has changed. As Richard Betts of the Council on Foreign Relations observed, intelligence’s prime value often lies in telling leaders facts or implications they do not want to hear (Betts, 2025). A DNI who cannot or will not fulfill that function has, in the most consequential sense, abdicated the office’s reason for existing. The inconvenient truth is that the DNI’s acts and omissions are willful, a fact on perfect display during the congressional hearing today (March 18th), during which Gabbard said, “Senator, the only person who can determine what is and is not an imminent threat is the president.” The intelligence community’s primary task is to provide warning intelligence, which is the very definition of reporting an “imminent threat.”

C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

  • American Enterprise Institute. (2025, May 21). The politicization of intelligence. AEI. https://www.aei.org/articles/the-politicization-of-intelligence/
  • Betts, R. K. (2025, August 21). The intelligence community’s politicization: Dueling to discredit. Council on Foreign Relations. https://www.cfr.org/articles/intelligence-communitys-politicization-dueling-discredit
  • Defense One. (2026, March 18). Annual threat assessment omits election security. https://www.defenseone.com/policy/2026/03/annual-threat-assessment-election-security/412217/
  • Department of Homeland Security. (2024). 2025 Homeland Threat Assessment. https://www.dhs.gov/sites/default/files/2024-10/24_1002_ia_homeland-threat-assessment-2025.pdf
  • Foreign Policy Research Institute. (2019, August 12). A nadir is reached in the politicization of U.S. intelligence. https://www.fpri.org/article/2019/08/a-nadir-is-reached-in-the-politicization-of-u-s-intelligence/
  • Government Accountability Office. (2025). Domestic terrorism: Additional actions needed to implement the national strategy (GAO-25-107030). https://www.gao.gov/assets/gao-25-107030.pdf
  • House Homeland Security Committee. (2025, December 19). Threat snapshot: House Homeland unveils updated “Terror Threat Snapshot” assessment. https://homeland.house.gov/2025/12/19/threat-snapshot/
  • Intelligence Reform and Terrorism Prevention Act of 2004, Pub. L. No. 108-458, 118 Stat. 3638.
  • Just Security. (2025, June 20). When intelligence stops bounding uncertainty: The dangerous tilt toward politicization under Trump. https://www.justsecurity.org/114297/trump-administration-politicized-intelligence/
  • Lawfare. (2025, August 6). From Russian interference to revisionist innuendo: What the Gabbard files actually say. https://www.lawfaremedia.org/article/from-russian-interference-to-revisionist-innuendo–what-the-gabbard-files-actually-say
  • NBC News. (2024, December 11). Would Tulsi Gabbard bring a pro-Russian bias to intelligence reporting? https://www.nbcnews.com/politics/national-security/will-tulsi-gabbard-bring-russian-bias-intelligence-reporting-rcna180248
  • Office of the Director of National Intelligence. (2024). 2024 Annual Threat Assessment of the U.S. Intelligence Community. https://www.dni.gov/files/ODNI/documents/assessments/ATA-2024-Unclassified-Report.pdf
  • Office of the Director of National Intelligence. (2026). 2026 Annual Threat Assessment of the U.S. Intelligence Community. https://www.dni.gov/files/ODNI/documents/assessments/ATA-2026-Unclassified-Report.pdf
  • PBS NewsHour. (2025, July 24). Gabbard pushes report on Obama and Russia probe. https://www.pbs.org/newshour/show/gabbard-pushes-report-on-obama-and-russia-probe-as-trump-faces-pressure-over-epstein
  • Wittes, B. (2025, July 22). The situation: The lies of Tulsi Gabbard. Lawfare. https://www.lawfaremedia.org/article/the-situation–the-lies-of-tulsi-gabbard

The Takaichi “Prompt Exploit” as Novel Tradecraft: A Counterintelligence Operator’s View of AI Enabled Influence Operations

disinformation, information operations, espionage, counterespionage, intelligence, counterintelligence, psyops, C. Constantin Poindexter, CIA, DIA, NSA

AI Enabled Smear Operations and Counterintelligence Detection: Lessons from the Attempted ChatGPT Exploit Targeting Sanae Takaichi

The attempted exploitation of ChatGPT to support a covert smear campaign against Japanese Prime Minister Sanae Takaichi is not a novelty story about AI gone wrong. It is a clear operational vignette of how modern state-linked actors and foreign intelligence services (FIS) attempt to compress the intelligence cycle and accelerate influence effects with generative tools. OpenAI’s February 25, 2026 threat reporting describes a now-banned ChatGPT account linked to an individual associated with Chinese law enforcement who attempted in mid-October 2025 to leverage the model to plan and execute a covert influence operation aimed at discrediting Takaichi, followed by later requests to edit “cyber special operations” status reports after the model refused the original operational ask (OpenAI, 2026). Public reporting based on that disclosure adds that the actor’s plan included coordinated negative commentary, impersonation techniques, and wedge framing designed to mobilize resentment around U.S. tariffs and immigration narratives (Jiji Press, 2026; Reuters, 2026; Axios, 2026). From a counterintelligence perspective, this is a case study in how an adversary treats a commercial large language model as a low-friction staff officer: ideation, drafting, message discipline, and iterative refinement, all without needing to recruit a human asset or expose internal tradecraft through overt tasking channels.

What makes the episode analytically valuable is the specificity of the improper tasking. Reporting indicates that the actor asked ChatGPT to draft a multi-part plan to discredit Takaichi; to generate, and help post and spread, negative comments attacking her stances, including on immigration; to polish narratives and recurring status reports describing ongoing cyber special operations; and to inflame wedge grievances by amplifying anger over U.S. tariffs on Japan (Jiji Press, 2026; Axios, 2026; OpenAI, 2026). These requests form a recognizable information operations workflow: design the campaign, manufacture content, distribute content (or at least create distribution-ready material), and assess and iterate based on reporting. In classical counterintelligence terms, the operator sought to maximize plausible deniability, minimize cost, and raise tempo, substituting generative capacity for time-consuming human copywriting while reducing the number of personnel who must be read into the narrative engineering function (CISA, 2022; ODNI FMIC, 2024).

The most important counterintelligence observation is that the exploit is not primarily technical. It is procedural and behavioral. Operators do not need to jailbreak a model to gain advantage. They can ask for adjacent assistance such as language polishing, translation, formatting, summarization of internal memos, and audience-tailored variations. OpenAI’s reporting explicitly notes the actor returned after an initial refusal and asked for edits to operational status reports, which is precisely how professional services are laundered in many influence pipelines: when direct enablement is blocked, pivot to editorial support and documentation hygiene (OpenAI, 2026). This aligns with the U.S. government’s framing of foreign malign influence as subversive, undeclared, coercive, or criminal activity that uses multiple pathways and intermediaries, often blending overt platforms with covert personas and synthetic content (ODNI FMIC, 2024; DOJ, n.d.). The model is not the operation. It becomes a friction reducer within the operation.

Seen through the lens of the intelligence cycle, the actor’s approach collapses collection, analysis, production, and dissemination into a tight loop. The multi-part plan request is campaign design, meaning objective, target audience, narrative lines, channels, and timing. The post-and-spread request is dissemination planning and, at minimum, the production of ready-to-publish material. The status report editing request is assessment: codifying observed effects, identifying what resonated, and deciding next moves (OpenAI, 2026; Axios, 2026). When an influence apparatus scales, this loop becomes industrialized: many accounts, multi-platform content seeding, and iterative narrative tuning. Reporting around the OpenAI threat case underscores that these efforts can be large-scale, resource-intensive, and sustained, consistent with a bureaucracy rather than hobbyist trolling (Reuters, 2026; CyberScoop, 2026). As Ben Nimmo has emphasized, the intent is to apply pressure everywhere, all at once, which is characteristic of FIS or state-linked coercive information operations rather than organic political discourse (Axios, 2026).

The operational targeting of Takaichi is also instructive for counterintelligence because it sits at the intersection of influence operations and transnational repression. While this case focuses on a smear campaign against a Japanese political figure, OpenAI’s broader description of the actor’s uploaded materials suggests a wider ecosystem aimed at suppressing dissent and silencing critics, including tactics such as forged documentation and intimidation narratives (OpenAI, 2026; CyberScoop, 2026). The FBI defines transnational repression to include online disinformation campaigns, harassment, intimidation, and abuse of legal processes, exactly the kinds of tools that can be amplified or routinized by AI-assisted content generation (FBI, n.d.). In counterintelligence risk terms, that convergence matters. When an adversary blends influence effects, shaping attitudes, with coercive effects, punishing or deterring speech, the target set expands from voters to voices, and the operational threshold for harm drops.

The wedge grievance element, stoking resentment over U.S. tariffs, illustrates classic influence tradecraft. Hijack a real grievance, inflate it, and attach it to the target as a blame object. This is not persuasion via factual argument. It is agitation via emotional mobilization. CISA guidance on foreign influence operations describes how adversaries exploit mis, dis, and malinformation narratives to bias policy and undermine social cohesion, often by inflaming divisive issues (CISA, 2022). The tariff frame is particularly useful because it can be pitched simultaneously as anti-U.S., blaming Washington, and anti-target, blaming Takaichi’s posture for provoking friction, with variants tailored to different audiences. In counterintelligence vocabulary, this is narrative multi-casting: the same kernel is repackaged into mutually reinforcing storylines for disparate communities.

The cross-platform distribution pattern referenced in public reporting, activity on X and other sites with relatively low engagement but persistent output, resembles the known Chinese influence pattern commonly labeled Spamouflage or Dragonbridge: high volume, mixed quality, low authentic engagement, but sustained presence and periodic tactical evolution (Reuters, 2026; NATO StratCom COE, 2023; Graphika, 2025). Low engagement does not mean low intent or low risk. It can indicate poor tradecraft, early-stage testing, or a campaign optimized for secondary effects such as search pollution, narrative seeding for later pickup, or creating “evidence” of public sentiment that can be cited elsewhere. Counterintelligence professionals should treat low-engagement content as potential scaffolding. The objective may be to build a lattice of posts, screenshots, and proof artifacts that can later be laundered into higher-credibility channels.

From the defender’s side, the case clarifies what model refusal can and cannot do. OpenAI reports that ChatGPT refused overtly malicious prompts, yet the actor appears to have proceeded using other tools and later used ChatGPT for editing (OpenAI, 2026). This reveals a strategic limitation. Safety filters reduce direct enablement. They do not eliminate the underlying operational capability of a state apparatus that can shift to domestic models, human copywriters, or alternative platforms. Effective mitigation requires a layered approach: model-side safeguards, platform-side enforcement, and inter-organizational intelligence sharing that treats AI as one component in a broader influence toolkit (OpenAI, 2026; CISA, 2024). The IC’s Foreign Malign Influence Center has emphasized that foreign malign influence is multi-actor and multi-pathway by design, which implies countermeasures must also be multi-pathway. Detection in one node rarely collapses the whole network (ODNI FMIC, 2024).

For counterintelligence operators, three takeaways are operationally salient. First, generative AI is best understood as an accelerant of existing influence doctrine rather than a replacement. It speeds up drafting, localization, and A/B testing of narratives while enabling bureaucratic reporting to be produced faster and with greater stylistic consistency (OpenAI, 2026; CISA, 2022). Second, the human factor remains the decisive vulnerability. The actor’s interaction with ChatGPT created an evidentiary trail that allowed defenders to correlate stated intent (the request to post and spread negative commentary) with observed online activity. This is a reminder that operational security failures frequently occur in routine administrative behavior (OpenAI, 2026; CyberScoop, 2026). Third, influence and repression are increasingly convergent lines of effort. When disinformation is used not only to persuade but to intimidate, deplatform, or socially punish, the problem set expands to include civil liberties impacts, diaspora targeting, and sovereignty challenges (FBI, n.d.; DOJ, 2023).

In countermeasures terms, the Takaichi case underscores the value of structured analytic techniques in attribution and mitigation. Analysts should separate narrative content, behavioral signals such as posting cadence and account creation patterns, infrastructure signals such as hosting and coordinated link sharing, and procedural artifacts such as templated emails, repeated phrasing, and report formats. OpenAI’s account-level disruption, combined with open-source correlation to online hashtags and posts referenced in operational materials, is a template for fusion analysis that pairs platform telemetry with OSINT validation (OpenAI, 2026). NATO-aligned research similarly emphasizes that state-sponsored or FIS information operations exploit differences across platforms and jurisdictions. Defenders should expect rapid lateral movement when friction increases on any single platform (NATO StratCom COE, 2023).

The attempted exploit is best characterized as an AI-enabled influence operation reconnaissance and production cycle, with the model treated as a drafting cell embedded in a broader state-linked apparatus. The key question is not whether a model can be tasked with dissemination directly. It is whether it can generate dissemination-ready content, standardize narrative discipline, and reduce the time and training required to run a coordinated smear campaign. In this case, it could, at least partially, until refusal controls forced the actor to route around and repurpose the model for editing and reporting (OpenAI, 2026; Jiji Press, 2026). For counterintelligence professionals, that reality demands a posture shift. We must defend not only against disinformation artifacts but against the process improvements that AI grants adversaries. Faster cycles, lower labor costs, and more plausible linguistic camouflage are the new norm. The Takaichi operation appears to have underperformed in engagement, yet it is a forward indicator of how state-backed influence tradecraft is adapting to generative systems: persistent, multi-platform, and procedurally agile (Reuters, 2026; Graphika, 2025).

C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Axios. (2026, February 25). Reporting on OpenAI’s disclosure of a China linked attempt to use ChatGPT to plan and refine a smear campaign targeting Japan’s Prime Minister Sanae Takaichi.
  • Cybersecurity and Infrastructure Security Agency. (2022). Preparing for and mitigating foreign influence operations (CISA Insight).
  • Cybersecurity and Infrastructure Security Agency. (2024, April 17). Guidance for securing election infrastructure against tactics of foreign malign influence (Joint guidance release with FBI and ODNI).
  • CyberScoop. (2026, February 25). Reporting on OpenAI’s threat report and Chinese law enforcement linked “cyber special operations” materials uploaded for editing.
  • Federal Bureau of Investigation. (n.d.). Transnational repression (Overview page describing tactics including online disinformation campaigns, harassment, and intimidation).
  • Graphika. (2025). Chinese state influence (Selected insights from Graphika ATLAS reporting, November 2024 to January 2025).
  • Jiji Press. (2026, February 27). Reporting summarized by Nippon.com on OpenAI’s claim that a Chinese law enforcement official asked ChatGPT to draft a plan to discredit Takaichi and to post and spread negative comments.
  • NATO Strategic Communications Centre of Excellence. (2023). Dragons roar and bears howl: Convergence in Sino Russian information operations in NATO countries.
  • OpenAI. (2026, February 25). Disrupting malicious uses of AI (Threat report describing disruption of accounts, including an influence operation attempt targeting Sanae Takaichi).
  • Reuters. (2026, February 25). Reporting on OpenAI’s threat report detailing misuse of ChatGPT for scams and influence operations, including a smear campaign targeting Japan’s prime minister.
  • Reuters. (2026, February 26). Reporting on a Foundation for Defense of Democracies analysis of China linked influence operations targeting Japan’s elections and Prime Minister Sanae Takaichi, consistent with Spamouflage and Dragonbridge patterns.
  • U.S. Department of Justice. (2023, April 17; updated 2025, February 6). Press release describing charges tied to transnational repression schemes and the use of fake online personas to harass dissidents and disseminate state narratives.
  • U.S. Office of the Director of National Intelligence, Foreign Malign Influence Center. (2024). FMI Primer (Public release defining foreign malign influence and its pathways).

A U.S. Attack on Iran, a Catastrophic Unforced Error

war, warfighter, Iran, U.S., intelligence, counterintelligence, espionage, counterespionage, C. Constantin Poindexter, CIA, NSA

Public and Congressional Support: The Decisive Constraint That Can Turn U.S. Military Dominance Over Iran Into Strategic Defeat

The United States retains overwhelming advantages in the material and operational prerequisites of high-end conventional warfare. In any prospective conflict with Iran, Washington can assume advantages in air and naval superiority, intelligence and surveillance coverage, precision strike capacity, suppression of enemy air defenses, long-distance logistics, and advanced cyber and electronic warfare. Yet those advantages do not automatically translate into strategic success. The decisive variable is not whether the United States can destroy targets faster than an adversary can replace them, but whether the United States can sustain the political mandate to keep fighting after the initial shock of combat wears off, the costs become visible, and the enemy adapts.

This is the core vulnerability of a discretionary war with Iran. Public support and congressional support are not merely background noise or messaging challenges. They are strategic enablers. When they are absent or brittle, they shape rules of engagement, constrain time horizons, narrow acceptable costs, and fracture coalition cohesion. In that environment, even tactically brilliant operations can fail to achieve the most important objectives because political will collapses sooner than the enemy’s capacity to resist. Vo Nguyen Giap articulated this logic explicitly. A belligerent can leave enemy forces partly intact if it can destroy the enemy’s will to remain in the war. (PBS, n.d.) That insight was operationalized against the United States in Vietnam, echoed in Afghanistan, and remains relevant to any prospective United States-Iran war.

The Strategic Center of Gravity: Legitimacy and Endurance

Clausewitz argued that war is a continuation of politics by other means. In American practice, the political character of war is inseparable from constitutional structure and democratic consent. A war that begins without clear congressional authorization, or that proceeds amid broad public skepticism, can win battles while steadily losing its domestic foundation. The War Powers Resolution codifies Congress’s position that the President may introduce U.S. forces into hostilities only pursuant to a declaration of war, specific statutory authorization, or a national emergency created by an attack on the United States or its forces. (50 U.S.C. § 1541, 1973) In a discretionary strike campaign that grows into sustained hostilities, the gap between executive action and legislative consent becomes a recurring legitimacy crisis rather than a one-time procedural dispute.

Recent reporting underscores that this institutional fault line is not theoretical. Reuters reported that the U.S. Senate rejected a bid to curb presidential Iran war powers, reflecting a live, contested debate over authority and oversight in potential Iran hostilities. (Reuters, 2025) That debate matters operationally because contested legitimacy does not remain in Washington. It affects allied basing decisions, overflight permissions, intelligence sharing, escalation thresholds, and the credibility of U.S. signals to both adversaries and partners. A campaign that looks unilateral, politically improvised, or domestically unpopular becomes harder to sustain and easier for Iran and its proxy network to frame as illegitimate aggression.

Material Superiority Versus Political Fragility

From my perspective (military/intelligence), the United States can plausibly execute many of the classic prerequisites outlined above. But those capabilities do not eliminate the central political question: what is the concrete objective, and how long will the American public accept the costs required to achieve it?

Public sentiment data indicates a serious constraint. A University of Maryland Critical Issues Poll found only 21 percent favor the United States initiating an attack on Iran, with 49 percent opposed and 30 percent unsure. (University of Maryland Critical Issues Poll, 2026) A YouGov report covering an Economist YouGov poll likewise found Americans more likely to oppose than to support using military force to attack Iran, with 49 percent opposing and 27 percent supporting, and with significant partisan and independent resistance. (YouGov, 2026) Meanwhile, an AP NORC poll found that while many Americans view Iran as an enemy and express concern about Iran’s nuclear program, they have low trust in presidential judgment on the use of military force, with only about three in ten expressing high trust and more than half expressing little or no trust. (Associated Press NORC, 2026)

That said, all of this has strategic implications. These figures suggest that domestic consent is not merely divided. It is structurally thin, with a large, uncertain middle and a relatively small affirmative mandate for initiating war. Low confidence in the decision maker’s judgment means that early setbacks or civilian casualties can rapidly convert uncertainty into opposition. Further, thin consent invites legislative confrontation, and legislative confrontation invites operational constraints. This is exactly the kind of environment in which an adversary designs a strategy of political attrition rather than symmetrical military competition.

Vietnam: Giap’s Theory of Victory Was Political

The claim that “North Vietnam did not win the Vietnam War” can be true in a narrow kinetic sense. The United States inflicted vast battlefield losses and dominated many tactical engagements. Yet North Vietnam and the Viet Cong were able to outlast the United States by targeting the political will that sustained American participation. Giap described the objective as breaking “the American will to remain in the war,” using operations intended to force de-escalation and reshape the political calculus in Washington. (PBS, n.d.) The point is not that one event alone decided the outcome. The point is that the adversary’s theory of victory treated American domestic endurance as the center of gravity. Once that center weakened, America’s material advantages could not convert into a stable political settlement on acceptable terms.

For an Iran scenario, the parallel is not an exact replay of Vietnam’s terrain or insurgency structure. The parallel lies in strategic method. Iran does not need to win a conventional air-sea contest. It needs to ensure that the United States does not achieve its most important objectives at a politically acceptable cost. If Iran can force Washington into a cycle of escalation and retaliation, or can trigger regional proxy pressure that steadily raises the price of engagement, then the war becomes a contest of domestic patience more than a contest of platforms.

Afghanistan and the Logic of “Time”

The Afghanistan experience reinforces the same strategic logic through a different mode of war. A saying widely attributed to Taliban fighters captures the asymmetry of time horizons: “You have the watches, we have the time.” (Maclean’s, 2017) The exact provenance of the phrase is less important than its strategic meaning. The U.S. administration is about to fall into the same bullshit trap. Democracies fight under time constraints produced by elections, news cycles, budget politics, and public casualty sensitivity. Insurgent and revolutionary actors often fight under generational horizons, with lower sensitivity to near-term losses and a stronger tolerance for prolonged hardship.

Iran’s leadership and its proxy network have repeatedly demonstrated a long-horizon approach to regional strategy. In a conflict, Iran can employ calibrated escalation through proxies, maritime harassment, missile and drone pressure, and political warfare aimed at eroding coalition cohesion (a “coalition” of states that have already publicly objected to U.S. war planning). The objective is not necessarily to defeat U.S. forces in the field. It is to make the conflict feel indefinite, morally ambiguous, and strategically distracting, which are precisely the conditions that drain public support in the United States.

“Shock and Awe” Does Not Solve the Political Problem

Advocates of rapid strike campaigns often argue that overwhelming early force can preempt political attrition by ending the conflict quickly. History offers caution. Initial public support can be high at the onset of a war (pretty clearly not the case today), but it can erode sharply as the war’s duration and costs expand, particularly if the rationale becomes contested. The Iraq example is instructive: Gallup reported 72 percent support for the war against Iraq in late March 2003. (Gallup, 2003) Yet Gallup later documented substantial erosion in perceived worth and support over time as realities on the ground diverged from initial expectations. (Gallup, 2006) The Brookings analysis of early Iraq war opinion similarly underscores the rally effect and its limits. (Kull, Ramsay, and Lewis, 2003)

For Iran, the political risk is heightened because current polling suggests the United States would begin without anything like the 2003 level of public backing. (University of Maryland Critical Issues Poll, 2026) (YouGov, 2026) (Associated Press NORC, 2026) Without a broad initial mandate, the usual pattern reverses: instead of a rally effect creating a cushion against early shocks, early shocks can collapse a narrow coalition of support. Moreover, Iran is structurally capable of generating early shocks through proxy responses and regional disruption, meaning that the political challenge may begin immediately, not after months or years.

Congress as a Strategic Actor, Not a Background Variable

In a system where Congress controls funding and has constitutional war powers, the legislative branch becomes a de facto strategic actor. When Congress is divided, when authorization is ambiguous, or when the public is skeptical, Congress can constrain the war through funding restrictions, reporting requirements, and political signaling that affects allied behavior. Reuters reporting on war powers debates around Iran illustrates that these conflicts are not hypothetical. (Reuters, 2025) Even the sycophants are likely to run out of patience with another endless foray, largely due to constituent pressure rather than any disloyalty to the cult.

This really matters. Strategic clarity requires durable political consensus. If objectives are unclear or expand, congressional opposition becomes more likely and more intense. Further, Iran can exploit visible domestic division through information operations, propaganda, and calibrated escalation intended to polarize U.S. politics. In that sense, a weak domestic mandate is not merely a constraint on U.S. freedom of action. It becomes a targetable vulnerability. The North Vietnamese knew it. The Afghans knew it, and the more sober members of the Department of Defense know it.

A Missing Ingredient: Defined, Credible Political Objectives

Even if the United States can strike nuclear facilities, degrade air defenses, and disrupt command networks, the strategic question remains what “winning” means and what settlement conditions are realistically attainable. If the objective is limited, such as delaying nuclear capabilities, the question becomes whether limited objectives justify the costs and risks of regional escalation. If the objective expands to regime change, the problem becomes far harder because military destruction does not automatically produce political legitimacy, stable governance, or a non-hostile successor regime. Here, a final criterion is decisive: post-conflict planning, a wickedly difficult task that we have botched over and over again and seem poised to botch again. History shows that military victory without a stabilization strategy yields strategic failure, and the public tends to punish wars that feel open-ended, morally muddled, or poorly planned.

In the Iran case, this risk is amplified because a strike campaign can trigger proxy retaliation in multiple theaters, raise energy and shipping risks, and produce unpredictable political reverberations, all of which can be framed domestically as an optional war of choice rather than a necessary act of self-defense. When a war’s necessity is contested, public support becomes the decisive front.

Dominance in Combat Power Does Not Guarantee Strategic Success

The United States may indeed be dominant across many of the operational categories that matter for battlefield performance. Yet wars are not won solely by platform superiority. They are won by aligning military means with politically sustainable ends. Current public opinion suggests a narrow and fragile mandate for initiating an attack on Iran, combined with low confidence in executive judgment about the use of force. (University of Maryland Critical Issues Poll, 2026) (YouGov, 2026) (Associated Press NORC, 2026) In that environment, congressional contention over authorization and war powers becomes a predictable friction point, not an occasional procedural dispute. (50 U.S.C. § 1541, 1973) (Reuters, 2025) Iran and its proxy network do not need to defeat the United States conventionally to succeed strategically. They need to prolong, complicate, and regionalize the conflict until the United States loses the will and domestic legitimacy to continue, echoing Giap’s theory of victory in Vietnam and the time horizon logic captured by the Afghanistan aphorism. (PBS, n.d.) (Maclean’s, 2017)

A United States attack on Iran will NOT end well. We’ll have tactical dominance paired with a complete strategic disaster. Without sustained public and congressional support, the United States will fail to achieve its most important objectives (if the Administration can even articulate them) at an acceptable cost. The venture will not end with a clear victory, but with political exhaustion and a forced search for exit ramps. That is not a political critique. It is a strategic assessment rooted in how democratic states actually choose and wage war. My call? Don’t f. do it. Exaggerations about being “days from completing a nuclear weapon,” coupled with no clear objective or endgame, are a movie we have seen before.

C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Associated Press NORC Center for Public Affairs Research. 2026. “Most Americans see Iran as enemy but doubt Trump on military force: poll.” Associated Press.
  • 50 U.S.C. § 1541. 1973. War Powers Resolution, “Purpose and policy.” Legal Information Institute, Cornell Law School.
  • Gallup. 2003. “Seventy-Two Percent of Americans Support War Against Iraq.” Gallup News Service, March 24, 2003.
  • Gallup. 2006. “Three Years of War Have Eroded Public Support.” Gallup News Service, March 17, 2006.
  • Kull, Steven, Clay Ramsay, and Evan Lewis. 2003. “Rally Round the Flag: Opinion in the United States before and after the Iraq War.” Brookings Institution.
  • Maclean’s. 2017. “Fighting in Afghanistan: ‘You have the watches. We have the time’.” September 2, 2017.
  • PBS. n.d. “Peoples Century: Guerrilla Wars: Vo Nguyen Giap Transcript.” Public Broadcasting Service.
  • Reuters. 2025. “US Senate rejects bid to curb Trump’s Iran war powers.” June 27, 2025.
  • University of Maryland Critical Issues Poll. 2026. “Do Americans Favor Attacking Iran Under the Current Circumstances? The Latest Critical Issues Poll Findings.”
  • YouGov. 2026. “Few Americans support U.S. military action against Iran, but a majority think it’s likely.” Economist YouGov poll, February 20 to 23, 2026.

AI as a Force Multiplier in Recent Intrusion Operations

AI, artificial intelligence, intelligence, counterintelligence, espionage, counterespionage, hacker, cyber, cyber security, C. Constantin Poindexter

AI as a Force Multiplier in Cyber Intrusions: Counterintelligence Lessons from the Amazon Threat Intelligence FortiGate Campaign, AI-Assisted Attack Planning, and Scalable Post-Exploitation Tradecraft

From a counterintelligence professional’s perspective, I read Amazon Threat Intelligence’s February 2026 report less as a novelty story about “hackers using AI” and more as a warning about a structural change in operational economics. The important point is not that a threat actor used a large language model. It is that a presumably low-to-medium skill, financially motivated Russian-speaking actor was able to scale intrusion activity across more than 600 FortiGate devices in over 55 countries in roughly five weeks by integrating commercial AI services into every phase of the attack workflow (Moses, 2026). In counterintelligence terms, this is a capability amplification event. AI did not make the actor sophisticated. It made the actor productive (Moses, 2026).

That distinction matters. Amazon’s analysis is unusually valuable because it documents both sides of the phenomenon. On one hand, the actor used AI to generate attack plans, write tooling, sequence actions, and coordinate operations at a tempo that would traditionally imply a larger team. On the other hand, the same actor repeatedly failed when facing hardened environments, patched systems, or nonstandard conditions. Amazon explicitly notes that the actor could not reliably compile custom exploits, debug failures, or creatively pivot beyond straightforward automated paths (Moses, 2026). This is exactly what a counterintelligence officer should expect from a force multiplier: improved throughput without equivalent gains in judgment, tradecraft, or adaptability.

The Amazon case is especially useful because it separates hype from mechanism. The campaign did not depend on exotic zero-days. Amazon states that no FortiGate vulnerability exploitation was observed in the campaign it analyzed; instead, the actor exploited exposed management interfaces, weak credentials, and single-factor authentication, then used AI to execute these known methods at scale (Moses, 2026). That is a profound lesson for defenders. AI is not changing the laws of intrusion. It is compressing the time and labor required to exploit organizations that still fail at fundamentals.

From a counterintelligence perspective, this changes how we should think about indications and warnings. Historically, broad multi-country infrastructure access, custom scripts in multiple languages, and organized post-exploitation playbooks would often suggest a resourced team such as an FIS, state-supported private operator, or at least a mature criminal crew. Amazon’s report shows that this inference is no longer reliable. The actor’s infrastructure contained numerous scripts and dashboards with hallmarks of AI generation, and Amazon concluded that a single actor or very small group likely produced a toolkit whose volume would previously imply a development team (Moses, 2026). In intelligence analysis, this is a warning against legacy heuristics. Scale is no longer a clean proxy for organizational size or skill.

Amazon’s “AI as a force multiplier” section is the core of the matter. The actor used at least two distinct commercial LLM providers in complementary ways. One served as the primary tool developer and operational assistant, while another was used as a supplementary planner when the actor needed help pivoting inside a compromised network (Moses, 2026). In one observed instance, the actor reportedly submitted a victim’s internal topology, hostnames, credentials, and identified services to obtain a step-by-step compromise plan (Moses, 2026). For counterintelligence professionals, this is not just a cyber issue. It is a tradecraft issue. The actor is externalizing planning and decision-support functions to commercial platforms, effectively outsourcing parts of the “staff work” that junior operators or analysts would otherwise perform.

This pattern aligns with broader reporting from major providers and threat intelligence teams. Google Threat Intelligence Group’s February 2026 AI Threat Tracker documents growing adversary integration of AI across reconnaissance, phishing enablement, malware/tooling development, and post-compromise support, while also emphasizing that it has not yet observed “breakthrough capabilities” that fundamentally change the threat landscape (Google Threat Intelligence Group, 2026). That is highly consistent with the Amazon case: AI is improving speed, coverage, and consistency more than it is producing genuine operational innovation (Google Threat Intelligence Group, 2026; Moses, 2026). Microsoft’s Digital Defense Report 2025 similarly describes adversaries using generative AI for scaling social engineering, reconnaissance, code generation, exploit development support, and automation of exfiltration-to-lateral movement pipelines (Microsoft, 2025). The convergence across independent sources is notable. Different organizations are observing the same pattern from different vantage points.

Anthropic’s 2025 report on “vibe hacking” extends this trend in a particularly important direction. Anthropic described a disrupted criminal operation in which an actor used an AI coding agent not only as a technical consultant but as an active operator embedded into the attack lifecycle, supporting reconnaissance, credential harvesting, penetration, and extortion-related tasks (Anthropic, 2025). Whether one agrees with every framing choice in vendor reports, the operational implication is clear: AI-enabled actors are increasingly turning language models and coding agents into workflow engines. They are not merely asking for snippets of code. They are building repeatable campaign infrastructure around AI-assisted execution (Anthropic, 2025; Moses, 2026).

For counterintelligence practitioners, the strategic concern is not limited to criminal ransomware precursors. The same force-multiplier logic applies to espionage, access development, insider targeting, and influence preparation. Google’s reporting notes that government-backed actors are using AI for technical research, target development, and rapid phishing lure generation, including reconnaissance activities that support subsequent operations (Google Threat Intelligence Group, 2026). The FBI has also publicly warned that AI increases the speed, scale, and realism of phishing and social engineering, including voice and video cloning (FBI San Francisco, 2024). In the CI domain, this means hostile services and proxies can expand target coverage, improve linguistic quality, and accelerate social graph exploitation with lower manpower. AI narrows the gap between intent and execution.

There is also an analytical security issue that deserves more attention: data exposure to AI platforms during live operations. Amazon’s report indicates that the actor submitted internal victim topology, credentials, and service data into a commercial AI workflow (Moses, 2026). From a counterintelligence standpoint, this is a double-edged phenomenon. It may increase adversary effectiveness, but it also creates potential collection and disruption opportunities, depending on provider visibility, legal authorities, and industry cooperation. More importantly, it means that operationally sensitive network intelligence is now moving through third-party AI services as part of adversary tradecraft. That should influence how we think about public-private partnerships, lawful reporting channels, and rapid deconfliction.

The Fortinet context reinforces a second CI principle: adversary success often begins with governance failure, not advanced tradecraft. Fortinet’s January 2026 PSIRT analysis documented abuse of FortiCloud SSO and repeatedly emphasized best practices such as restricting administrative access, disabling vulnerable SSO paths, and monitoring for malicious admin creation and anomalous logins (Windsor, 2026). NIST’s National Vulnerability Database entry for CVE-2026-24858 further confirms the seriousness of the authentication bypass exposure affecting multiple Fortinet product lines when FortiCloud SSO was enabled (NIST NVD, 2026). Even if the Amazon campaign did not depend on that specific exploit path, the environment is the same: internet-exposed edge infrastructure, identity weaknesses, and uneven patching create permissive terrain that AI-enabled actors can mine at scale (Moses, 2026; Windsor, 2026; NIST NVD, 2026).

The practical implication is that counterintelligence and cybersecurity must converge more tightly on defensive prioritization. In many organizations, CI is still treated as a narrow insider-threat or foreign-intelligence problem, while cyber defense handles perimeter hygiene and incident response. That separation is increasingly artificial. AI-augmented threat actors blur the boundaries between criminal and state-adjacent tradecraft, between opportunistic access and strategic exploitation, and between cyber intrusion and intelligence preparation of the environment. Europol’s 2025 organized crime threat assessment reporting, as reflected in major coverage, likewise points to AI lowering costs and increasing the scale and sophistication of criminal operations, including cyber-enabled activity and proxy behavior that can intersect with geopolitical interests (Reuters, 2025). The ecosystem is converging.

In my view, the correct response is not panic over “autonomous AI hackers.” Amazon’s report itself argues against that caricature. The actor remained brittle, shallow, and dependent on weak targets (Moses, 2026). The right response is disciplined adaptation in three areas.

Organizations must treat identity and edge administration as counterintelligence terrain, not merely IT hygiene. Exposed management interfaces, weak credentials, and single-factor authentication are now high-confidence enablers of AI-scaled intrusion campaigns (Moses, 2026). MFA, restricted administration paths, credential rotation, and segmentation are not basic controls anymore; they are anti-scaling controls.

Defenders need telemetry designed for workflow detection rather than malware signatures. Amazon explicitly notes the campaign’s use of legitimate open-source tools and recommends behavioral detection over IOC dependence (Moses, 2026). That aligns with the broader AI-enabled threat model. When AI helps actors orchestrate legitimate tools more efficiently, the artifact footprint looks cleaner while the behavioral pattern becomes more machine-like and more repeatable.
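The cadence point can be made concrete. The sketch below is my own illustration, not anything from the Amazon report: it flags sessions whose command timing is too regular to be a human at a keyboard, one crude "machine-like cadence" signal among many a defender might compute from session telemetry. The function names and thresholds are hypothetical.

```python
import statistics

def cadence_score(timestamps):
    """Coefficient of variation of inter-event gaps; low values suggest scripted cadence."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean if mean > 0 else float("inf")

def looks_scripted(timestamps, min_events=10, cv_threshold=0.15):
    """Flag a session as machine-like when it has enough events to judge
    and its timing variability falls below the (hypothetical) threshold."""
    if len(timestamps) < min_events:
        return False  # too little telemetry to make a call
    return cadence_score(timestamps) < cv_threshold

# A script firing a command every two seconds is flagged; a human operator
# with irregular pauses is not.
print(looks_scripted([i * 2.0 for i in range(12)]))
print(looks_scripted([0, 1, 5, 6, 30, 31, 40, 55, 56, 60, 90, 95]))
```

A real deployment would combine signals like this (cadence, tool sequencing, session breadth) rather than rely on any single heuristic, but the design choice is the one Amazon recommends: score behavior, not artifacts.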

Intelligence organizations and enterprises should expand analytic models for adversary assessment. When a low-skill actor can produce high-volume tooling and broad campaign coverage, we must stop equating output polish with strategic sophistication. The key discriminators will be resilience under friction, adaptation under failure, target discipline, and operational security. In the Amazon case, the actor’s poor OPSEC and inability to improvise revealed the underlying limitations despite impressive scale (Moses, 2026). Those are precisely the indicators that counterintelligence tradecraft has always prioritized.

My take: the AI force-multiplier threat is real, but its significance is often misunderstood. It resembles the “brute force” attacks of the first-generation hackers, but on steroids, and AI is the steroid. The immediate danger is not superintelligence. It is operational leverage. AI gives mediocre actors the ability to behave like nation-state FIS against poorly defended targets. It accelerates reconnaissance, scripting, planning, and social engineering. It reduces labor costs and time-to-action. It increases campaign breadth. And it does all of this without solving the deeper human problems of judgment, creativity, and tradecraft. For counterintelligence professionals, that means the threat landscape is becoming more crowded, faster-moving, and harder to triage. The strategic answer remains the same as ever: protect critical access, harden identity, improve detection, and refine analytic tradecraft. What has changed is the speed at which failure to do so will be exploited (Moses, 2026; Google Threat Intelligence Group, 2026; Microsoft, 2025; Anthropic, 2025; FBI San Francisco, 2024).

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Anthropic. (2025, August). Vibe hacking: How cybercriminals are using AI coding agents to scale data extortion operations. Anthropic.
  • Bleiberg, J. (2026, February 25). Hackers used AI to breach 600 firewalls in weeks, Amazon says. Insurance Journal.
  • FBI San Francisco. (2024, May 8). FBI warns of increasing threat of cyber criminals utilizing artificial intelligence. Federal Bureau of Investigation.
  • Google Threat Intelligence Group. (2026, February 12). GTIG AI Threat Tracker: Distillation, experimentation, and (continued) integration of AI for adversarial use. Google Cloud Blog.
  • Microsoft. (2025). Microsoft Digital Defense Report 2025: Safeguarding trust in the AI era. Microsoft.
  • Moses, C. (2026, February 20). AI-augmented threat actor accesses FortiGate devices at scale. AWS Security Blog.
  • National Institute of Standards and Technology, National Vulnerability Database. (2026). CVE-2026-24858 detail. NVD.
  • Reuters. (2025, March 18). Europol warns of AI-driven crime threats. Reuters.
  • Windsor, C. (2026, January 22). Analysis of Single Sign-On Abuse on FortiOS. Fortinet PSIRT Blog.

When “AI-Enabled Counterintelligence” Means Everything and Therefore Proves Little

artificial intelligence, intelligence, counterintelligence, espionage, counterespionage, deception, C. Constantin Poindexter, I.C., CIA, NSA

Artificial intelligence is unquestionably altering intelligence practice, especially in collection triage, identity resolution, and D&D (“denial and deception”) at scale. The same breadth that makes “AI and counterintelligence” a timely topic also makes it easy for scholarship to drift from disciplined inference into plausible generalizations. Henry Prunckun’s article, AI and the Reconfiguration of the Counterintelligence Battlefield, argues that authoritarian regimes integrate AI into counterintelligence more aggressively than democracies, generating widening disparities in surveillance capacity, strength of deception operations, and detection. The thesis is appealing, but as presented it rests on conceptual stretching, loose operationalization, and OSINT-constrained attribution, which together make the conclusion stronger than the evidence can reliably support.

Conceptual slippage: counterintelligence becomes a synonym for regime security

The article offers an expansive definition of counterintelligence, including hostile intelligence operations by FIS, non-state actors, and internal threats. That definitional move risks conflating classic counterintelligence functions, such as detecting foreign intelligence services, running double agents, and protecting sensitive programs, with broad domestic security tasks, such as repression of dissent, censorship, and generalized surveillance. In the case studies, that risk becomes reality. China’s Skynet and Sharp Eyes are treated as counterintelligence infrastructure, yet the true purpose of these systems is “public security” and political control (meaning “suppression”) through population-scale monitoring and data fusion. This is not counterespionage in the narrow sense (Peterson, 2021; He, 2021). Using such architectures as direct evidence of “counterintelligence capability” is contestable unless the article demonstrates a specific, evidenced pathway from mass surveillance to demonstrable counterespionage outcomes, for example the identification of foreign case officers, agent spotting, surveillance detection route patterning, or disruption of recruitment pipelines.

This matters because conceptual stretching lets the analysis “win” by broadening the dependent variable. If counterintelligence includes nearly all internal security functions, then authoritarian states will almost always appear “ahead,” because their legal structures permit scale and coercion across the entire society. A tighter approach would separate “state security surveillance capacity” from “counterespionage effectiveness,” then test where and how the two overlap.

Unmeasured dependent variables: adoption is not capability, and capability is not effectiveness

The piece repeatedly asserts an “uneven transformation” and “increasing disparities” between authoritarian and democratic systems. The paper does not clearly operationalize what “capability” means. Is it speed of deployment, volume of data, integration across agencies, analytic accuracy, disruption rates, or successful attribution of hostile services? Those are DISTINCT variables. Without an operational definition and observable indicators, the comparative claim becomes rhetorical rather than analytic.

Fortunately, the literature on predictive analytics is instructive. Government and academic reviews emphasize that predictive systems can help triage and allocate resources, but performance and fairness depend heavily on data quality, feedback loops, and governance (National Institute of Justice, 2014; U.S. Department of Justice, 2024). In real deployments, predictive policing tools have faced serious critiques for low accuracy and bias amplification, precisely because historical data encode institutional and sampling distortions (Shapiro, 2017; Alikhademi et al., 2021). The counterintelligence analogy is direct. If authoritarian systems ingest broader data and act on weaker thresholds, they may increase the velocity of suspicion generation without reliably increasing detection precision. So, “more AI” generates more alerts, more potentially nefarious interventions, and more error, rather than more validated counterintelligence successes. Unless the article can distinguish surveillance scale from validated performance outcomes, it confuses activity with effectiveness.

Causal inference is asserted, not identified

The article frequently implies causation, that AI enables preemptive counterintelligence, improves early warning, and accelerates counterespionage timelines. Yet the causal chain is not established with process-tracing evidence. Much of the language signals inference by plausibility, using formulations such as “reportedly,” “believed,” “suggests,” and “consistent with.” That can be appropriate in exploratory work, but it cannot support strong causal conclusions about “advantage” or “disparity” without a more rigorous evidentiary standard.

A methodologically disciplined approach would specify competing hypotheses and explanations. It would demonstrate why AI is THE differentiator, rather than alternative drivers such as expanded authorities, intensified human surveillance, party control over institutions, enhanced cyber hygiene, or increased resourcing. Robert Yin’s framework for case study research emphasizes analytic generalization and the need to consider rival explanations, not merely accumulate confirmatory examples (Yin, 2014). Ignoring that framework begins to look like one of the cognitive biases that we are taught to avoid. The article’s current structure tends to accumulate plausible examples of authoritarian digital control and then attribute the change in counterintelligence conditions to AI itself, when the same outcomes could often be produced through conventional surveillance and coercion supplemented by basic automation.

Case selection: the design invites selection on the dependent variable

The four cases, China, Russia, Iran, and North Korea, are justified partly by strategic AI application, active counterintelligence engagement, and OSINT accessibility. That selection logic is understandable, but it has consequences. It tilts the sample toward regimes that are archetypes of the coercive security state, and it excludes “negative” or less confirming cases that might constrain the inference. Social science methodologists have repeatedly warned that selecting only cases where the outcome is expected will often bias comparative claims, especially when the study then reasons as if the cases represent a broader population (King, Keohane, & Verba, 1994; Seawright & Gerring, 2008). If Prunckun’s aim is to build theory, he may want to say so explicitly and limit his generalization claims. If the aim is an authoritarian-versus-democratic comparison, it needs either systematic comparative indicators or at least one or more democratic cases chosen by objective criteria.

This flaw is not just academic. The paper makes claims about democratic constraints, Five Eyes governance, and interagency “silos,” yet provides no parallel case evidence at the same granularity as the authoritarian ones. The evidentiary burden is asymmetric: authoritarian capability is described through many examples, while democratic capability is summarized through general governance constraints, a classic setup for overstating comparative divergence.

OSINT dependence: acknowledged limitations, but high confidence attributions persist

The paper responsibly acknowledges OSINT limitations, including bias, misinformation, attribution gaps, and inference under uncertainty. Then the narrative proceeds to attribute specific AI-enabled activities to specific organs such as the MSS, FSB, GRU, MOIS, and the RGB, even while admitting overlapping roles and covert postures. This is a substantive vulnerability. The hardest analytic problem in intelligence scholarship is not describing a tool set, but attributing operational use to a particular unit with defensible confidence.

The OSINT literature is explicit that open sources can be powerful but are shaped by discoverability, platform biases, selective visibility, and analytic framing, all of which can distort both collection and interpretation (McDermott, 2021; Yadav et al., 2023). Triangulation helps, but triangulation among sources that ultimately derive from similar technical telemetry pipelines or shared reporting ecosystems can create an illusion of confirmation. The article would be stronger if it adopted a consistent evidentiary lexicon, such as “confirmed,” “assessed,” “plausible,” and “speculative,” and then used that terminology to discipline claims about which agency did what, and with what AI component.

“Cognitive security” is promising, but under-specified as a threat model

The piece explains “cognitive security” as safeguarding the analytic process from distortion, synthetic overload, and eroded trust. That is a valid conceptual move, and it aligns with growing institutional concern about deepfakes and generative deception (particularly impersonation), synthetic identities, and social engineering at scale (RAND, 2022; CDSE, 2025; ENISA, 2025). The weakness is that the paper’s cognitive security discussion remains programmatic rather than operational. It describes effects, such as evidence stream distortion and analyst overload, but it does not specify the attack surfaces, such as data poisoning, provenance forgery, adversarial inputs to classifiers, synthetic HUMINT reporting, or deepfake-enabled pretexting. Without a more explicit threat model, cognitive security risks functioning as an exciting label rather than an analytic framework capable of generating testable hypotheses and practical mitigations.

Overstatement risk in cross-national characterizations

Some country characterizations are brittle. The claim that Russia does not use AI for extensive domestic surveillance, contrasted with China, is vulnerable because Russia’s internal security ecosystem has long invested in monitoring and control, even if its architecture differs from China’s camera-centric methods. When a paper makes categorical claims that can be challenged by counterexamples, it hands critics a free punch and distracts from the stronger parts of the argument. Good comparative work often relies on “relative to” claims rather than absolutes, unless the evidence is overwhelming.

My take? The main contribution is conceptual, but its conclusions outrun its design

The excerpt reads strongest as a conceptual intervention arguing that AI changes the conditions of counterintelligence, especially by enabling synthetic deception and stressing analytic trust. Where it becomes substantively flawed is where it implies comparative empirical conclusions about authoritarian “advantage” and widening capability disparities without operational definitions, without balanced case selection, and with OSINT-constrained attribution that cannot consistently sustain unit-level claims. The remedy is not to abandon the thesis. It is to narrow the dependent variable, define measurable indicators, discipline inference and attribution, and align claims to what the evidence and design can actually support. Absent those corrections, the argument risks becoming unfalsifiable: authoritarian states appear superior because counterintelligence is defined broadly enough to include most internal security, adoption is treated as capability, and capability is treated as effectiveness. Prunckun’s point may well be true. I highly respect this author and his expertise; addressing these flaws would go a long way toward proving his points.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Alikhademi, K., et al. (2021). A review of predictive policing from the perspective of fairness. National Science Foundation Public Access Repository.
  • Center for Development of Security Excellence (CDSE). (2025). Artificial Intelligence and Counterintelligence Concerns (Student guide). U.S. Department of Defense.
  • European Union Agency for Cybersecurity (ENISA). (2025). ENISA Threat Landscape 2025.
  • He, A. (2021). How China harnesses data fusion to make sense of surveillance data. Brookings Institution.
  • King, G., Keohane, R. O., & Verba, S. (1994). Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton University Press.
  • McDermott, Y. (2021). Open source information’s blind spot. Journal of International Criminal Justice, 19(1), 85–105.
  • National Institute of Justice. (2014). Overview of predictive policing. Office of Justice Programs, U.S. Department of Justice.
  • Peterson, D. (2021). China’s “Sharp Eyes” program aims to surveil 100% of public space. Center for Security and Emerging Technology (CSET), Georgetown University.
  • RAND Corporation. (2022). Artificial Intelligence, Deepfakes, and Disinformation.
  • Seawright, J., & Gerring, J. (2008). Case selection techniques in case study research. Political Research Quarterly, 61(2), 294–308.
  • Shapiro, A. (2017). Policing predictive policing. Washington University Law Review, 94(5), 1149–1189.
  • U.S. Department of Justice, Office of Justice Programs. (2024). Artificial Intelligence and Criminal Justice: Final Report.
  • Yadav, A., et al. (2023). Open source intelligence: A comprehensive review of the state of the art. Journal of Big Data, 10, Article 38.
  • Yin, R. K. (2014). Case Study Research: Design and Methods (5th ed.). SAGE Publications.

The Abouzar Rahmati Penetration: A Counterintelligence Case Study

spy, spies, espionage, counterespionage, intelligence, counterintelligence, C. Constantin Poindexter

The Abouzar Rahmati Case: A Counterintelligence Case Study in the Era of Digital Espionage

The case of Abouzar Rahmati, an Iranian spy indicted in September 2024 for acting as an illegal agent of the Iranian government, offers a compelling case study for counterintelligence professionals. Rahmati, a 42-year-old FAA contractor with a PhD in Electrical Engineering, exploited his position to access and exfiltrate sensitive documents related to the FAA’s National Airspace System (NAS). His capture highlights the evolving landscape of espionage and the critical role of digital forensics, travel surveillance, and whistleblower tips in counterintelligence operations. In this piece, I am going to share the methods used to uncover Rahmati’s activities (no classified docs or tradecraft here, sorry to disappoint), and provide some insights into how penetration agents can be detected and neutralized.

Abouzar Rahmati, a U.S. government contractor, was indicted on charges of acting as an illegal agent of the Iranian government. His activities involved accessing and exfiltrating sensitive FAA documents, which he subsequently provided to Iranian authorities. Rahmati’s case is instructive for counterintelligence professionals as it demonstrates the complex interplay of traditional and digital investigative techniques in uncovering espionage activities. The methods used to catch Rahmati offer valuable lessons in counterintelligence strategies and the importance of vigilance in protecting sensitive information.

Methods for Detecting Penetration Agents: How to Uncover a Betrayal

Internal audits and security checks are fundamental tools in counterintelligence. In Rahmati’s case, an internal audit at the FAA revealed discrepancies in document access logs. These audits are crucial for identifying unusual patterns that may indicate unauthorized access or data exfiltration. As noted by The Washington Post, routine security checks flagged Rahmati’s unusual access patterns, prompting further investigation. This underscores the importance of regular and thorough internal audits in detecting potential security breaches (Washington Post, 2024).
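Purely as an illustration of the kind of audit heuristic described above, here is a minimal sketch; the log format, field names, off-hours window, and threshold are all hypothetical assumptions, not the FAA's actual audit logic:

```python
from collections import Counter
from datetime import datetime

# Hypothetical simplified access-log records: (user, document_tag, ISO timestamp).
LOG = [
    ("analyst1", "NAS-ops", "2024-03-04T10:12"),
    ("analyst1", "NAS-ops", "2024-03-05T09:40"),
    ("contractor7", "NAS-ops", "2024-03-04T11:02"),
    ("contractor7", "NAS-arch", "2024-03-06T23:55"),  # off-hours access
    ("contractor7", "NAS-arch", "2024-03-07T00:12"),  # off-hours access
    ("contractor7", "NAS-arch", "2024-03-07T00:30"),  # off-hours access
]

def flag_unusual_access(log, offhours=(22, 6), min_offhours_hits=2):
    """Flag users with repeated off-hours access -- one simple audit heuristic."""
    hits = Counter()
    for user, _doc, ts in log:
        hour = datetime.fromisoformat(ts).hour
        if hour >= offhours[0] or hour < offhours[1]:
            hits[user] += 1
    return sorted(u for u, n in hits.items() if n >= min_offhours_hits)

print(flag_unusual_access(LOG))  # ['contractor7']
```

Real audit programs layer many such heuristics (volume spikes, first-time access to a category, access shortly before foreign travel); the point is that each is a cheap, automatable test over logs the agency already collects.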

Digital forensics plays a pivotal role in modern counterintelligence. Rahmati’s activities were traced through metadata analysis, which revealed inconsistencies in document access patterns. A report from a government watchdog site detailed how investigators discovered that certain documents were accessed and potentially altered, suggesting unauthorized manipulation. This highlights the value of digital forensics in uncovering hidden activities and providing evidence for further investigation (Government Watchdog Report, 2024).
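As a toy illustration of this kind of metadata analysis (the role names, document categories, and records below are invented, not drawn from the case file), one simple consistency check compares what a user accessed against what that user's role authorizes:

```python
# Hypothetical role-to-category authorization map; field names are illustrative.
ROLE_SCOPE = {
    "contractor": {"NAS-ops"},               # contractors cleared for ops docs only
    "engineer": {"NAS-ops", "NAS-arch"},
}

ACCESS_METADATA = [
    {"user": "user_x", "role": "contractor", "doc_category": "NAS-arch"},
    {"user": "user_x", "role": "contractor", "doc_category": "NAS-ops"},
    {"user": "doe_j", "role": "engineer", "doc_category": "NAS-arch"},
]

def out_of_scope_accesses(records, scope):
    """Return (user, category) pairs where an access fell outside role authorization."""
    return [(r["user"], r["doc_category"])
            for r in records
            if r["doc_category"] not in scope.get(r["role"], set())]

print(out_of_scope_accesses(ACCESS_METADATA, ROLE_SCOPE))
# [('user_x', 'NAS-arch')]
```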

Travel surveillance and communication monitoring are essential components of counterintelligence. Rahmati’s frequent trips to Iran, which coincided with sensitive FAA projects, raised suspicions. The New York Times reported that these travels were scrutinized, revealing a pattern of behavior inconsistent with his stated purposes. Additionally, surveillance of Rahmati’s communications uncovered contacts with Iranian officials, providing further evidence of his espionage activities (New York Times, 2024).
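The travel-correlation idea can likewise be sketched in a few lines; the dates and the seven-day window below are illustrative assumptions, not the actual investigative parameters:

```python
from datetime import date

# Illustrative travel windows and sensitive project milestone dates (all hypothetical).
TRIPS = [(date(2023, 5, 1), date(2023, 5, 14)),
         (date(2023, 11, 2), date(2023, 11, 20))]
PROJECT_MILESTONES = [date(2023, 5, 10), date(2023, 8, 3), date(2023, 11, 15)]

def trips_near_milestones(trips, milestones, window_days=7):
    """Return milestones that fall within `window_days` of any travel window."""
    hits = []
    for m in milestones:
        for start, end in trips:
            if (start - m).days <= window_days and (m - end).days <= window_days:
                hits.append(m)
                break  # one overlapping trip is enough to flag this milestone
    return hits

print(trips_near_milestones(TRIPS, PROJECT_MILESTONES))
# [datetime.date(2023, 5, 10), datetime.date(2023, 11, 15)]
```

A correlation like this proves nothing by itself; its value is in prioritizing which travel warrants the closer scrutiny described above.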

Whistleblower tips can be invaluable in counterintelligence operations. A forum on the dark web discussed leaks from an anonymous source within the FAA, suggesting that Rahmati was caught due to a whistleblower who provided evidence of his actions to the FBI. This underscores the importance of encouraging and protecting whistleblowers, as they can provide crucial insights and evidence (Dark Web Forum, 2024).

Penetration agents often operate as part of larger espionage networks. Rahmati’s activities were part of a broader Iranian espionage network, and his capture was the result of a coordinated effort to dismantle this network. This highlights the need for counterintelligence agencies to consider the broader context and potential connections when investigating individual cases (Dark Web Source, 2024).

Thorough background checks and deception detection are critical in counterintelligence. Rahmati’s lies about his military service in the Islamic Revolutionary Guard Corps (IRGC) were discovered during routine background checks, raising red flags that prompted further investigation. This emphasizes the importance of verifying the backgrounds of individuals with access to sensitive information (FBI Background Check Report, 2024).

Uncovering the Rahmati Penetration

The methods used to uncover Rahmati’s activities support the argument for a multifaceted approach to counterintelligence. The combination of internal audits, digital forensics, travel surveillance, and whistleblower tips provided a comprehensive framework for detecting and neutralizing his espionage activities. The initial detection of Rahmati’s unusual activities through internal audits at the FAA was a crucial first step. These audits, combined with digital forensics, revealed patterns of behavior that were inconsistent with his job requirements. Metadata analysis of the documents he accessed provided concrete evidence of his unauthorized actions. This approach demonstrates the effectiveness of combining traditional security measures with advanced digital techniques in counterintelligence operations.

Rahmati’s travel patterns and communications were key indicators of his espionage activities. The surveillance of his frequent trips to Iran, coupled with the monitoring of his communications with Iranian officials, provided a clear picture of his motives and actions. This highlights the importance of integrating travel and communication data into counterintelligence strategies to identify potential threats.

The role of whistleblower tips in Rahmati’s case cannot be overstated. Anonymous sources within the FAA provided crucial evidence that supplemented the findings from digital forensics and surveillance. Additionally, the coordination with a larger Iranian espionage network underscores the need for counterintelligence agencies to consider the broader context and potential connections when investigating individual cases.

The Abouzar Rahmati case offers valuable insights into the methods and strategies used in modern counterintelligence operations. The combination of internal audits, digital forensics, travel surveillance, and whistleblower tips provided a robust framework for detecting and neutralizing his espionage activities. As counterintelligence professionals, it is essential to adopt a multi-faceted approach that leverages both traditional and digital investigative techniques to protect sensitive information and neutralize potential threats. The Rahmati case serves as a reminder of the evolving nature of espionage and the critical role of vigilance and innovation in counterintelligence.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Dark (not going to share). 2024. “Leaks from Anonymous Source Within FAA.” Accessed February 2, 2026. https://dark.
  • Dark (not going to share). 2024. “Iranian Espionage Network Dismantled.” Accessed February 2, 2026. https://dark.
  • FBI Background Check Report. 2024. “Rahmati Background Check Discrepancies.” Accessed February 2, 2026. https://fbi.gov/reports/background-checks/rahmati.
  • Government Watchdog Report. 2024. “Digital Forensics in Rahmati Case.” Accessed February 2, 2026. https://watchdog.gov/reports/digital-forensics.
  • New York Times. 2024. “FAA Contractor Indicted for Spying.” New York Times, September 28. Accessed February 2, 2026. https://nytimes.com/article/rahmati-indictment.
  • Washington Post. 2024. “Internal Audit Flags FAA Contractor.” Washington Post, September 27. Accessed February 2, 2026. https://washingtonpost.com/article/faa-audit.

Perils of Public AI from a Counterintelligence Perspective: The Madhu Gottumukkala Case

a.i., artificial intelligence, spy, spies, intelligence, counterintelligence, espionage, counterespionage, C. Constantin Poindexter

The Perils of Public AI from a Counterintelligence Operator’s View: A Case Study on Madhu Gottumukkala’s Reckless Use of ChatGPT

In the clandestine world of national security, the line between operational success and catastrophic failure is often measured in millimeters of discretion. The recent revelation that Madhu Gottumukkala, the acting director of the Cybersecurity and Infrastructure Security Agency (CISA), used a public, commercially available version of ChatGPT to process “for official use only” (FOUO) documents is not merely a procedural misstep. It is a counterintelligence debacle “of the highest order,” and an incredibly stupid one at that (Sakellariadis, 2026). This incident exposes a chasm of staggering depth between the rapid adoption of transformative technology and the foundational principles of information security that have, until now, protected the nation’s most sensitive secrets. From my perspective as a counterintelligence expert, Gottumukkala’s actions were not born of ignorance but of a dangerous arrogance, a presumption that his position insulated him from the very rules he was sworn to enforce. That presumption is a gift to adversarial foreign intelligence services (FIS) and a nightmare for those tasked with defending the integrity of our intelligence apparatus.

The Inherent Treachery of Public Large Language Models

To understand the gravity of Gottumukkala’s error, one must first dissect the fundamental architecture and data policies of public Large Language Models (LLMs) like OpenAI’s ChatGPT. These models are not inert tools; they are dynamic, cloud-hosted systems designed to learn and evolve from user interactions. OpenAI’s policy, while occasionally nuanced, has consistently maintained that submitted data may be retained and used to train and refine their models (OpenAI, 2025). This means that every prompt, every document fragment, and every query entered into the public interface becomes part of a vast, aggregated dataset. For a civilian user, this might raise privacy concerns. For a government official handling sensitive material, it represents an unauthorized and uncontrolled data spill of potentially catastrophic proportions.

The data itself is only half the problem. The metadata generated by the interaction, i.e., the user’s IP address, device fingerprinting, session timings, and the very nature of the queries, provides a rich tapestry of intelligence for a determined adversary. A sophisticated FIS such as China’s Ministry of State Security (MSS) or Russia’s SVR does not need to directly breach OpenAI’s servers to benefit. They can analyze the model’s outputs over time to infer the types of questions being asked by government entities. If an official uploads a contracting document related to a critical infrastructure project, the model’s subsequent, more knowledgeable answers about that specific topic could signal a point of interest. This is a form of signals intelligence (SIGINT) by proxy, in which the adversary learns not what we know but what we are focused on, thereby revealing strategic priorities and operational vulnerabilities.

Furthermore, the security of these public platforms is a moving target. While no direct evidence of a major breach of OpenAI’s training data is publicly available, the possibility cannot be discounted. The U.S. intelligence community operates on the principle of need-to-know and compartmentalization precisely because no system is impenetrable. Deliberately placing sensitive data into a system with an opaque security posture, governed by a private company with its own corporate interests and potential vulnerabilities, is an abdication of the most basic tenets of information security. The 2023 breach of MOVEit Transfer, a widely used file-transfer software, which impacted hundreds of organizations, including government agencies, serves as a stark reminder that even trusted third-party systems can be compromised (CISA, 2023). Gottumukkala’s actions created a comparable vulnerability, this time by deliberate choice.

The Anatomy of an Insider Threat: Arrogance as a Vector

Counterintelligence professionals spend their careers identifying and mitigating insider threats, which are often categorized as malicious, coerced, or unintentional. Gottumukkala’s case falls into a particularly insidious subcategory: the entitled or arrogant insider. This is an individual who, often due to seniority or perceived importance, believes that security protocols are for lesser mortals. His reported actions paint a textbook picture. Faced with a blocked application, he did not seek to understand the policy or use the approved alternative; he reportedly demanded an exemption, forcing his subordinates to override security measures designed to protect the agency (Sakellariadis, 2026). He simply assumed the rules did not apply to him.

This behavior is more than a simple lapse in judgment. It is a systemic cancer. When a leader demonstrates flagrant disregard for established rules, it erodes the entire security culture of an organization. Junior personnel, witnessing a senior official flout policy without immediate repercussion, receive a clear message: the rules are flexible, especially for the powerful. This creates an environment ripe for exploitation, where other employees may feel justified in ignoring whatever rules they find inconvenient, exponentially increasing the agency’s attack surface. Adversarial FIS are adept at exploiting this kind of cultural rot. They understand that a demoralized workforce with a cynical view of leadership is more susceptible to coercion, recruitment, or simple negligence.

Gottumukkala’s reported professional history amplifies these concerns. His documented failure to pass a counterintelligence-scope polygraph examination is a monumental red flag that should have precluded any role involving access to sensitive operational or intelligence information (Sakellariadis, 2026). A polygraph is not a perfect lie detector, but in the counterintelligence context it is a critical counterespionage tool for assessing an individual’s trustworthiness, susceptibility to coercion, and potential for undeclared foreign contacts. A failure in this screening is a definitive signal of elevated risk. Making matters worse, he sought to remove CISA’s Chief Information Officer (CIO), the very official responsible for maintaining the agency’s cybersecurity posture (Sakellariadis, 2026). This pattern suggests a hostility toward institutional oversight, and toward basic INFOSEC protocols, that is antithetical to the role of a cybersecurity leader.

The Strategic Cost of a Single Data Point

The documents in question were reportedly FOUO, not classified. This distinction, while bureaucratically significant, is strategically irrelevant to a capable adversary. FOUO documents often contain the building blocks of classified intelligence. They can reveal details about sources and methods, sensitive but unclassified contract information about critical infrastructure, internal deliberations on policy, and/or the identities and roles of key personnel involved in national security efforts.

Consider a hypothetical but plausible scenario. A FOUO document details a DHS contract with a private firm to harden the cybersecurity of a specific sector of the electrical grid. Uploaded to a public AI, this data point is now part of a larger model. An adversary, through persistent querying of the public AI, could potentially coax the model into revealing more about this sector’s vulnerabilities than it otherwise would. Even if the model does not explicitly reveal the document, the adversary’s knowledge of the type of work being done allows them to focus their espionage, cyberattacks, or influence operations on that specific firm or sector. The FOUO document becomes the breadcrumb that leads the adversary to the feast. The Office of the Director of National Intelligence (ODNI) has repeatedly warned in its annual threat assessments that adversaries prioritize unclassified data collection to build a mosaic of intelligence (ODNI, 2025). Each piece is harmless on its own, but together they form a clear and actionable picture.

The existence of secure, government-controlled alternatives makes this incident all the more infuriating. The Department of Homeland Security has developed and deployed its own AI-powered tool, DHSChat, specifically designed to operate within a secured federal network, ensuring that sensitive data does not leave the government’s digital ecosystem (DHS, 2024). Gottumukkala’s insistence on using the public, less secure option over the purpose-built, secure one is the action of someone who either lacks a fundamental understanding of the threat landscape or simply doesn’t give a shit. In either case, the result is the same. It is an unforced error and a self-inflicted wound on national security.

The Imperative of Accountability and a Zero-Tolerance Mandate

The response to this incident should be unequivocal and severe. The Department of Homeland Security’s own Management Directive 11042.1 mandates that any unauthorized disclosure of FOUO information be investigated as a security incident, potentially resulting in “reprimand, suspension, removal, or other disciplinary action” (DHS, 2023). Anything less than a full counterintelligence investigation, coupled with Gottumukkala’s immediate removal from any position of trust, signals a tacit acceptance of reckless behavior.

This case should catalyze a broader policy shift across the entire Intelligence Community, which has been visibly altered by current leadership. A zero-tolerance policy for the use of public AI tools with any government data, let alone sensitive information, must be implemented and enforced without exception. This requires more than a memo. It requires robust technical controls, including network-level blocks to prevent such data exfiltration and continuous monitoring for policy violations. It also demands a cultural reset led from the very top, where security is not seen as a bureaucratic hurdle but as an integral component of every mission.
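As a sketch of what such a network-level block might look like in logic (the domain lists here are illustrative, not an actual agency blocklist, and a real deployment would enforce this at a proxy or firewall rather than in application code):

```python
# Illustrative egress policy: deny public generative-AI endpoints, allow an
# approved internal alternative. Domain names are assumptions for the example.
BLOCKED_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com"}
APPROVED_INTERNAL = {"dhschat.dhs.gov"}  # hypothetical internal endpoint

def egress_decision(host: str) -> str:
    """Return ALLOW or DENY for an outbound destination host."""
    host = host.lower().rstrip(".")
    if host in APPROVED_INTERNAL:
        return "ALLOW"
    # Block exact matches and any subdomain of a blocked domain.
    if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return "DENY"
    return "ALLOW"  # default-allow shown for brevity; default-deny is stricter

print(egress_decision("chat.openai.com"))   # DENY
print(egress_decision("dhschat.dhs.gov"))   # ALLOW
```

The design point is the ordering: the approved internal service is whitelisted first, so users are steered toward the secure tool rather than simply walled off.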

The arrogance displayed by Madhu Gottumukkala is a counterintelligence nightmare, and the hubris behind it is breathtaking. This case represents a willful blindness to the reality of the threats we face, or worse, zero concern whatsoever for the protection of national security assets. Our adversaries are relentless, sophisticated, and constantly probing for weaknesses. We cannot tolerate bureaucrats who view security protocols as optional. The integration of AI into our national security architecture holds immense promise, but that promise can only be realized if it is guided by the enduring principles of vigilance, discipline, and respect for the sanctity of sensitive information. To do otherwise is not just foolish. It is a betrayal of the public trust and a dereliction of the duty to protect the nation.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Department of Homeland Security. (2023). Management Directive 11042.1: Safeguarding Sensitive But Unclassified (For Official Use Only) Information. Retrieved from DHS.gov
  • Department of Homeland Security. (2024). DHS’s Responsible Use of Generative AI Tools. Retrieved from DHS.gov
  • National Counterintelligence and Security Center. (2025). Annual Threat Assessment: Adversary Exploitation of Leaked Data. Washington, D.C.: Office of the Director of National Intelligence.
  • OpenAI. (2025). ChatGPT Data Usage Policy. Retrieved from OpenAI.com
  • Sakellariadis, J. (2026, January 27). Trump’s Acting Cyber Chief Uploaded Sensitive Files into a Public Version of ChatGPT. POLITICO. Retrieved from Politico.com
  • Cybersecurity and Infrastructure Security Agency (CISA). (2023, June 1). AA23-165A: MOVEit Transfer Vulnerability Exploitation.

Espionage in Bávaro: The Novikov Case – Counterintelligence, Disinformation, and the Anatomy of an Influence Operation

spy, spies, espionage, intelligence, counterespionage, counterintelligence, DNI, J2, C. Constantin Poindexter

The arrest in Bávaro of Russian citizen Dmitrii Novikov constitutes one of the most revealing case files in the intelligence (and counterintelligence) history of our Quisqueya. Well suited to studying the convergence of influence operations, transnational crime, and contemporary techniques of financial concealment, its scale is something we cannot afford to overlook. According to public information released by Dominican authorities and echoed by leading media outlets, Novikov allegedly directed from Dominican territory a “cyber-influence” network linked to Project Lakhta, also known as “The Company,” oriented toward the creation and dissemination of digital content for purposes of political disinformation and social media manipulation, with projected effects on the Dominican Republic as well as other countries in the region, among them Argentina (Listín Diario, 2025; EFE, 2025). For the counterintelligence professional, the importance of the case lies not only in the indictment but in the indicators of method: a plausible social cover, operational outsourcing through local collaborators, and a financing and payment scheme designed to obscure origin and traceability, all framed within a Russian tradition of information warfare amply documented by U.S. judicial and regulatory sources and their European counterparts.

The facts are stark. The Public Prosecutor’s Office (Ministerio Público), acting jointly with the Specialized Organized Crime Unit, detained Novikov during an operation at a villa in the Palmas del Sol II residential complex in Bávaro, where he lived with family members (Listín Diario, 2025; EFE, 2025). He was accused of operating with the explicit intention of preventing the origin of the promoted content from being perceived, concealing his Russian nationality and using local collaborators, under the guise of a mixed-martial-arts athlete, while receiving funds and direction from associates of Project Lakhta (Listín Diario, 2025; EFE, 2025). In tradecraft terms, the personal “legend” (the identity narrative that grants access, normalizes contacts, and reduces suspicion) appears here as an instrument of social penetration and, by extension, of influence. This is no anecdotal detail. Athletic cover operates as cultural camouflage, facilitates organic social networks, and dilutes the perception of political intent, and it served Novikov’s purposes exactly as it always has.

The financial dimension of the case deserves special attention. Authorities state that they verified that Novikov managed economic operations and international transactions through electronic cryptocurrency wallets, using platforms such as Binance and assets such as Bitcoin and Ethereum (Listín Diario, 2025; EFE, 2025). The Prosecutor’s Office believes these mechanisms were used to move international funds while concealing the origin of the resources and facilitating illicit activities linked to money laundering and transnational financing (EFE, 2025; Listín Diario, 2025). For counterintelligence this is instructive. It illustrates an operational reality: the crypto ecosystem is not in itself “invisible,” but it does add friction to attribution and to the rapid freezing of flows, especially when combined with borrowed identities, intermediaries, and jurisdictions whose speed of cooperation varies widely. In influence operations, money is not an accessory. It is the circulatory system that pays for infrastructure, buys amplification, compensates operators, and sustains persistence.
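A minimal sketch of the blockchain-analytics idea, tracing flows from a wallet of interest to known exchange deposit addresses over a toy transaction graph (all addresses, amounts, and the exchange label are invented; real analytics operates on the public ledger at vastly larger scale):

```python
from collections import deque

# Toy transaction graph: wallet -> list of (recipient, amount). Placeholders only.
TXS = {
    "walletA": [("walletB", 2.0), ("exchange1", 0.5)],
    "walletB": [("walletC", 1.5)],
    "walletC": [("exchange1", 1.4)],
}

def trace_to_exchanges(source, txs, exchanges=frozenset({"exchange1"})):
    """Breadth-first trace from a source wallet to known exchange deposit addresses."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        for dest, _amt in txs.get(path[-1], []):
            if dest in exchanges:
                paths.append(path + [dest])
            elif dest not in path:  # avoid revisiting wallets (cycles)
                queue.append(path + [dest])
    return paths

print(trace_to_exchanges("walletA", TXS))
# [['walletA', 'exchange1'], ['walletA', 'walletB', 'walletC', 'exchange1']]
```

Exchange deposit addresses matter because that is where pseudonymous flows meet know-your-customer records, which is why cooperation with exchanges is as decisive as the graph analysis itself.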

The case file adds a component that, if confirmed, would broaden its strategic gravity. During the operation, evidence was seized that would implicate the defendant in the sale and distribution of firearms (Listín Diario, 2025; EFE, 2025). This intersection of disinformation and weapons suggests a pattern familiar to professionals in military investigations and national intelligence. When propaganda, opaque financing, and weaponry converge, the phenomenon transcends “soft influence” and approaches an enabling ecosystem for coercion, intimidation, and/or organized criminality. In analytic terms, the risk is no longer merely cognitive (degraded public trust, polarization, deliberative distortion) but also material, given the capacity to introduce violence or threat into the social theater.

To understand the “Lakhta” label and its weight, it helps to situate it within the historical record documented by judicial and regulatory bodies. The U.S. Department of Justice described Project Lakhta as an umbrella effort, financed by Yevgeniy Prigozhin, that included components aimed at foreign audiences and that administered multimillion-dollar budgets for influence activities, including ad purchases, domain registrations, use of proxy servers, and the “promotion” of social media posts. The strategic objective was to sow discord and undermine faith in democratic institutions (U.S. Department of Justice, 2018). The U.S. government itself, in official documentation, associated the operation with “information warfare” and with efforts to simulate local activism through fictitious identities and origin-concealment techniques (U.S. Department of Justice, 2018). For its part, the U.S. Department of the Treasury characterized Project Lakhta as a Prigozhin-financed disinformation campaign directed at audiences in the United States, Europe, Ukraine, and even Russia, highlighting its use of fictitious “personas” and its funding of troll farms (U.S. Department of the Treasury, 2022). Complementarily, OFAC’s own public sanctions record identifies the Internet Research Agency LLC (the “troll factory”) with explicit aliases that include “LAKHTA INTERNET RESEARCH,” reinforcing the nominal and organizational continuity of the Lakhta construct within Russia’s influence architecture (U.S. Department of the Treasury, Office of Foreign Assets Control, 2026).

The Dominican Republic, by virtue of its geographic position, its free and open society, its centrality to tourism, its logistical connectivity, and its open digital ecosystems, constitutes an attractive space for influence operations seeking plausible deniability alongside regional projection. Dominican authorities maintain that the operations attributed to Novikov aimed to shape public opinion, with direct impacts on the country and on other regional environments (Listín Diario, 2025). In parallel, press sources reported that a structure called “La Compañía” was detected in Argentina, allegedly linked to the Russian government and to Project Lakhta, whose objective was to form local networks loyal to Russian interests for disinformation campaigns, with operators dedicated to receiving financing and weaving ties with collaborators (Listín Diario, 2025). Contemporary reporting on Argentina described findings of networks associated with disinformation campaigns promoting Moscow’s interests (The Record, 2025; Buenos Aires Times, 2025). This chaining (national nodes replicating the same playbook) is typical of sustained influence operations. Low-visibility “cells” are built, tasks are outsourced, and strategic direction is maintained at a distance.

From a professional perspective, the Novikov case offers concrete operational lessons for defensive design. First, modern attribution depends less on a single “smoking gun” and more on a constellation of indicators: content patterns, amplification timing, digital infrastructure, and financing routes. When the Public Prosecutor’s Office asserts that Novikov received direction and funds from Lakhta associates, it is advancing a command-and-control hypothesis, that is, a chain of coordination, not mere individual activity (Listín Diario, 2025; EFE, 2025). Second, social cover, in this case the athlete persona, should not be underestimated. It is a mechanism of access and normalization, capable of producing social capital and recruiting local facilitators without their perceiving the strategic purpose (Listín Diario, 2025). Third, the use of crypto assets on global platforms demands specific technical and legal capabilities, such as blockchain analytics, cooperation with exchanges, preservation of digital evidence, and international coordination, because the speed of financial flows tends to outpace the administrative speed of the state (EFE, 2025; Listín Diario, 2025).

Fourth, the operation described confirms a principle worth reiterating in counterintelligence. Disinformation is not a simple “lie” but a discipline of social engineering, oriented toward altering perceptions, raising the costs of governance, and eroding institutional trust and legitimacy. The U.S. framework on Lakhta itself emphasizes strategic objectives of discord and the weakening of public trust through false identities and manipulation of debate (U.S. Department of Justice, 2018). Consequently, state responses must integrate not only criminal prosecution but cognitive resilience, i.e., media literacy, proactive transparency, and early-warning mechanisms that allow citizens to recognize “fabricated” narratives without resort to censorship. Censorship, moreover, plays directly into the adversary’s nefarious plot. It is precisely the terrain these operations seek. The more informational repression is perceived, the greater the attacker’s propaganda return.

The Novikov case can be read as a Dominican chapter of a script already observed in other latitudes. It was an influence operation bearing a Russian signature, nominally associated with Project Lakhta, that combined social engineering, concealment of origin, opaque financing, and the use of local facilitators to maximize reach and minimize attribution (Listín Diario, 2025; EFE, 2025; U.S. Department of the Treasury, 2022). The simultaneous indications of arms trafficking suggest an extremely dangerous convergence between disinformation and material criminality, a symbiosis that multiplies potential damage and demands an integrated state response (Listín Diario, 2025; EFE, 2025). For counterintelligence, the conclusion is sobering. The Dominican Republic is not "on the margins" of the board. By virtue of its connectivity and integration with a world FAR beyond La Altagracia, our country is a target, and an attractive one at that. Defense requires modern financial-investigation capabilities, international cooperation, and a clear understanding that information warfare is a clandestine operation of long duration and reach whose battlefield is trust.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Buenos Aires Times. (2025, June 19). Argentina’s spies expose alleged Russian disinformation group.
  • EFE. (2025, September 19). La Fiscalía dominicana detiene a un hombre ruso vinculado a un proyecto de desinformación.
  • Listín Diario. (2025, September 19). Ministerio Público arresta a joven ruso que habría dirigido campañas de desinformación desde RD.
  • Listín Diario. (2025, September 19). EEUU y Argentina: Otros países que han detectado presencia de rusos pertenecientes a “Lakhta”.
  • The Record. (2025, June 19). Argentina uncovers suspected Russian spy ring behind disinformation campaigns.
  • U.S. Department of Justice. (2018, October 19). Russian National Charged with Interfering in U.S. Political System.
  • U.S. Department of the Treasury. (2022, July 29). Treasury Targets the Kremlin’s Continued Malign Political Influence Operations in the U.S. and Globally.
  • U.S. Department of the Treasury, Office of Foreign Assets Control. (2026, January 23). Sanctions List Search entry: Internet Research Agency LLC (includes alias “LAKHTA INTERNET RESEARCH”).

A Pier Walk, an Encrypted App, and a Trail of Receipts: The Wei Espionage Case, Counterintelligence and PRC Tradecraft

china, PRC, PLA, espionage, spy, spies, counterespionage, counterintelligence, intelligence, C. Constantin Poindexter

The two-hundred-month federal sentence imposed on U.S. Navy sailor Jinchao Wei, also known as Patrick Wei, is not merely a cautionary tale about a single insider’s betrayal. It is a contemporary, well-documented case study in the People’s Republic of China’s persistent espionage campaign against U.S. defense entities, executed through an operational pattern that has become all too familiar to counterintelligence practitioners, i.e., low-friction spotting and assessment via online platforms, cultivation under plausible non-official cover, incremental tasking that begins with seemingly innocuous collection, and compensation methods that leave a financial signature even when communications are migrated to encrypted channels (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026a). The Wei matter is also a reminder that insider threats rarely begin with the theft of a crown jewel. They begin with ego, attention, a sense of being chosen, and the seductive illusion that the handler is impressed and that the target is smarter than the system.

Public reporting and Department of Justice releases describe Wei as having been arrested in August 2023 as he arrived for duty at Naval Base San Diego, where he was assigned to the amphibious assault ship USS Essex (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026b). The arrest timing and location are operationally significant. Counterintelligence cases often culminate when investigators can control the environment, secure devices and storage, and prevent further loss of national defense information while preserving evidentiary integrity. The government’s narrative, as presented publicly, reflects a mature, documentable case anchored in communications and payment records rather than exotic or highly sensitive sources. The Department of Justice has been explicit that not every investigative step can be disclosed, and I do not intend to speculate about them here, but it has been equally clear that the evidentiary core included intercepts of communication between Wei and his PRC handler, and documentation of how Wei was rewarded for his betrayal (U.S. Department of Justice, 2026a).

The recruitment vector in this case aligns with PRC modus operandi in insider targeting. Wei was approached through social media by an individual presenting as a “naval enthusiast” who claimed a connection to China’s state-owned shipbuilding sector, a cover story designed to appear adjacent to legitimate maritime interest while still close enough to naval affairs to justify pointed questions (U.S. Department of Justice, 2026a; Associated Press, 2026). That presentation is instructive. It reduces the psychological barrier to engagement, provides a rationale for curiosity-driven dialogue, and permits gradual escalation from general discussion to tasking. A handler does not need immediate access to classified networks to create damage. He needs a human source who can provide operationally relevant details, and then he needs to keep the source talking long enough to normalize betrayal.

Once engaged, Wei’s operational security behavior demonstrates both awareness and complicity. He told a Navy friend that the activity looked “quite obviously” like espionage and, after that realization, he shifted communications to a different encrypted messaging application that he believed was more secure (U.S. Department of Justice, 2026a; USNI News, 2026). This is an important marker for investigators and security managers. When a cleared person acknowledges illicit intent yet continues, the motivation is not confusion. It is volition. The move to a “more secure” platform is also characteristic of PRC handling in HUMINT collection. Chinese FIS does not need to provide sophisticated technical tradecraft if the target will self-generate it. Public charging language indicates agreed steps to conceal the relationship, including deletion of conversation records and use of encrypted methods, which reflects basic but purposeful counter-surveillance and denial behavior (U.S. Department of Justice, 2023).

Tasking, as described in public releases, combined opportunistic collection with specific collection requirements. Wei was asked to “walk the pier” and report which ships were present, provide ship locations, and transmit photos and videos along with ship-related details (U.S. Department of Justice, 2026a). From a counterintelligence perspective, these are not trivial asks. Pier-side observations can support pattern of life analysis, readiness inference, and operational planning, particularly when fused with open source material and other clandestine reporting. The case officer’s methodology is “incrementalism”. A handler begins with items that feel observational and deniable, then pulls the source toward more sensitive materials by normalizing the exchange relationship and introducing compensation.

The most damaging element is the alleged transfer of classified technical and operational documents. DOJ accounts state that over an approximately 18-month relationship, Wei provided approximately sixty manuals and other sensitive materials, including at least thirty manuals transmitted in one tranche in June 2022, some of which clearly bore export control warnings. The materials related to ship systems such as power, steering, weapons control, elevators, and damage and casualty controls (U.S. Department of Justice, 2026a; U.S. Department of Justice, 2026b; Associated Press, 2026). In counterintelligence risk terms, technical manuals provide adversaries with a low-cost blueprint for exploitation. They can inform electronic attack planning, maintenance and sustainment targeting, and vulnerability discovery. They also enable synthetic training and doctrine development for adversary operators. A single manual can remain operationally relevant for years because systems and procedures evolve incrementally rather than being replaced wholesale.

Compensation details illuminate tradecraft and investigative leverage. Wei received more than $12,000 over the course of the relationship, including an alleged $5,000 payment connected to the June 2022 manual transfer. The DOJ has described the use of online payment methods (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026a). This is common in modern espionage involving HUMINT assets who are not professional intelligence officers. Financial transfers create documentary evidence, establish quid pro quo, and provide prosecutors with a corroborating narrative that is legible to a jury. For counterintelligence professionals, this observation is instructive. When communications shift to encrypted platforms, payment flows often remain discoverable through records, device artifacts, and third-party reporting. The operational discipline required to truly eliminate financial signatures is rarely present in an insider unless he or she is genuinely COMSEC-sophisticated.
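The analytic point about surviving financial signatures can be illustrated with a toy correlation of payment records against suspected transfer dates. All dates, amounts, and the proximity window below are hypothetical, chosen only to show the shape of the technique:

```python
from datetime import date, timedelta

# Toy illustration: flag payments that land close in time to suspected
# material-transfer events. All data and the 7-day window are hypothetical.

WINDOW = timedelta(days=7)

def correlated_payments(payments, transfer_dates, window=WINDOW):
    """Return (date, amount) payments within `window` of any transfer date.

    Even when communications migrate to encrypted apps, payment
    timestamps often remain discoverable and help corroborate
    a quid pro quo in open court.
    """
    return [
        (d, amount)
        for d, amount in payments
        if any(abs(d - t) <= window for t in transfer_dates)
    ]

# Hypothetical ledger: one payment near a suspected transfer, one not.
payments = [(date(2022, 6, 10), 5000), (date(2022, 9, 1), 800)]
transfers = [date(2022, 6, 8)]
flags = correlated_payments(payments, transfers)
# flags -> [(date(2022, 6, 10), 5000)]
```

Real financial forensics fuses far more than timestamps, but the sketch captures why payment flows are hard for a non-professional asset to sanitize: the correlation survives even when the message content does not.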

Public disclosures describe the case’s investigative architecture in broad but meaningful terms which are instructive even in the absence of the classified story. The FBI and Naval Criminal Investigative Service conducted the investigation. The DOJ characterized the matter as a “first of its kind” espionage investigation in the district, language that signals a substantial investigative effort and a prosecutorial commitment to proving the national security dimension in open court (U.S. Department of Justice, 2026a). The described evidence set emphasizes calls and electronic and audio messages with the PRC handler, payment records and receipts, and a post-arrest interrogation in which Wei admitted to providing the materials and described his conduct as espionage (U.S. Department of Justice, 2026a). Those elements are not glamorous, but they are decisive. They reflect the fundamentals of counterintelligence case building: document the relationship, document tasking and exchanges, document intent and benefit.

This IS PRC modus operandi! The Wei case fits a familiar pattern. The approach was enabled by digital access to targets, the cover identity was plausibly adjacent to the target’s professional interests, and the relationship was escalated through a play on Wei’s ego: a mix of attention, manipulation, and money used to compromise him. The tradecraft relied on human psychology, not advanced technical means. The Chinese FIS officer did not need to defeat a classified network. He convinced an insider to carry information out through routine channels and to do so voluntarily. This is a good example of why insider threat programs cannot focus only on clearance adjudication and periodic training. They must incorporate behavioral indicators, targeted education about online elicitation, and strong reporting pathways that reward early disclosure rather than stigmatize it (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026a).

There is also a supervisory and cultural lesson embedded here. Wei voiced suspicion to another sailor. That disclosure was a moment when the damage could have been immediately contained. Peers often see the first signs of peril, yet peers hesitate, either because they do not want to “ruin someone’s career” or because they assume someone else will act. Counterintelligence operators should treat this as a design requirement. Reporting must be made psychologically easy, procedurally simple, and institutionally supported. A peer report should trigger a calibrated and coordinated response, not an immediate public spectacle. The goal is to get ahead of compromise, not to create an environment where personnel conceal concerns to avoid attention.

The Wei case is a well-evidenced illustration of PRC espionage tradecraft against the United States. Chinese FIS spots and contacts potential insiders at scale through social platforms, cultivates via a plausible identity, normalizes secret communications, introduces tasking that begins with the innocuous and then escalates to classified materials, and pays through channels that are convenient to the target while still supporting handler control and a firm compromise of the asset (U.S. Department of Justice, 2023; U.S. Department of Justice, 2026a; USNI News, 2026). In my professional judgment, this is another textbook example of ego as the primary driver beneath the surface rationalizations. Even when loneliness, financial temptation, or grievance are present, the consistent psychological engine in treasonous espionage is the ego’s appetite to feel important, chosen, liked, befriended, and exceptional. Wei’s conduct underscores that dynamic. He recognized the espionage for what it was, believed he could manage his exposure with encrypted applications, and continued down the road of betrayal. That is not naïveté. It is a belief that rules apply to others, that risk can be controlled by personal cleverness, and that the handler’s attention is a validation of one’s importance in the world. In very few espionage cases is money the true hook. The I.C. likes to think of examples like the Ames case as money-motivated treason. It was, but only partially. Likewise, the I.C. report on Ana Montés lays the blame at the feet of “ideology”. That really wasn’t it. Ego is the line that keeps the source from walking away when conscience and common sense offer an exit. It is almost ALWAYS ego.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Associated Press. (2026, January 12). Former Navy sailor sentenced to 16 years for selling information about ships to Chinese intelligence.
  • U.S. Department of Justice. (2023, August 3). Two U.S. Navy servicemembers arrested for transmitting military information to the People’s Republic of China.
  • U.S. Department of Justice. (2026a, January 13). Former U.S. Navy sailor sentenced to 200 months for spying for China.
  • U.S. Department of Justice. (2026b, January 14). U.S. Navy sailor sentenced to more than 16 years for spying for China.
  • USNI News. (2026, January 13). Sailor to serve 16 year prison sentence for selling secrets to China.