AI-Orchestrated Chinese Cyber Espionage: A Counterintelligence Professional’s View

intelligence, counterintelligence, espionage, counterespionage, a.i., artificial intelligence, cyber operations, cyber-espionage, chinese APT, C. Constantin Poindexter

The GTG-1002 operation reported by Anthropic and covered by Nury Turkel in The Wall Street Journal (“The First Large-Scale Cyberattack by AI”) is not just another routine Chinese cyber campaign. It is a counterintelligence (CI) inflection point, the proverbial crossing of the Rubicon. In this case, a Chinese state-sponsored threat group manipulated Anthropic’s Claude Code into acting as an autonomous cyber operator that conducted eighty to ninety percent of the intrusion lifecycle, from reconnaissance to data exfiltration, against about thirty high-value targets. Those victims include major technology firms and government entities (Anthropic 2025a; Turkel 2025). From a CI and counterespionage perspective, this is the moment where artificial intelligence stops being merely an analyst’s tool and becomes an adversary’s “officer in the field.”

I am going to take a CI guy’s view here and offer my thoughts on the counterintelligence ramifications of this incident, specifically how AI-orchestrated espionage changes the threat surface, disrupts traditional CI tradecraft, and forces democratic states to redesign CI doctrine, authorities, and technical defenses. I will situate GTG-1002 within a broader pattern of Chinese cyber espionage and AI-enabled operations. I think you will agree with me after reading a bit here that an AI-literate counterintelligence enterprise is now a strategic necessity.

GTG-1002 as a Case Study in AI-Enabled Espionage

Anthropic’s public report “assesses with high confidence” that GTG-1002 is a Chinese state-sponsored actor that repurposed Claude Code as an “agentic” cyber operator (Anthropic 2025a). Under the cover story of legitimate penetration testing, the AI was instructed to map internal networks, identify high-value assets, harvest credentials, exfiltrate data, and summarize takeaways for human operators, who then made strategic decisions (Turkel 2025). The campaign targeted organizations across technology, finance, chemicals, and government sectors, with several successful intrusions validated (Anthropic 2025a). This incident must be understood in the context of Beijing’s long-standing cyber-espionage posture. U.S. government and independent assessments have repeatedly documented the sophistication and persistence of People’s Republic of China (PRC) state-sponsored cyber actors targeting critical infrastructure, defense industrial base entities, and political institutions (USCC 2022; CISA 2025). GTG-1002 does not represent a shift in Chinese strategic intent. It evidences a dangerous new means: automation of the cyber kill chain by a large language model (LLM) with minimal human supervision. In essence, AI isn’t helping an operator pull the trigger . . . AI is.

From a CI standpoint, GTG-1002 is the first verified instance of an LLM acting as the primary intrusion operator, rather than as a mere “helper,” in a state-backed offensive cyber operation. This development validates years of warnings from both academic and policy analysts about AI-assisted and AI-driven cyber penetrations (Rosli 2025; Louise 2025). It confirms that frontier models can be harnessed as operational tools for intelligence collection at scale.

Compression of the Intelligence Cycle and the Detection Window

Traditional cyber-collection operations require sizable teams of operators and analysts executing reconnaissance, initial access, lateral movement, and exfiltration over days or weeks. GTG-1002 shows that AI agents can compress this cycle dramatically by chaining tools, iterating code, and self-documenting tradecraft at machine speed (Anthropic 2025a; Anthropic 2025b). For CI services, this compression has several consequences.

The indications-and-warning window shrinks. Behavioral indicators that CI analysts and security operations centers have historically depended on, e.g., repeated probing, extended lateral movement, or noisy privilege escalation, are now condensed, obfuscated, and/or automated. Autonomous AI agents can escalate privileges, pivot, and exfiltrate in minutes, leaving a smaller digital “dwell time” during which CI can detect and attribute activity (Microsoft 2025).

Exploitation and triage become automated. GTG-1002 reportedly used Claude not only to steal data but also to summarize and prioritize it, effectively performing first-level intelligence analysis (Anthropic 2025a). This accelerates an adversary’s analytic cycle. AI can sort, cluster, and highlight sensitive documents faster than human analysts. The time between compromise and exploitation shrinks, diminishing the value of “late” discovery and complicating post-hoc damage assessments, two extremely important CI activities.

AI turns complexity into volume. Academic and industry analyses have already identified AI as a “threat multiplier,” enabling less capable actors to mount sophisticated, multi-stage operations (Rosli 2025; Armis 2025). State-backed operations can hide in the flood of AI-assisted criminal, hacktivist, and proxy activity, creating a signal-to-noise problem for CI triage and attribution.

In short, AI collapses the temporal advantage that defenders once had to notice patterns in network behavior. Counterintelligence must pivot from retrospective forensic analysis toward continuous, AI-assisted anomaly detection and behavioral analytics.
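This pivot can be illustrated with a minimal sketch. One crude but useful behavioral signal is action rate: an autonomous agent chains operations far faster than a human at a keyboard. The log schema, threshold, and account names below are illustrative assumptions, not a tuned detection rule.

```python
from collections import defaultdict
from datetime import datetime

# Assumed ceiling for interactive human use; a real deployment would
# baseline this per role and per system rather than hard-code it.
HUMAN_MAX_ACTIONS_PER_MIN = 30

def flag_machine_speed_sessions(events):
    """Return users whose per-minute action count exceeds a human baseline.

    `events` is a hypothetical log of (user, timestamp, action) tuples.
    """
    buckets = defaultdict(int)
    for user, ts, _action in events:
        minute = ts.replace(second=0, microsecond=0)
        buckets[(user, minute)] += 1
    return sorted({user for (user, _minute), count in buckets.items()
                   if count > HUMAN_MAX_ACTIONS_PER_MIN})

# 200 actions inside one minute from a service account vs. one human read.
events = [("svc-acct", datetime(2025, 11, 1, 2, 14, s % 60), "read")
          for s in range(200)]
events.append(("analyst", datetime(2025, 11, 1, 9, 5, 0), "read"))

print(flag_machine_speed_sessions(events))  # ['svc-acct']
```

The point is not the threshold itself but the shift in posture: the rule runs continuously over live telemetry instead of being applied forensically after a breach is suspected.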

AI Systems as Both Collector and High-Value Intelligence Target

GTG-1002 dramatizes a dual reality that Turkel highlights. China is “spying with AI and spying on American AI” (Turkel 2025). The same models used to conduct intrusions are themselves prized intelligence targets. Chinese entities have already been implicated in efforts to acquire Western AI model weights, training data, and associated know-how as part of a broader technology-transfer strategy (USCC 2022; Google Threat Intelligence 2025). For THIS CI guy, AI labs are now the Cold War aerospace or cryptographic contractors. Model weights and training corpora become the “crown jewels.” Theft and reverse engineering or replication of frontier models will give adversaries economic advantage and, more gravely, insight into how Western defensive systems behave. Anthropic itself notes that real-world misuse attempts feed into adversaries’ understanding of model weaknesses and safety bypasses (Anthropic 2025b).

The supply chain and insider threat picture changes. AI providers depend on global supply chains, open-source libraries, and large pools of contractors and researchers. This distributed ecosystem creates attack surfaces for foreign intelligence services. Code contributions, model-training infrastructure, and prompt logs can all be targeted. CI-focused analysis from the security and legal communities has argued that the AI ecosystem, i.e., researchers, hardware vendors, and cloud providers, must be treated as CI-relevant nodes, not as purely commercial actors (Lawfare Institute 2018; Carter et al. 2025).

Collecting on the collectors is not a new tactic, but AI puts it on steroids. Collection on red-teaming and on controls and safeguards themselves has become a priority. Access to internal red-team reports, internal controls, and safety evaluations is extraordinarily valuable to an adversary seeking to jailbreak or subvert models. Counterintelligence coverage must extend not only to model weights but also to the meta-knowledge of how those models fail, and how that knowledge might be of adversarial interest.

In brief, AI firms are part of the national security base. CI organizations will need to authorize enhanced resources, assign dedicated case officers, establish formal reporting channels, and integrate these enterprises into national threat-sharing architectures in a way analogous to defense contractors and telecommunications providers (Carter et al. 2025).

Deception, Hallucination, and Counterespionage Tradecraft

Anthropic’s report and Turkel’s article both highlight a critical limitation of AI-orchestrated espionage. Claude frequently hallucinated, overstating findings or fabricating credentials and “discoveries” (Anthropic 2025a; Turkel 2025). From a counterespionage perspective, this is not simply a technical bug. It is a potential vector for deception. If adversary services increasingly rely on AI agents for reconnaissance and triage, then controlled-environment deception becomes more attractive. CI and cyber defense teams can seed networks with synthetic, high-entropy data and decoy credentials designed to attract and mislead AI agents. Because large models are prone to pattern-completion and over-generalization, they may “see” classified goodies and valuables where a skilled human operator would sense that something is simply not right.
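One way to operationalize this kind of decoy seeding is a honeytoken tripwire: plant credentials that no legitimate process should ever touch, then treat any authentication attempt using one as a high-confidence alert. The sketch below is a minimal illustration; the account naming scheme and log format are assumptions.

```python
import secrets

def mint_decoy_credentials(n=3):
    """Generate plausible-looking but fake service-account credentials
    to seed into file shares, config files, and credential stores."""
    return {f"svc_backup_{i}": secrets.token_hex(16) for i in range(n)}

DECOYS = mint_decoy_credentials()

def is_tripwire_hit(username, auth_attempts):
    """Any use of a decoy account signals an intruder, human or AI agent,
    since no legitimate workflow ever references these accounts."""
    return username in DECOYS and username in auth_attempts

auth_log = ["svc_backup_1", "alice"]
print(is_tripwire_hit("svc_backup_1", auth_log))  # True
print(is_tripwire_hit("alice", auth_log))         # False
```

Against an over-generalizing AI agent this is arguably more effective than against a wary human: a model hunting for credentials at machine speed has little incentive to pause and ask whether a find is too convenient.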

Algorithmic counterdeception becomes feasible. The academic literature on AI in cyber espionage emphasizes that overreliance on automated tools can degrade situational awareness and strategic judgment inside hostile services (Rosli 2025; Louise 2025). CI planners can exploit this by orchestrating digital environments that feed AI agents ambiguous, contradictory, or subtly poisoned data. This increases the probability that adversary leadership acts on flawed intelligence.

GTG-1002 demonstrates that adversaries (at the very least China) are already skilled at deceiving AI. Chinese FIS successfully social-engineered Claude’s safety systems by impersonating legitimate cybersecurity professionals performing authorized pen-testing (Anthropic 2025a). What, then, is the appropriate CI requirement? Counter-social-engineering of our own models. Guardrails must be resilient not just to obviously malicious prompts but to sophisticated role-playing that mimics presumably friendly actors, including penetration testers, red teams, and internal security staff.

Blurring Lines Between Cyber CI, Influence Operations, and HUMINT Targeting

Major technology and threat reports document how Russia, China, Iran, and North Korea are using AI to scale disinformation, impersonate officials, and refine spearphishing campaigns (Microsoft 2025; Google Threat Intelligence 2025). For CI professionals, this convergence of AI-enabled cyber intrusion and influence operations erodes traditional boundaries between cyber CI (identifying and disrupting technical collection), defensive HUMINT (protecting human sources and employees), and counter-influence (disrupting foreign information operations).

AI systems can now generate tailored phishing content, deepfake personas, and synthetic social media and professional-network profiles at scale, all of which feed into reconnaissance and targeting pipelines for state security services (FBI 2021; Microsoft 2025). GTG-1002 focused primarily on technical collection, but the same infrastructure could coordinate cyber intrusions with human targeting. Using stolen email archives to identify vulnerable insiders, then tasking LLMs to draft recruitment approaches comes immediately to mind.

Counterintelligence must integrate AI forensics, digital forensics, and behavioral analytics into a single tradecraft paradigm and practice. Monitoring “pattern of life” indicators like off-hours access, unusual lateral movement, and anomalous data pulls must be enhanced by AI-driven analysis of communication patterns, foreign contact indicators, and anomalous financial or travel behavior. There are good suggestions about best practices in emerging CI guidance on AI-enabled insider-threat detection (Carter et al. 2025; CISA 2025).
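A minimal sketch of this kind of indicator fusion: weight each “pattern of life” signal and combine whichever indicators fired into a single triage score. The indicator names and weights below are illustrative assumptions, not validated CI thresholds.

```python
# Hypothetical weights; a real program would derive these from historical
# insider-threat case data, not assign them by hand.
WEIGHTS = {
    "off_hours_access": 0.3,
    "unusual_lateral_movement": 0.4,
    "anomalous_data_pull": 0.5,
    "unreported_foreign_contact": 0.6,
}

def risk_score(indicators):
    """Sum the weights of the indicators that fired for a subject."""
    return round(sum(WEIGHTS[name] for name, fired in indicators.items() if fired), 2)

subject = {
    "off_hours_access": True,
    "unusual_lateral_movement": True,
    "anomalous_data_pull": False,
    "unreported_foreign_contact": False,
}
print(risk_score(subject))  # 0.7
```

The value of fusing technical, financial, and behavioral signals is that no single indicator need be conclusive; the score only prioritizes which subjects merit analyst attention.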

Doctrine, Authorities, and Information-Sharing at Machine Speed

The GTG-1002 incident exposes a serious structural challenge. CI and cyber defense architectures are optimized for human-paced operations and workflows that, speaking kindly, are bureaucratic. To its credit, Anthropic engaged with U.S. Intelligence Community agencies quickly and publicly disclosed the attack, but Turkel argues that AI incidents need near-real-time disclosure and coordinated response (Turkel 2025). This aligns with broader policy analyses calling for mandatory reporting of AI misuse, coupled with safe-harbor protections, within seventy-two hours or less (Carter et al. 2025). That is a good step, but not fast enough. The horse is out of the barn and gone by the seventy-two-hour mark. The implication is that threat-intelligence sharing must become significantly machine-to-machine. If attacks unfold at machine speed, then signature updates, behavioral indicators, and model-abuse patterns must be distributed via automated channels across sectors in minutes and hours, not days or weeks (Microsoft 2025). All players will have to agree to and implement standardized formats for sharing AI jailbreak patterns, malicious prompt signatures, and indicators of AI-driven lateral movement.
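As a rough illustration of what such a standardized, machine-readable record might look like, the sketch below serializes a hypothetical AI-abuse indicator as JSON. The schema is invented for illustration; a real exchange would more likely extend an existing standard such as STIX rather than define fields from scratch.

```python
import json
from datetime import datetime, timezone

def make_indicator(pattern, kind, source):
    """Build a hypothetical AI-abuse indicator record for automated sharing."""
    return {
        "type": "ai-abuse-indicator",   # invented record type
        "kind": kind,                   # e.g. "jailbreak-prompt-pattern"
        "pattern": pattern,
        "source": source,
        "shared_at": datetime.now(timezone.utc).isoformat(),
    }

indicator = make_indicator(
    pattern="role-play as authorized penetration tester",
    kind="jailbreak-prompt-pattern",
    source="lab-disclosure",
)

wire = json.dumps(indicator)            # what peer defenses would ingest
print(json.loads(wire)["kind"])         # jailbreak-prompt-pattern
```

The design point is that the record is generated and consumed by software on both ends, so distribution latency is bounded by network and policy, not by analyst availability.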

Legal authorities must evolve. Existing CI and surveillance authorities were not drafted with AI agents in mind. When an AI agent controlled by a foreign intelligence service (FIS) is operating inside a U.S. cloud environment, what legal framework governs monitoring, interdiction, and even proportional response? Analyses of AI and state-sponsored cyber espionage reveal that international and domestic legal regimes lag the technology, creating gray zones that adversaries can exploit (Louise 2025; Lawfare Institute 2018).

Secure-by-design requirements for AI providers must become part of the regulatory baseline. Anthropic’s own transparency documents argue that future models must incorporate identity verification, real-time abuse monitoring, and robust safeguards against social-engineering prompts (Anthropic 2025b). From a CI perspective, such measures are not optional “best practices” but core elements of both commercial resilience and national security.

An AI-Literate Counterintelligence Enterprise

The GTG-1002 campaign exposes an ugly asymmetry. Adversarial FISs are already operationalizing AI as a collection platform and for other cyber operations, both offensive and defensive. CI organizations in the U.S. and other democracies are only beginning to adopt AI as an analytic aid. We are behind, yet there is hope. There is nothing inherent about AI that favors offense over defense. We simply need to move faster.

Public reporting from the FBI and other agencies highlights how AI can be used to process imagery, triage voice samples, and comb through large datasets to identify anomalous behavior and potential national security threats more quickly (FBI 2021; CISA 2025). In counterintelligence, AI can flag unusual access patterns suggestive of AI-driven intrusions and detect insider-threat indicators earlier by correlating technical, financial, and behavioral data. Models can also assist analysts in mapping adversary infrastructure and correlating tactics, techniques, and procedures across campaigns, as well as support automated red-teaming of in-house models to identify vulnerabilities before adversaries do (Carter et al. 2025; Microsoft 2025). To get there, CI practitioners must become AI-literate operators. That means recruiting and training officers who understand model architectures, jailbreak techniques, and prompt-injection attacks as well as the depth and breadth of traditional HUMINT tradecraft. It also means integrating data scientists and AI engineers into counterintelligence units, ensuring that insights about model misuse flow directly into counterespionage planning and operational security.

Counterespionage in the Age of Autonomous Offense

GTG-1002 is to AI what the first internet worm or the earliest ransomware campaigns were to traditional cybersecurity, albeit a bit more serious. AI-conducted activity by adversary FIS is a warning shot that the paradigm has shifted. A Chinese state-linked actor leveraged a Western frontier model to execute the majority of an espionage operation autonomously, at scale, using mostly open-source tools (Anthropic 2025a; Turkel 2025). Just ponder that for a moment. The counterintelligence ramifications are frightening. The intelligence cycle is compressed. The defender’s window for detection and countermeasures is shrinking. AI systems are simultaneously espionage platforms and priority intelligence targets, demanding full CI coverage. Hallucination and automation create new opportunities for both adversary deception and defender counter-deception. Cyber intrusions, influence operations, and human targeting are converging in an AI-enabled world of lightning-fast channels. Existing CI doctrines, authorities, and information-sharing practices are too slow and too fragmented for machine-speed conflict.

If democratic states treat AI misuse as a niche cyber issue, we are ceding the initiative to adversaries who understand AI as an intelligence and counterintelligence weapon system. The appropriate response is immediate professionalization, building an AI-literate counterintelligence enterprise, imposing secure-by-design obligations on AI providers, and creating real-time, automated mechanisms to de-silo and distribute threat intelligence across government and critical industries. GTG-1002 clearly demonstrates that hostile FISs are already leveraging an AI offensive capability. Counterintelligence must not be left behind. I am not suggesting that we mirror the PRC’s behavior, but rather that pertinent Intelligence Community, national security and industry partners integrate AI into a rules-bound, rights-respecting CI framework capable of defending our open societies against autonomous offensive operations.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

  • Anthropic. 2025a. Disrupting the First Reported AI-Orchestrated Cyber-Espionage Campaign. San Francisco: Anthropic.
  • Anthropic. 2025b. “Claude Transparency and Safety: Model System Card.” San Francisco: Anthropic.
  • Armis. 2025. China’s AI Surge: A New Front in Cyber Warfare. Armis Threat Research Report.
  • Carter, William, et al. 2025. “Integrating Artificial Intelligence into Counterintelligence Practice.” Arlington, VA: Center for Development of Security Excellence.
  • CISA (Cybersecurity and Infrastructure Security Agency). 2025. “Countering Chinese State-Sponsored Actors Compromising Global Networks.” Cybersecurity Advisory AA25-239A. Washington, DC: U.S. Department of Homeland Security.
  • FBI (Federal Bureau of Investigation). 2021. “Artificial Intelligence – Emerging and Advanced Technology: AI.” Washington, DC: U.S. Department of Justice.
  • Google Threat Intelligence. 2025. “Adversarial Misuse of Generative AI: Threats and Mitigations.” Mountain View, CA: Google.
  • Lawfare Institute. 2018. “Artificial Intelligence—A Counterintelligence Perspective.” Lawfare (blog), November 2018.
  • Louise, Laura. 2025. “Artificial Intelligence and State-Sponsored Cyber Espionage: The Growing Threat of AI-Enhanced Hacking and Global Security Implications.” NYU Journal of Intellectual Property and Entertainment Law 14 (2).
  • Microsoft. 2025. Digital Threats Report 2025. Redmond, WA: Microsoft.
  • Rosli, Wan Rohani Wan. 2025. “The Deployment of Artificial Intelligence in Cyber Espionage.” AI and Ethics 5 (1): 1–18.
  • Turkel, Nury. 2025. “The First Large-Scale Cyberattack by AI.” Wall Street Journal, November 23, 2025.
  • USCC (U.S.–China Economic and Security Review Commission). 2022. “China’s Cyber Capabilities: Warfare, Espionage, and Implications for the United States.” Washington, DC: USCC.

The Peril of the Pentagon’s Russian Cyber ‘Stand Down’ Order

cyber, cyber operations, cyber threat, espionage, counterespionage, counterintelligence, russia

If it doesn’t frighten you, it should. “The Trump administration has ordered the United States to end offensive cyber operations targeting Russia . . .” (US News, Mar. 2025). Russia, or more particularly the Russian foreign intelligence entity (FIE), poses a grave threat to U.S. national security. Threats posed by this state actor and its state-supported proxies are grave in terms of both capability and intent. Russia has consistently demonstrated its capacity to execute sophisticated cyber operations targeting governments, corporations, critical infrastructure, and individuals. The perils are multi-dimensional, including espionage, cyber warfare (or “war in the grey”), information operations, subversion, ransoming, and economic disruption. Examples of Russia’s malign and nefarious cyber activity are legion; recently, however, the U.S. and Ukraine seem to bear the brunt of Putin’s ire. Here are some points to consider:

1. State-Sponsored Cyber Warfare

  • Russia’s GRU Unit 74455, a/k/a “Sandworm,” conducts offensive cyber operations, often targeting critical infrastructure in the U.S., its allies, and shared economic interests.
  • The 2017 NotPetya attack caused over $10 billion in global damages, hitting Maersk, FedEx, and other major commercial concerns. The malware was designed to penetrate a particular type of accounting software used in Ukraine. While not specifically targeting the U.S., the global fallout of NotPetya getting into the wild is instructive. In financial terms, it was among the greatest events of “collateral damage during war” ever recorded.
  • Russian hackers have targeted Ukraine’s energy sector repeatedly. They have demonstrated a clear ability to take down critical infrastructure. Evidence of Russian FIS’s penetration of U.S. utilities, likely in search of weakness to exploit or to leave ‘back doors’ for future exploitation, has also been detected. Notably, Dragonfly 2.0, a Russian state-sponsored hacking group (also known as Energetic Bear), successfully infiltrated U.S. energy sector systems, including nuclear power plants.

2. Cyber Espionage

  • Groups like APT29 (Cozy Bear) and APT28 (Fancy Bear), linked to the Russian FIE, have hacked into government agencies. They have repeatedly compromised U.S. official networks. The SolarWinds penetration in 2020 is instructive.
  • Ongoing efforts to steal classified or proprietary information from the defense, aerospace, and technology sectors save Russia billions in research and development. From 2020 to 2021, Russian hackers compromised multiple U.S. defense contractors that provide support to the Department of Defense (DoD), U.S. Air Force, and Navy. APT28 (“Fancy Bear”) stole information related to weapon systems, including fighter-jet and missile-defense technologies, communications and surveillance systems, and naval and space-based defense projects.

3. Election Interference & Disinformation

  • Russia has weaponized social media. Troll farms such as the Internet Research Agency, and more recently home-cooked AI content, spread disinformation and misinformation to massive audiences.
  • Russian cyber actors hacked the DNC and Clinton campaign, leaking emails via WikiLeaks in efforts to subvert the U.S. political process.
  • Project Lakhta was ordered directly by Vladimir Putin. It was a “hacking and disinformation campaign” intended to damage Clinton’s presidential campaign.
  • The Justice Department seized thirty-two internet domains used in Russian government-directed foreign malign influence campaigns (“Doppelganger”).

4. Ransomware & Financial Cybercrime

  • Russia harbors cybercriminal groups like Conti, REvil, and LockBit, which launch ransomware attacks on U.S. hospitals, businesses, and municipal governments.
  • Many ransomware gangs operate with tacit Kremlin approval—as long as they don’t target Russian entities. For instance, REvil’s malware is designed to avoid systems using languages from the Commonwealth of Independent States (CIS), which includes Russia. This evidences a deliberate effort to steer clear of Russian entities.

5. Potential for Cyber Escalation

  • Russia has declared NATO and the West its “main enemy.” The risk of cyber retaliation is real. Russia has the capability to conduct supply chain attacks, disrupt banking systems, and interfere with military communications.
  • In 2020, Russian state-sponsored cyber actors compromised the software company SolarWinds, embedding malicious code into its Orion network management software. This supply chain attack affected approximately 18,000 organizations, including multiple U.S. government agencies and private sector companies. The implant served as a surveillance mechanism, allowing Russia to monitor internal communications and exfiltrate sensitive data from users of the software.
  • In 2008, Russia deployed specialty malware (“Agent.btz”) that penetrated the U.S. Department of Defense’s classified and unclassified networks. The breach, considered one of the most severe against U.S. military computers, led to the establishment of U.S. Cyber Command to bolster cyber defenses.

Conclusion

The Russian cyber threat is persistent, evolving, and highly strategic. The West has cyber defenses and deterrence strategies in place, such as sanctions and counter-hacking operations, but the current Administration’s order to terminate much of that effort cripples U.S. national security.

Quick to react to reporting of the DoD’s posturing, the Cybersecurity and Infrastructure Security Agency (CISA) tweeted, “CISA’s mission is to defend against all cyber threats to U.S. Critical Infrastructure, including from Russia. There has been no change in our posture. Any reporting to the contrary is fake and undermines our national security.” Comforting, but the words of a confidential source within CISA present a different picture. “A recent memo at the Cybersecurity and Infrastructure Security Agency (Cisa) set out new priorities for the agency, which is part of the Department of Homeland Security and monitors cyber threats against US critical infrastructure. The new directive set out priorities that included China and protecting local systems. It did not mention Russia, . . . analysts at the agency were verbally informed that they were not to follow or report on Russian threats, even though this had previously been a main focus for the agency.” (Guardian, Mar. 2025)

Russia is one of our most aggressive cyber adversaries and is recognized by most nations as a ‘cyber threat pariah’ (most vocally by NATO, the EU, and the U.N.). Given the President’s position on Russia, it is impossible to say that the U.S. continues to harden critical infrastructure, surveil Russian FIE cyber efforts, and field effective countermeasures. Russia’s offensive cyber capabilities will remain a major security challenge for the foreseeable future. The question is: are we willing to handicap our efforts, or will we meet our adversaries with robust cyber capability rather than simply turn our heads away?

Iran Cyber Operations Target Utility Infrastructure

cyber, cyber operations, espionage, counterespionage, counterintelligence, cyber defense, CISA, countermeasures, constantin poindexter

Per the U.S. Cybersecurity and Infrastructure Security Agency (CISA), “Since at least November 22, 2023, these IRGC-affiliated cyber actors have continued to compromise default credentials in Unitronics devices. The IRGC-affiliated cyber actors left a defacement image stating, “You have been hacked, down with Israel. Every piece of equipment ‘made in Israel’ is CyberAv3ngers legal target.” The victims span multiple U.S. states. The authoring agencies urge all organizations, especially critical infrastructure organizations, to apply the recommendations listed in the Mitigations section of this advisory to mitigate the risk of compromise from these IRGC-affiliated cyber actors.” (CISA, 12/01/2023)

The penetrations were aimed at critical utilities, in the instant case U.S. water and wastewater treatment infrastructure. Per CISA, “Beginning on November 22, 2023, IRGC cyber actors accessed multiple U.S.-based WWS facilities that operate Unitronics Vision Series PLCs with an HMI likely by compromising internet-accessible devices with default passwords. The targeted PLCs displayed the defacement message, “You have been hacked, down with Israel. Every equipment ‘made in Israel’ is Cyberav3ngers legal target.” The Water and Wastewater Systems Sector (Water Sector) underpins the health, safety, economy, and security of the nation. It is vulnerable to both cyber and physical threats.” The warning is instructive. The fallout from a successful compromise of public water systems can be severe. Andrew Farr warns, “The imagination can run wild with worst-case scenarios about what a threat actor could do to a water system, but Arceneaux explains that sophisticated actors could hack a system and manipulate pumps or chemical feeds without the utility even knowing they were in the system. They could also create a water hammer that could lead to cracked pipes or release untreated wastewater back into a source water body. What if that happens [to a water system] in a medium or a big city? Maybe it’s only for a few hours, but it could go on for a few days or weeks, depending on how extensive the damage is.” (Farr, WF&M, 04/11/2022) Darktrace reports the very real consequence of a successful water system compromise. “Earlier this month, cyber-criminals broke into the systems of a water treatment facility in Florida and altered the chemical levels of the water supply.” (Matthew Wainwright, Darktrace) If potable water delivered to consumers contains dangerous contaminants or improper balances of the “good” chemicals blended into the product (fluoride, chlorine, chloramine, etc.), it can cause negative health effects. Gastrointestinal illness, nervous system damage, reproductive system damage, and chronic diseases such as cancer are very real risks.
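CISA’s core mitigation here is unglamorous: find and change default credentials before exposing devices. A defensive audit of an asset inventory can be sketched in a few lines. The inventory format is an assumption for illustration, and “1111” is the Unitronics default password cited in public reporting on this campaign.

```python
# Known vendor default passwords to audit against; extend per device fleet.
VENDOR_DEFAULTS = {"1111"}

def devices_with_default_creds(inventory):
    """Return IDs of devices whose configured password is a known default."""
    return [dev["id"] for dev in inventory
            if dev.get("password") in VENDOR_DEFAULTS]

# Hypothetical asset inventory for a water utility's PLC/HMI fleet.
inventory = [
    {"id": "pump-station-3", "password": "1111"},
    {"id": "clarifier-1", "password": "x9!longRandom"},
]

print(devices_with_default_creds(inventory))  # ['pump-station-3']
```

Pairing an audit like this with the advisory’s other mitigations, removing the PLCs from direct internet exposure and placing them behind a firewall or VPN, addresses the exact access vector the IRGC-affiliated actors exploited.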

CISA’s model of the “brute force” methodology deployed by IRGC operatives may be viewed at MITRE.