AI as a Force Multiplier in Recent Intrusion Operations


AI as a Force Multiplier in Cyber Intrusions: Counterintelligence Lessons from the Amazon Threat Intelligence FortiGate Campaign, AI-Assisted Attack Planning, and Scalable Post-Exploitation Tradecraft

From a counterintelligence professional’s perspective, I read Amazon Threat Intelligence’s February 2026 report less as a novelty story about “hackers using AI” and more as a warning about a structural change in operational economics. The important point is not that a threat actor used a large language model. It is that a presumably low-to-medium skill, financially motivated Russian-speaking actor was able to scale intrusion activity across more than 600 FortiGate devices in over 55 countries in roughly five weeks by integrating commercial AI services into every phase of the attack workflow (Moses, 2026). In counterintelligence terms, this is a capability amplification event. AI did not make the actor sophisticated. It made the actor productive (Moses, 2026).

That distinction matters. Amazon’s analysis is unusually valuable because it documents both sides of the phenomenon. On one hand, the actor used AI to generate attack plans, write tooling, sequence actions, and coordinate operations at a tempo that would traditionally imply a larger team. On the other hand, the same actor repeatedly failed when facing hardened environments, patched systems, or nonstandard conditions. Amazon explicitly notes that the actor could not reliably compile custom exploits, debug failures, or creatively pivot beyond straightforward automated paths (Moses, 2026). This is exactly what a counterintelligence officer should expect from a force multiplier: improved throughput without equivalent gains in judgment, tradecraft, or adaptability.

The Amazon case is especially useful because it separates hype from mechanism. The campaign did not depend on exotic zero-days. Amazon states that no FortiGate vulnerability exploitation was observed in the campaign it analyzed; instead, the actor exploited exposed management interfaces, weak credentials, and single-factor authentication, then used AI to execute these known methods at scale (Moses, 2026). That is a profound lesson for defenders. AI is not changing the laws of intrusion. It is compressing the time and labor required to exploit organizations that still fail at fundamentals.

From a counterintelligence perspective, this changes how we should think about indications and warnings. Historically, broad multi-country infrastructure access, custom scripts in multiple languages, and organized post-exploitation playbooks would often suggest a resourced team such as a foreign intelligence service (FIS), a state-supported private operator, or at least a mature criminal crew. Amazon’s report shows that this inference is no longer reliable. The actor’s infrastructure contained numerous scripts and dashboards with hallmarks of AI generation, and Amazon concluded that a single actor or very small group likely produced a toolkit whose volume would previously imply a development team (Moses, 2026). In intelligence analysis, this is a warning against legacy heuristics. Scale is no longer a clean proxy for organizational size or skill.

Amazon’s “AI as a force multiplier” section is the core of the matter. The actor used at least two distinct commercial LLM providers in complementary ways. One served as the primary tool developer and operational assistant, while another was used as a supplementary planner when the actor needed help pivoting inside a compromised network (Moses, 2026). In one observed instance, the actor reportedly submitted a victim’s internal topology, hostnames, credentials, and identified services to obtain a step-by-step compromise plan (Moses, 2026). For counterintelligence professionals, this is not just a cyber issue. It is a tradecraft issue. The actor is externalizing planning and decision-support functions to commercial platforms, effectively outsourcing parts of the “staff work” that junior operators or analysts would otherwise perform.

This pattern aligns with broader reporting from major providers and threat intelligence teams. Google Threat Intelligence Group’s February 2026 AI Threat Tracker documents growing adversary integration of AI across reconnaissance, phishing enablement, malware/tooling development, and post-compromise support, while also emphasizing that it has not yet observed “breakthrough capabilities” that fundamentally change the threat landscape (Google Threat Intelligence Group, 2026). That is highly consistent with the Amazon case: AI is improving speed, coverage, and consistency more than it is producing genuine operational innovation (Google Threat Intelligence Group, 2026; Moses, 2026). Microsoft’s Digital Defense Report 2025 similarly describes adversaries using generative AI for scaling social engineering, reconnaissance, code generation, exploit development support, and automation of exfiltration-to-lateral movement pipelines (Microsoft, 2025). The convergence across independent sources is notable. Different organizations are observing the same pattern from different vantage points.

Anthropic’s 2025 report on “vibe hacking” extends this trend in a particularly important direction. Anthropic described a disrupted criminal operation in which an actor used an AI coding agent not only as a technical consultant but as an active operator embedded into the attack lifecycle, supporting reconnaissance, credential harvesting, penetration, and extortion-related tasks (Anthropic, 2025). Whether one agrees with every framing choice in vendor reports, the operational implication is clear: AI-enabled actors are increasingly turning language models and coding agents into workflow engines. They are not merely asking for snippets of code. They are building repeatable campaign infrastructure around AI-assisted execution (Anthropic, 2025; Moses, 2026).

For counterintelligence practitioners, the strategic concern is not limited to criminal ransomware precursors. The same force-multiplier logic applies to espionage, access development, insider targeting, and influence preparation. Google’s reporting notes that government-backed actors are using AI for technical research, target development, and rapid phishing lure generation, including reconnaissance activities that support subsequent operations (Google Threat Intelligence Group, 2026). The FBI has also publicly warned that AI increases the speed, scale, and realism of phishing and social engineering, including voice and video cloning (FBI San Francisco, 2024). In the CI domain, this means hostile services and proxies can expand target coverage, improve linguistic quality, and accelerate social graph exploitation with lower manpower. AI narrows the gap between intent and execution.

There is also an analytical security issue that deserves more attention: data exposure to AI platforms during live operations. Amazon’s report indicates that the actor submitted internal victim topology, credentials, and service data into a commercial AI workflow (Moses, 2026). From a counterintelligence standpoint, this is a double-edged phenomenon. It may increase adversary effectiveness, but it also creates potential collection and disruption opportunities, depending on provider visibility, legal authorities, and industry cooperation. More importantly, it means that operationally sensitive network intelligence is now moving through third-party AI services as part of adversary tradecraft. That should influence how we think about public-private partnerships, lawful reporting channels, and rapid deconfliction.

The Fortinet context reinforces a second CI principle: adversary success often begins with governance failure, not advanced tradecraft. Fortinet’s January 2026 PSIRT analysis documented abuse of FortiCloud SSO and repeatedly emphasized best practices such as restricting administrative access, disabling vulnerable SSO paths, and monitoring for malicious admin creation and anomalous logins (Windsor, 2026). NIST’s National Vulnerability Database entry for CVE-2026-24858 further confirms the seriousness of the authentication bypass exposure affecting multiple Fortinet product lines when FortiCloud SSO was enabled (NIST NVD, 2026). Even if the Amazon campaign did not depend on that specific exploit path, the environment is the same: internet-exposed edge infrastructure, identity weaknesses, and uneven patching create permissive terrain that AI-enabled actors can mine at scale (Moses, 2026; Windsor, 2026; NIST NVD, 2026).

The practical implication is that counterintelligence and cybersecurity must converge more tightly on defensive prioritization. In many organizations, CI is still treated as a narrow insider-threat or foreign-intelligence problem, while cyber defense handles perimeter hygiene and incident response. That separation is increasingly artificial. AI-augmented threat actors blur the boundaries between criminal and state-adjacent tradecraft, between opportunistic access and strategic exploitation, and between cyber intrusion and intelligence preparation of the environment. Europol’s 2025 organized crime threat assessment reporting, as reflected in major coverage, likewise points to AI lowering costs and increasing the scale and sophistication of criminal operations, including cyber-enabled activity and proxy behavior that can intersect with geopolitical interests (Reuters, 2025). The ecosystem is converging.

In my view, the correct response is not panic over “autonomous AI hackers.” Amazon’s report itself argues against that caricature. The actor remained brittle, shallow, and dependent on weak targets (Moses, 2026). The right response is disciplined adaptation in three areas.

Organizations must treat identity and edge administration as counterintelligence terrain, not merely IT hygiene. Exposed management interfaces, weak credentials, and single-factor authentication are now high-confidence enablers of AI-scaled intrusion campaigns (Moses, 2026). MFA, restricted administration paths, credential rotation, and segmentation are not basic controls anymore; they are anti-scaling controls.

Defenders need telemetry designed for workflow detection rather than malware signatures. Amazon explicitly notes the campaign’s use of legitimate open-source tools and recommends behavioral detection over IOC dependence (Moses, 2026). That aligns with the broader AI-enabled threat model. When AI helps actors orchestrate legitimate tools more efficiently, the artifact footprint looks cleaner while the behavioral pattern becomes more machine-like and more repeatable.
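One way to make "workflow detection" concrete is timing analysis. The toy heuristic below is my own illustrative sketch, not anything published by Amazon: it flags administrative sessions whose inter-command pacing is suspiciously regular, the machine-like cadence that AI-orchestrated playbooks tend to produce. The 0.15 threshold is an arbitrary assumption that a real deployment would tune against baseline data.

```python
from statistics import mean, pstdev

def looks_machine_like(timestamps, cv_threshold=0.15):
    """Flag a session whose inter-command timing is implausibly regular.

    Human operators produce irregular gaps between actions; scripted,
    AI-orchestrated playbooks tend toward near-constant pacing. The
    coefficient-of-variation threshold is an illustrative assumption.
    """
    if len(timestamps) < 5:
        return False  # too few events to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    return pstdev(gaps) / avg < cv_threshold  # low variation = machine-like

# A scripted session fires a command every ~2 seconds; a human does not.
scripted = [0.0, 2.0, 4.1, 6.0, 8.1, 10.0]
human = [0.0, 3.0, 21.5, 24.0, 95.0, 97.5]
print(looks_machine_like(scripted))  # True
print(looks_machine_like(human))     # False
```

The same structure extends to other behavioral features, such as identical command sequences replayed across many hosts, which matter precisely because each individual tool in the session is legitimate.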

Intelligence organizations and enterprises should expand analytic models for adversary assessment. When a low-skill actor can produce high-volume tooling and broad campaign coverage, we must stop equating output polish with strategic sophistication. The key discriminators will be resilience under friction, adaptation under failure, target discipline, and operational security. In the Amazon case, the actor’s poor OPSEC and inability to improvise revealed the underlying limitations despite impressive scale (Moses, 2026). Those are precisely the indicators that counterintelligence tradecraft has always prioritized.

My take: the AI force-multiplier threat is real, but its significance is often misunderstood. It resembles the brute-force approach of the first generation of hackers, but on steroids, and AI is the steroid. The immediate danger is not superintelligence. It is operational leverage. AI gives mediocre actors the ability to behave like a nation-state intelligence service against poorly defended targets. It accelerates reconnaissance, scripting, planning, and social engineering. It reduces labor costs and time-to-action. It increases campaign breadth. And it does all of this without solving the deeper human problems of judgment, creativity, and tradecraft. For counterintelligence professionals, that means the threat landscape is becoming more crowded, faster-moving, and harder to triage. The strategic answer remains the same as ever: protect critical access, harden identity, improve detection, and refine analytic tradecraft. What has changed is the speed at which failure to do so will be exploited (Moses, 2026; Google Threat Intelligence Group, 2026; Microsoft, 2025; Anthropic, 2025; FBI San Francisco, 2024).

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Anthropic. (2025, August). Vibe hacking: How cybercriminals are using AI coding agents to scale data extortion operations. Anthropic.
  • Bleiberg, J. (2026, February 25). Hackers used AI to breach 600 firewalls in weeks, Amazon says. Insurance Journal.
  • FBI San Francisco. (2024, May 8). FBI warns of increasing threat of cyber criminals utilizing artificial intelligence. Federal Bureau of Investigation.
  • Google Threat Intelligence Group. (2026, February 12). GTIG AI Threat Tracker: Distillation, experimentation, and (continued) integration of AI for adversarial use. Google Cloud Blog.
  • Microsoft. (2025). Microsoft Digital Defense Report 2025: Safeguarding trust in the AI era. Microsoft.
  • Moses, C. (2026, February 20). AI-augmented threat actor accesses FortiGate devices at scale. AWS Security Blog.
  • National Institute of Standards and Technology, National Vulnerability Database. (2026). CVE-2026-24858 detail. NVD.
  • Reuters. (2025, March 18). Europol warns of AI-driven crime threats. Reuters.
  • Windsor, C. (2026, January 22). Analysis of Single Sign-On Abuse on FortiOS. Fortinet PSIRT Blog.

Signal: Secure for Intelligence Practitioners Now, and Ready for the Quantum Era


Signal has earned its reputation in intelligence, counterintelligence, and investigative communities for a practical reason. (I love it, and you should too.) The tool was engineered around adversarial assumptions that align with real-world asset targeting. Those assumptions include state-grade collection, covert and often illegal interception, endpoint compromise, credential theft, and long-term bulk retention for future exploitation. Signal is not conventional messaging with security added afterward. It is an integrated protocol suite for key agreement, per-message key evolution, and compromise recovery, supported by open specifications and sustained cryptographic hardening.

From an intelligence professional’s perspective, Signal is compelling because it is designed to remain resilient under partial failure. If an attacker wins a battle by capturing a key, briefly cloning a device, or recording traffic for years, Signal aims to prevent that single win from turning into durable, strategic access. This damage containment model aligns with counterintelligence priorities. Limit the blast radius, shorten adversary dwell time, and force repeated effort that increases the chance of detection.

The Double Ratchet and Per-Message Keys That Constrain Damage

At the core of Signal message confidentiality is the Double Ratchet algorithm, designed by Trevor Perrin and Moxie Marlinspike (Perrin and Marlinspike, 2025). Operationally, the Double Ratchet matters because it delivers properties that align with intelligence tradecraft realities.

Forward secrecy ensures that compromising a current key does not reveal prior message content. Adversaries routinely collect ciphertext in bulk and then hunt for a single point of decryption leverage later through device seizure, insider access, malware, or legal process. Forward secrecy frustrates that strategy by ensuring that earlier captured traffic does not become an intelligence windfall if a key is eventually exposed (Perrin and Marlinspike, 2025).

Post-compromise security (“break-in recovery”) addresses a scenario intelligence practitioners plan for: temporary device compromise. Border inspections, opportunistic theft, coercive access, or a short-lived implant can occur. The Double Ratchet includes periodic Diffie-Hellman updates that inject fresh entropy, while its symmetric ratchet derives new message keys continuously. Once the compromised window ends, later message keys become cryptographically unreachable to the attacker, provided the attacker is no longer persistently on the endpoint (Perrin and Marlinspike, 2025). This is not exaggerated marketing. It is disciplined key evolution that deprives adversarial FIS and corporate spies of indefinite reuse of stolen key material.
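The symmetric half of this machinery is simple enough to show in miniature. The sketch below follows the KDF-chain construction described in the Double Ratchet specification, HMAC keyed by the chain key with distinct one-byte labels for the message key and the next chain key. It is a teaching sketch, not Signal's production code, and it omits the Diffie-Hellman ratchet entirely.

```python
import hashlib
import hmac

def kdf_step(chain_key: bytes):
    """Advance the symmetric ratchet one step.

    Per the Double Ratchet spec's recommended KDF chain, the constant
    0x01 derives the one-time message key and 0x02 the next chain key.
    """
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

ck = hashlib.sha256(b"initial shared secret").digest()  # demo root key only
keys = []
for _ in range(3):
    mk, ck = kdf_step(ck)  # the old chain key is discarded each step
    keys.append(mk)

# Forward secrecy in miniature: holding the *current* chain key gives no
# path back to earlier message keys, because HMAC-SHA256 cannot be
# inverted to recover the previous chain key.
assert len(set(keys)) == 3
```

Break-in recovery comes from the other half of the design: periodic Diffie-Hellman exchanges replace the chain key with material the attacker never saw.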

This changes incident-response logic. A single brief compromise no longer automatically means permanent exposure of the entire past and future message history. Instead, the attacker must maintain persistence to retain visibility. That is a higher operational burden and a higher detection risk.

X3DH, PQXDH, and the Move Against Harvest Now, Decrypt Later

Signal historically used X3DH, Extended Triple Diffie-Hellman, for asynchronous session establishment. This is vital in mobile environments where recipients are often offline. X3DH uses long-term identity keys and signed prekeys for authentication while preserving forward secrecy and deniability properties (Marlinspike and Perrin, 2016). The strategic risk landscape shifted with the plausibility of cryptographically relevant quantum computing. The threat is not only future real-time decryption. It is harvest now/decrypt later. Bulk interception today is strategic, with the expectation that future breakthroughs, including quantum, could unlock stored traffic. Signal responded by introducing PQXDH, Post-Quantum Extended Diffie-Hellman, which replaces the session setup with a hybrid construction combining classical elliptic-curve Diffie-Hellman using X25519 with a post-quantum key encapsulation mechanism derived from CRYSTALS-Kyber (Signal, 2024a). The operational implication is direct. An adversary would need to break both the classical and the post-quantum components to reconstruct the shared secret (Signal, 2024a).

Hybrid key establishment reflects conservative intelligence engineering. Migrate early, avoid sudden cutovers, and reduce reliance on a single new primitive. This also matters because the post-quantum component corresponds to what NIST standardized as ML-KEM, derived from CRYSTALS-Kyber, in FIPS 203 (NIST, 2024a; NIST, 2024b). NIST standardization does not guarantee invulnerability. It does increase confidence that the primitive has been scrutinized and is being adopted as a baseline for high-assurance environments.
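The hybrid combiner is the easy part to illustrate. In the sketch below the two key-agreement outputs are simulated with random bytes, where the real protocol derives them from X25519 and from the Kyber/ML-KEM encapsulation, and a minimal single-block HKDF (RFC 5869) stands in for PQXDH's actual key-derivation schedule; the label string is my own.

```python
import hashlib
import hmac
import os

def hkdf(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal single-block HKDF (RFC 5869) over SHA-256."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()  # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

# Stand-ins for the two key-agreement outputs. In PQXDH these come from
# X25519 Diffie-Hellman and from a Kyber/ML-KEM encapsulation; random
# bytes are used here purely to illustrate the combiner.
ecdh_secret = os.urandom(32)  # classical component
kem_secret = os.urandom(32)   # post-quantum component

session_key = hkdf(ecdh_secret + kem_secret, b"hybrid-demo")

# Breaking only one component leaves the attacker missing half of the
# KDF input, so the derived session key remains out of reach.
attacker_guess = hkdf(ecdh_secret + os.urandom(32), b"hybrid-demo")
assert session_key != attacker_guess
```

Concatenating both secrets before the KDF is what makes the construction "hybrid": the output is unpredictable unless the attacker recovers both inputs.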

Signal also makes an important clarity point in its PQXDH materials. PQXDH provides post-quantum forward secrecy, while mutual authentication in the current revision remains anchored in classical assumptions (Signal, 2024b). Practitioners benefit from that precision because it defines exactly what is post-quantum today.

SPQR and Post Quantum Ratcheting for Long-Lived Operations

Session establishment is only one part of the lifecycle problem. A capable collector can record traffic for long periods. If quantum capabilities emerge later, the question becomes whether ongoing key evolution remains safe against future decryption. Signal’s introduction of the Sparse Post Quantum Ratchet, SPQR, directly addresses continuity by adding post-quantum resilience to the ratcheting mechanism itself (Signal, 2025).

SPQR extends the protocol so that not only the initial handshake but also later key updates gain quantum-resistant properties, while preserving forward secrecy and post-compromise security (Signal, 2025). For intelligence practitioners, this matters because long-lived operational relationships are common. Assets, handlers, investigative sources, and inter-team coordination can persist for months or years. A protocol that hardens only the handshake helps. A protocol that hardens ongoing rekeying is more aligned with the real adversary model of persistent collection.
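The continuity idea can be shown with a toy ratchet that periodically folds fresh key material into its chain. In the sketch below, os.urandom stands in for a post-quantum KEM shared secret, and the every-third-step cadence is an arbitrary illustration; SPQR's actual encapsulation schedule and wire format are specified in Signal's own materials.

```python
import hashlib
import hmac
import os

def advance(chain_key: bytes, fresh_entropy: bytes = b"") -> bytes:
    """One ratchet step, optionally mixing in fresh shared-secret bytes."""
    return hmac.new(chain_key, b"\x02" + fresh_entropy, hashlib.sha256).digest()

ck = os.urandom(32)
history = [ck]
for step in range(1, 7):
    if step % 3 == 0:
        # Periodically mix in a fresh shared secret. SPQR derives this
        # from a post-quantum KEM; os.urandom is a stand-in here.
        ck = advance(ck, fresh_entropy=os.urandom(32))
    else:
        ck = advance(ck)
    history.append(ck)

# An attacker who stole the step-0 chain key could replay the purely
# deterministic steps, but the entropy injected at step 3 makes every
# later key unpredictable to them, even with future quantum resources,
# assuming the KEM itself holds.
assert len(set(history)) == len(history)
```

The point of the toy is the injection, not the schedule: post-quantum hardening of the handshake alone would leave the deterministic chain exposed, while hardened rekeying keeps restoring secrecy over the life of the session.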

Academic work has analyzed the evolution from X3DH to PQXDH in the context of Signal’s move toward post-quantum security and frames PQXDH as mitigation against harvest now/decrypt later risk at scale (Katsumata et al., 2025). That framing fits intelligence risk management. Confidentiality is evaluated against patient, well-resourced adversaries.

Formal Analysis, Open Specifications, and Why They Matter Operationally

Practitioners should be skeptical of security claims that cannot withstand external review. Signal’s protocol suite benefits from public specifications and sustained cryptographic scrutiny. A widely cited formal analysis models the protocol’s core security properties and examines its ratchet-based design in detail (Cohn-Gordon et al., 2017). No protocol is proven secure against every real-world failure mode. Formal methods and peer-reviewed analysis reduce the chance that structural weaknesses remain hidden. Operationally, this supports reliability. When you rely on a tool for sensitive work, you evaluate whether the claims are testable, whether failure modes are documented, and whether improvements can be validated.

Metadata Constraints, Sealed Sender, and the Role of Tradecraft

Message content confidentiality is only part of intelligence security. Metadata can be operationally decisive. Who communicates with whom, when, and how often can create damaging inferences. Signal’s Sealed Sender was designed to reduce the sender information visible to the service during message delivery (Wired Staff, 2018). Research examines Sealed Sender and proposes improvements while discussing network-level metadata such as IP address exposure and the implications for anonymity tooling (Martiny et al., 2021). Additional academic work discusses traffic analysis risks that can persist in group settings even when sender identity is partially obscured (Brigham and Hopper, 2023).
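The envelope discipline behind Sealed Sender reduces to one rule: the sender's identity travels only inside the encrypted body, so the delivery service routes on the recipient alone. The toy model below illustrates that rule; a random XOR pad stands in for real authenticated encryption, and the actual design involves sender certificates and delivery tokens.

```python
import json
import os

def seal(sender_id: str, plaintext: str, recipient_id: str):
    """Toy sealed envelope: sender identity lives only in the ciphertext."""
    body = json.dumps({"from": sender_id, "msg": plaintext}).encode()
    key = os.urandom(len(body))  # stands in for recipient-held key material
    ciphertext = bytes(a ^ b for a, b in zip(body, key))
    return {"to": recipient_id, "ciphertext": ciphertext.hex()}, key

def open_sealed(envelope: dict, key: bytes) -> dict:
    """Recipient-side decryption recovers the sender identity."""
    ciphertext = bytes.fromhex(envelope["ciphertext"])
    return json.loads(bytes(a ^ b for a, b in zip(ciphertext, key)))

envelope, key = seal("alice", "meet at 0900", "bob")
assert "alice" not in json.dumps(envelope)            # the server never sees the sender
assert open_sealed(envelope, key)["from"] == "alice"  # the recipient does
```

Note what even this idealized model does not hide: the recipient, message timing, and ciphertext size remain visible, which is exactly the residual metadata the cited research analyzes.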

The intelligence operator’s takeaway is that Signal materially improves content security and reduces certain metadata exposures. It does not eliminate the need for operational security measures. Depending on mission profile, those measures can include hardened endpoints, strict device handling, minimized identifier exposure, and network protections consistent with applicable law and policy.

Why Signal’s Trajectory Is Credible in the Quantum Transition

The Signal approach to the quantum transition reflects a credible engineering posture. Migrate early enough to blunt harvest now/decrypt later risk. Adopt hybrid designs to reduce reliance on one assumption. Extend post-quantum guarantees beyond the handshake into ongoing key evolution (Signal, 2024a; Signal, 2025). Alignment with NIST’s standardized direction for key establishment further supports long-term maintainability and ecosystem interoperability (NIST, 2024a; NIST, 2025). From an intelligence practitioner’s perspective, the central claim is not that Signal is unbreakable. The point is that Signal is engineered to constrain damage, recover after compromise, and anticipate strategic decryption threats. It is designed for a hostile environment that is moving toward a post-quantum reality. I will close by noting that Meta does not do any of this. Facebook Messenger and WhatsApp leave gaping holes in cybersecurity, because Meta’s focus is on monetizing the instant-messaging mechanism, not on unbreakable comms. Use them at your own risk.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Brigham, Eric, and Nicholas Hopper. 2023. “Poster: No Safety in Numbers: Traffic Analysis of Sealed Sender Groups in Signal.” arXiv preprint.
  • Cohn-Gordon, Katriel, Cas Cremers, Benjamin Dowling, Luke Garratt, and Douglas Stebila. 2017. “A Formal Security Analysis of the Signal Messaging Protocol.” Proceedings of the IEEE European Symposium on Security and Privacy.
  • Katsumata, Shota, et al. 2025. “X3DH, PQXDH to Fully Post Quantum with Deniable Ring.” Proceedings of the USENIX Security Symposium.
  • Marlinspike, Moxie, and Trevor Perrin. 2016. “The X3DH Key Agreement Protocol.” Signal Protocol Specification.
  • National Institute of Standards and Technology. 2024a. “NIST Releases First 3 Finalized Post Quantum Encryption Standards.” NIST News Release.
  • National Institute of Standards and Technology. 2024b. FIPS 203. “Module-Lattice-Based Key-Encapsulation Mechanism Standard (ML-KEM).” U.S. Department of Commerce.
  • National Institute of Standards and Technology. 2025. “Post Quantum Cryptography Standardization.” NIST Computer Security Resource Center.
  • Perrin, Trevor, and Moxie Marlinspike. 2025. “The Double Ratchet Algorithm.” Signal Protocol Specification.
  • Signal. 2024a. “Quantum Resistance and the Signal Protocol.” Signal Blog.
  • Signal. 2024b. “The PQXDH Key Agreement Protocol.” Signal Protocol Specification.
  • Signal. 2025. “Signal Protocol and Post Quantum Ratchets, SPQR.” Signal Blog.
  • Wired Staff. 2018. “Signal Has a Clever New Way to Shield Your Identity.” Wired Magazine.