A U.S. Attack on Iran, a Catastrophic Unforced Error

war, warfighter, Iran, U.S., intelligence, counterintelligence, espionage, counterespionage, C. Constantin Poindexter, CIA, NSA

Public and Congressional Support: The Decisive Constraint That Can Turn U.S. Military Dominance Over Iran Into Strategic Defeat

The United States retains overwhelming advantages in the material and operational prerequisites of high-end conventional warfare. In any prospective conflict with Iran, Washington can assume advantages in air and naval superiority, intelligence and surveillance coverage, precision strike capacity, suppression of enemy air defenses, long-distance logistics, and advanced cyber and electronic warfare. Yet those advantages do not automatically translate into strategic success. The decisive variable is not whether the United States can destroy targets faster than an adversary can replace them, but whether the United States can sustain the political mandate to keep fighting after the initial shock of combat wears off, the costs become visible, and the enemy adapts.

This is the core vulnerability of a discretionary war with Iran. Public support and congressional support are not merely background noise or messaging challenges. They are strategic enablers. When they are absent or brittle, they shape rules of engagement, constrain time horizons, narrow acceptable costs, and fracture coalition cohesion. In that environment, even tactically brilliant operations can fail to achieve the most important objectives because political will collapses sooner than the enemy’s capacity to resist. Vo Nguyen Giap articulated this logic explicitly. A belligerent can leave enemy forces partly intact if it can destroy the enemy’s will to remain in the war. (PBS, n.d.) That insight was operationalized against the United States in Vietnam, echoed in Afghanistan, and remains relevant to any prospective United States-Iran war.

The Strategic Center of Gravity: Legitimacy and Endurance

Clausewitz argued that war is a continuation of politics by other means. In American practice, the political character of war is inseparable from constitutional structure and democratic consent. A war that begins without clear congressional authorization, or that proceeds amid broad public skepticism, can win battles while steadily losing its domestic foundation. The War Powers Resolution codifies Congress’s position that the President may introduce U.S. forces into hostilities only pursuant to a declaration of war, specific statutory authorization, or a national emergency created by an attack on the United States or its forces. (50 U.S.C. § 1541, 1973) In a discretionary strike campaign that grows into sustained hostilities, the gap between executive action and legislative consent becomes a recurring legitimacy crisis rather than a one-time procedural dispute.

Recent reporting underscores that this institutional fault line is not theoretical. Reuters reported that the U.S. Senate rejected a bid to curb presidential Iran war powers, reflecting a live, contested debate over authority and oversight in potential Iran hostilities. (Reuters, 2025) That debate matters operationally because contested legitimacy does not remain in Washington. It affects allied basing decisions, overflight permissions, intelligence sharing, escalation thresholds, and the credibility of U.S. signals to both adversaries and partners. A campaign that looks unilateral, politically improvised, or domestically unpopular becomes harder to sustain and easier for Iran and its proxy network to frame as illegitimate aggression.

Material Superiority Versus Political Fragility

From a military and intelligence perspective, the United States can plausibly execute all of the classic operational prerequisites listed above. But those capabilities do not eliminate the central political question: what is the concrete objective, and how long will the American public accept the costs required to achieve it?

Public sentiment data indicates a serious constraint. A University of Maryland Critical Issues Poll found only 21 percent favor the United States initiating an attack on Iran, with 49 percent opposed and 30 percent unsure. (University of Maryland Critical Issues Poll, 2026) A YouGov report covering an Economist YouGov poll likewise found Americans more likely to oppose than to support using military force to attack Iran, with 49 percent opposing and 27 percent supporting, and with significant partisan and independent resistance. (YouGov, 2026) Meanwhile, an AP NORC poll found that while many Americans view Iran as an enemy and express concern about Iran’s nuclear program, they have low trust in presidential judgment on the use of military force, with only about three in ten expressing high trust and more than half expressing little or no trust. (Associated Press NORC, 2026)

These findings carry serious strategic implications. They suggest that domestic consent is not merely divided. It is structurally thin, with a large, uncertain middle and a relatively small affirmative mandate for initiating war. Low confidence in the decision maker’s judgment means that early setbacks or civilian casualties can rapidly convert uncertainty into opposition. Further, thin consent invites legislative confrontation, and legislative confrontation invites operational constraints. This is exactly the kind of environment in which an adversary designs a strategy of political attrition rather than symmetrical military competition.

Vietnam: Giap’s Theory of Victory Was Political

The claim that “North Vietnam did not win the Vietnam War” can be true in a narrow kinetic sense. The United States inflicted vast battlefield losses and dominated many tactical engagements. Yet North Vietnam and the Viet Cong were able to outlast the United States by targeting the political will that sustained American participation. Giap described the objective as breaking “the American will to remain in the war,” using operations intended to force de-escalation and reshape the political calculus in Washington. (PBS, n.d.) The point is not that one event alone decided the outcome. The point is that the adversary’s theory of victory treated American domestic endurance as the center of gravity. Once that center weakened, America’s material advantages could not convert into a stable political settlement on acceptable terms.

For an Iran scenario, the parallel is not an exact replay of Vietnam’s terrain or insurgency structure. The parallel is one of strategic method. Iran does not need to win a conventional air-sea contest. It needs to ensure that the United States does not achieve its most important objectives at a politically acceptable cost. If Iran can force Washington into a cycle of escalation and retaliation, or can trigger regional proxy pressure that steadily raises the price of engagement, then the war becomes a contest of domestic patience more than a contest of platforms.

Afghanistan and the Logic of “Time”

The Afghanistan experience reinforces the same strategic logic through a different mode of war. A saying widely attributed to Taliban fighters captures the asymmetry of time horizons: “You have the watches, we have the time.” (Maclean’s, 2017) The exact provenance of the phrase is less important than its strategic meaning. The current administration is about to fall into the same trap. Democracies fight under time constraints produced by elections, news cycles, budget politics, and public casualty sensitivity. Insurgent and revolutionary actors often fight under generational horizons, with lower sensitivity to near term losses and a stronger tolerance for prolonged hardship.

Iran’s leadership and its proxy network have repeatedly demonstrated a long-horizon approach to regional strategy. In a conflict, Iran can employ calibrated escalation through proxies, maritime harassment, missile and drone pressure, and political warfare aimed at eroding coalition cohesion (a “coalition” of states that have already publicly objected to U.S. war planning). The objective is not necessarily to defeat U.S. forces in the field. It is to make the conflict feel indefinite, morally ambiguous, and strategically distracting, which are precisely the conditions that drain public support in the United States.

“Shock and Awe” Does Not Solve the Political Problem

Advocates of rapid strike campaigns often argue that overwhelming early force can preempt political attrition by ending the conflict quickly. History offers caution. Initial public support can be high at the onset of a war (though that is plainly not the case today), but it can erode sharply as the war’s duration and costs expand, particularly if the rationale becomes contested. The Iraq example is instructive: Gallup reported 72 percent support for the war against Iraq in late March 2003. (Gallup, 2003) Yet Gallup later documented substantial erosion in perceived worth and support over time as realities on the ground diverged from initial expectations. (Gallup, 2006) The Brookings analysis of early Iraq war opinion similarly underscores the rally effect and its limits. (Kull, Ramsay, and Lewis, 2003)

For Iran, the political risk is heightened because current polling suggests the United States would begin without anything like the 2003 level of public backing. (University of Maryland Critical Issues Poll, 2026) (YouGov, 2026) (Associated Press NORC, 2026) Without a broad initial mandate, the usual pattern reverses: instead of a rally effect cushioning the war against early shocks, early shocks can collapse a narrow coalition of support. Moreover, Iran is structurally capable of generating early shocks through proxy responses and regional disruption, meaning that the political challenge may begin immediately, not after months or years.

Congress as a Strategic Actor, Not a Background Variable

In a system where Congress controls funding and has constitutional war powers, the legislative branch becomes a de facto strategic actor. When Congress is divided, when authorization is ambiguous, or when the public is skeptical, Congress can constrain the war through funding restrictions, reporting requirements, and political signaling that affects allied behavior. Reuters reporting on war powers debates around Iran illustrates that these conflicts are not hypothetical. (Reuters, 2025) Even the President’s most reliable loyalists are likely to run out of patience with another endless foray, driven largely by constituent pressure rather than any disloyalty.

This matters because strategic clarity requires durable political consensus. If objectives are unclear or expand, congressional opposition becomes more likely and more intense. Further, Iran can exploit visible domestic division through information operations, propaganda, and calibrated escalation intended to polarize U.S. politics. In that sense, a weak domestic mandate is not merely a constraint on U.S. freedom of action. It becomes a targetable vulnerability. The North Vietnamese knew it. The Afghans knew it, and the more sober members of the Department of Defense know it.

A Missing Ingredient: Defined, Credible Political Objectives

Even if the United States can strike nuclear facilities, degrade air defenses, and disrupt command networks, the strategic question remains what “winning” means and what settlement conditions are realistically attainable. If the objective is limited, such as delaying nuclear capabilities, the question becomes whether limited objectives justify the costs and risks of regional escalation. If the objective expands to regime change, the problem becomes far harder because military destruction does not automatically produce political legitimacy, stable governance, or a non-hostile successor regime. Here, the final criterion is decisive: post-conflict planning, a wickedly difficult task that the United States has botched over and over again. History shows that military victory without a stabilization strategy yields strategic failure, and the public tends to punish wars that feel open-ended, morally muddled, or poorly planned.

In the Iran case, this risk is amplified because a strike campaign can trigger proxy retaliation in multiple theaters, raise energy and shipping risks, and produce unpredictable political reverberations, all of which can be framed domestically as an optional war of choice rather than a necessary act of self-defense. When a war’s necessity is contested, public support becomes the decisive front.

Dominance in Combat Power Does Not Guarantee Strategic Success

The United States may indeed be dominant across many of the operational categories that matter for battlefield performance. Yet wars are not won solely by platform superiority. They are won by aligning military means with politically sustainable ends. Current public opinion suggests a narrow and fragile mandate for initiating an attack on Iran, combined with low confidence in executive judgment about the use of force. (University of Maryland Critical Issues Poll, 2026) (YouGov, 2026) (Associated Press NORC, 2026) In that environment, congressional contention over authorization and war powers becomes a predictable friction point, not an occasional procedural dispute. (50 U.S.C. § 1541, 1973) (Reuters, 2025) Iran and its proxy network do not need to defeat the United States conventionally to succeed strategically. They need to prolong, complicate, and regionalize the conflict until the United States loses the will and domestic legitimacy to continue, echoing Giap’s theory of victory in Vietnam and the time horizon logic captured by the Afghanistan aphorism. (PBS, n.d.) (Maclean’s, 2017)

A United States attack on Iran will NOT end well. We’ll have tactical dominance paired with a complete strategic disaster. Without sustained public and congressional support, the United States will fail to achieve its most important objectives (if the Administration can even articulate them) at an acceptable cost. The venture will not end with a clear victory, but with political exhaustion and a forced search for exit ramps. That is not a political critique. It is a strategic assessment rooted in how democratic states actually choose war and wage war. My call? Don’t f. do it. Exaggerations about being “days from completing a nuclear weapon,” coupled with no clear objective or endgame, make for a movie that we’ve seen before.

C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Associated Press NORC Center for Public Affairs Research. 2026. “Most Americans see Iran as enemy but doubt Trump on military force: poll.” Associated Press.
  • 50 U.S.C. § 1541. 1973. War Powers Resolution, “Purpose and policy.” Legal Information Institute, Cornell Law School.
  • Gallup. 2003. “Seventy Two Percent of Americans Support War Against Iraq.” Gallup News Service, March 24, 2003.
  • Gallup. 2006. “Three Years of War Have Eroded Public Support.” Gallup News Service, March 17, 2006.
  • Kull, Steven, Clay Ramsay, and Evan Lewis. 2003. “Rally Round the Flag: Opinion in the United States before and after the Iraq War.” Brookings Institution.
  • Maclean’s. 2017. “Fighting in Afghanistan: ‘You have the watches. We have the time’.” September 2, 2017.
  • PBS. n.d. “Peoples Century: Guerrilla Wars: Vo Nguyen Giap Transcript.” Public Broadcasting Service.
  • Reuters. 2025. “US Senate rejects bid to curb Trump’s Iran war powers.” June 27, 2025.
  • University of Maryland Critical Issues Poll. 2026. “Do Americans Favor Attacking Iran Under the Current Circumstances? The Latest Critical Issues Poll Findings.”
  • YouGov. 2026. “Few Americans support U.S. military action against Iran, but a majority think it’s likely.” Economist YouGov poll, February 20 to 23, 2026.

Operation Absolute Resolve, Claude and the Weaponization of A.I.

intelligence, counterintelligence, national defence, war, weaponization, artificial intelligence, Anthropic, Claude, C. Constantin Poindexter

“Anthropic appears to be the ‘canary in the coal mine.’ They are the first in public view to be used in a classified operation, and they are the first to be pushed back against.”

The convergence of artificial intelligence and military strategy has been a subject of theoretical speculation for quite some time. The operational reality of that convergence is now being written in real time. The January 2026 mission to capture former Venezuelan President Nicolás Maduro, codenamed “Operation Absolute Resolve,” stands as the first definitive deployment of Anthropic’s AI model, Claude, within a classified U.S. military operation (Reuters, 2026). This event marks a pivotal moment in the defense sector, moving AI from the realm of administrative support to the front lines of kinetic warfare. By examining the mechanics of Claude’s integration through Palantir, the friction between Anthropic’s safety-first philosophy and the Pentagon’s lethality requirements, and the broader geopolitical implications for AI development, I argue that this operation represents not merely a tactical success but the irreversible weaponization of Large Language Models (LLMs) in modern conflict.

The deployment of Claude in Operation Absolute Resolve was facilitated through a complex network of public and private partnerships. The operation itself was a conventional military endeavor, involving aerial bombardment of multiple sites in Caracas and the deployment of special forces to secure the capture of Maduro and his wife (Reuters, 2026). However, the intelligence and targeting data that informed these decisions were processed and synthesized by Claude, an LLM designed initially for civilian applications. This integration was achieved via Anthropic’s partnership with Palantir Technologies, a data analytics company whose software is a staple in the Defense Department’s infrastructure (The Wall Street Journal, 2026). Palantir’s role was critical, acting as the bridge between the proprietary security environments of the military and the capabilities of commercial AI. This infrastructure allowed for the ingestion of classified intelligence, the rapid analysis of vast datasets, and the generation of actionable strategic recommendations. Claude effectively functioned as a force multiplier for human command.

The significance of Claude’s role in this operation cannot be overstated. It represents a shift in the utility of AI within the military. While earlier iterations of AI in the Pentagon were often relegated to “unclassified” tasks such as summarizing documents or generating routine reports, the use of Claude in a classified, kinetic mission indicates a maturation of the technology (The Wall Street Journal, 2026). The sources suggest that the model was capable of processing the nuanced geopolitical and tactical data required to support a complex operation of this magnitude. This capability suggests that the Pentagon is beginning to utilize LLMs not just as assistants, but as analytical engines capable of processing the “fog of war” (Kania, 2023). The operational success of the mission implicitly validates the Pentagon’s investment in frontier AI, suggesting that the technology is now ready for high-stakes decision-making environments where the margin for error is measured in lives and geopolitical stability.

Despite the operational success, the deployment of Claude exposes a fundamental philosophical conflict within the AI industry and between the AI industry and the U.S. government. Anthropic was founded with a specific mission: to build AI that is “helpful, honest, and harmless” (Anthropic, 2024). This philosophy is codified in their usage guidelines, which explicitly prohibit the use of Claude to “facilitate violence, develop weapons or conduct surveillance” (The Wall Street Journal, 2026). The irony of using a model designed for safety to plan and execute a military operation that involved bombing and the capture of a head of state is stark. This contradiction highlights the tension between the “safety-first” approach championed by Anthropic and the “kill chain” mentality required by the Pentagon. For a company that has built its brand on rigorous safety testing and the prevention of AI harm, being used in a military operation appears to be a double-edged sword. It proves the utility of their model, yet it forces them to participate in the very violence they have spent years trying to mitigate.

This conflict has escalated into a broader strategic battle between Anthropic and the Trump administration. The administration has pursued a low-regulation AI strategy, aiming to rapidly deploy technology to maintain global competitive advantage. In contrast, Anthropic has been vocal about the risks of AI in autonomous lethal operations and domestic surveillance, pushing for greater regulation and guardrails (The Wall Street Journal, 2026). The friction came to a head in January 2026, when Defense Secretary Pete Hegseth stated that the Department of Defense would not “employ AI models that won’t allow you to fight wars” (The Wall Street Journal, 2026). This comment was widely interpreted as a direct rebuke of Anthropic, signaling a preference for models that prioritize speed and lethality over safety. The Pentagon’s Chief Spokesman, Sean Parnell, echoed this sentiment, emphasizing that the nation requires partners willing to help warfighters “win in any fight” (The Wall Street Journal, 2026). For the Trump administration, Anthropic’s insistence on safety protocols was viewed as an impediment to the efficient execution of military strategy.

The potential fallout from this ideological clash is significant, particularly regarding the $200 million contract awarded to Anthropic last summer. Sources indicate that the administration is considering canceling or restructuring this contract due to Anthropic’s reluctance to cede control over AI deployment to the military (The Wall Street Journal, 2026). The contract was awarded as a pilot program to test the integration of frontier AI into the Defense Department, but the resulting friction suggests that the Pentagon is wary of models that might impose constraints on their operational flexibility. This situation places Anthropic in a precarious position. If they adhere strictly to their safety guidelines, they risk losing their most valuable government contracts to competitors who are more willing to accommodate military needs. If they compromise their values to secure the deal, they risk alienating their core customer base and undermining their brand identity as the “safe” alternative to OpenAI and Google (Kaplan, 2024).

The weaponization of AI in Operation Absolute Resolve also highlights the growing competitive landscape among AI developers. While Anthropic was ostensibly the first to be used in classified operations, competitors like OpenAI and Google have already established a foothold in the military sector. Google’s Gemini and OpenAI’s ChatGPT are already deployed on platforms used by millions of military personnel for analysis and research (The Wall Street Journal, 2026). The deployment of Claude in the Maduro mission positions Anthropic as a contender in this emerging arms race, but it also underscores the speed at which the military is adopting these technologies. The fact that other tools may have been used for unclassified tasks alongside Claude suggests that the military is conducting a wide-scale evaluation of available AI capabilities (The Wall Street Journal, 2026). For Anthropic, the pressure is on to demonstrate that their model offers unique advantages that justify their safety constraints in a combat environment.

The operation sheds light on the broader trend of AI integration into the “kill chain.” The military is increasingly interested in using AI for everything from controlling autonomous drones to optimizing supply chains and predicting enemy movements. The use of Claude in a high-profile operation like the capture of Maduro serves as a proof-of-concept for these more advanced applications. It demonstrates that LLMs can handle the complex, multi-variable problems inherent in modern warfare. However, it also raises difficult questions about accountability. If Claude were to make a mistake in targeting that resulted in civilian casualties or mission failure, who would be held responsible? The military or the AI company? This question is central to the debate over the weaponization of AI and highlights the need for clear protocols and liability frameworks as these systems become more integrated into military operations (Scharre, 2018).

The operational details of the Maduro mission also suggest a new level of integration between data analytics and kinetic action. The bombing of several sites in Caracas indicates a coordinated effort to eliminate potential escape routes and secure the perimeter (Reuters, 2026). The use of AI in this phase of the operation implies that the targeting data was processed rapidly and accurately, allowing for a synchronized military response. This level of coordination would have been difficult to achieve without advanced data analytics and AI-driven decision support systems. The success of this mission can thus be partially attributed to the technological edge provided by the Claude and Palantir ecosystem. This success will likely encourage further integration and deployment of AI in warfighting, creating a feedback loop where operational victories drive further technological adoption (Belfiore, 2022).

The geopolitical implications extend beyond the immediate success of the Maduro capture. As other nations observe the U.S. military’s effective use of AI in a real-world conflict, they are likely to accelerate their own AI development programs. The “Absolute Resolve” mission serves as a demonstration of power, not just in terms of military force, but in terms of technological superiority. This will most assuredly trigger an arms race in AI. Nations and non-state actors will compete not just on the size of their armed forces, but on the sophistication of their AI models. For the United States, maintaining this technological edge is a strategic imperative. The successful deployment of Claude is a step in that direction, but it is also a shrill alarm about the risks of an AI arms race. The potential for miscalculation, warfighting error, and the erosion of ethical norms in warfare is high (Yuan et al., 2023).

Operation Absolute Resolve represents a transformative moment in the history of both warfare and artificial intelligence. The deployment of Claude in the capture of Nicolás Maduro demonstrates the growing capability of LLMs to support complex military operations. It also highlights the tension between safety-focused AI development and the demands of national security. While the mission was a tactical success, it has exposed the friction between Anthropic’s philosophical commitment to “no use in violence” and the Department of Defense’s need for lethality. As the Pentagon reviews its contracts and the competitive landscape of AI continues to evolve, the lessons learned from “Absolute Resolve” will in no small part shape the future of AI in the military. The weaponization of AI is no longer theoretical. It is real, and it is redefining the nature of conflict. The question that remains is whether the military will continue to prioritize speed and capability over safety and ethical considerations, or whether it will find a way to integrate the two to create a new paradigm of intelligent warfare.

C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

Bibliography

  • Anthropic. 2024. “Anthropic’s Mission and Approach to AI Safety.” Anthropic Blog. Accessed February 17, 2026. https://www.anthropic.com/index/anthropics-mission-and-approach-to-ai-safety.
  • Belfiore, E. 2022. Technological Warfare: The Future of AI in Military Conflict. Oxford University Press.
  • Kania, J. 2023. “The Fog of War and the Rise of Algorithmic Command.” Journal of Military Strategy 15 (3): 45-62.
  • Kaplan, A. 2024. “The Safety Paradox: How AI Companies Balance Ethics and Growth.” MIT Technology Review 127 (1): 22-31.
  • Reuters. 2026. “U.S. military used Anthropic’s Claude AI in operation to capture Maduro.” February 5, 2026.
  • Scharre, P. 2018. Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company.
  • The Wall Street Journal. 2026. “Pentagon’s Use of Claude in Maduro Capture Raises Questions About AI Safety.” February 3, 2026.
  • Yuan, K., et al. 2023. “Geopolitical Competition in Artificial Intelligence: A Framework for Analysis.” International Security 47 (4): 1-32.