Disinformation as “Insurgency”: An American Constitutional View

disinformation, misinformation, espionage, counterespionage, counterintelligence, spy, subversion, psyops

I read with a great deal of interest Jacob Ware’s article “To fight disinformation, treat it as an insurgency,” which appeared recently in The Strategist, an Australian Strategic Policy Institute publication. I have long held my own ideas about disinformation, more specifically “inoculation” as a countermeasure, recommending instruction from a very young age, much as grade schools do in the Baltic states. Ware’s article tackles the subject as a ‘control social media’ issue. I do not disagree with the importance of media responsibility for moderating certain types of content, and Ware appropriately identifies the risk of “overlook[ing] the important role of digital consumers,” but he doubles down on content control. The article suggests that social media companies, as central nodes in the information ecosystem, must be pressured into moderating content more aggressively, rather than emphasizing that digital consumers themselves be hardened against manipulation (“inoculation,” as I have written in previous scholarship). Control, compelling in its framing, raises some not insignificant constitutional issues in the context of the United States, particularly with regard to the First Amendment’s protections of speech, association, and press.

Framing Disinformation as Insurgency: Strategic and Legal Ramifications
Ware’s analogy between insurgencies and disinformation campaigns conveys the existential threat that hostile narratives, particularly those advanced by foreign actors, pose to democratic stability. Comparing disinformation actors to terrorist insurgents invites the application of military-style containment and suppression tactics: perhaps even the “cyber-kinetic” removal of bad actors (i.e., content moderation and bans), the targeting of ideological hubs (e.g., online communities, networks, influencers), and, critically, the enforcement of norms through government-backed initiatives.

In the U.S. legal context, much of this may be a non-starter. Insurgents and terrorists operate outside the protection of constitutional law, whereas digital speakers, however misinformed or malicious, are presumptively entitled to the protections of the First Amendment. The Constitution does not permit the government to silence unpopular, false or even offensive ideas unless they meet strict criteria for incitement, true threats, or defamation. This legal boundary sharply limits the government’s ability to treat digital speech as a national security threat without triggering robust judicial scrutiny, even if that information is objectively dangerous disinformation.

Section 230 and Platform Immunity: The Epicenter of the Debate
The article criticizes Section 230 of the Communications Decency Act (1996), which shields internet platforms from liability for user-generated content. This statute is often viewed as the legal linchpin that enabled the growth of the modern internet, on the whole a pretty positive thing. Ware argues that these protections prevent platforms from being held accountable and serve as a digital safe haven for malign actors. From a policy standpoint, this critique doesn’t hold much merit. Critics across the political spectrum argue that Section 230 incentivizes platforms to prioritize engagement and profit over truth and social stability; however, repealing or modifying Section 230 would not directly authorize government censorship. It WOULD expose platforms to civil liability for failing to moderate. Any new federal statute that imposes content-based restrictions or penalties would still need to satisfy every prong of the constitutional free speech tests under modern U.S. jurisprudence. The courts have routinely ruled that platforms are private entities with their own First Amendment rights; therefore, even in the absence of Section 230, the government could not compel social media companies to carry or remove specific content unless doing so satisfies narrow constitutional exceptions.

Free Speech: A Distinctly American Commitment
A central theme in the article is frustration that American-style free speech doctrines allow dangerous ideas to circulate freely online. Ware writes from an Australian perspective. The article praises the European Union’s Digital Services Act and Australia’s eSafety initiatives as exemplary regulatory models. Under those statutory regimes, platforms face stiff penalties for failing to suppress harmful content. These approaches may appear pragmatic, but they clearly represent a sharp divergence from U.S. legal culture.

The U.S. Constitution’s First Amendment prohibits government abridgement of speech, including offensive, deceptive, or politically inconvenient speech. In United States v. Alvarez (2012), the Supreme Court struck down a federal law criminalizing false claims about military honors, holding that even deliberate lies are constitutionally protected unless they cause specific, legally cognizable harm. Further, in Brandenburg v. Ohio (1969), the Court established that even advocacy of illegal action is protected unless it is directed to inciting imminent lawless action AND is likely to produce such action. So, even under the noble pretext of national defense, any proposal that seeks to regulate speech directly must be reconciled with this robust jurisprudence. Foreign governments might be able to implement speech controls without constitutional constraints. We cannot. The U.S. must address disinformation through less intrusive, constitutionally sound means.

Counterinsurgency in a Civilian Space: Policing Thought and Risking Overreach
Ware’s counterinsurgency metaphor extends beyond moderation into behavioral engineering, winning the “hearts and minds” of digital citizens. This vision includes public education, civilian fact-checking brigades, and a sort of civic hygiene campaign against harmful content. Although such measures may be effective as psychological operations (PSYOPs), the distinction between persuasion and indoctrination must be carefully managed in a free society.

There is legitimate concern that state-sponsored resilience campaigns could slip into propaganda or viewpoint discrimination, especially when political actors define what constitutes “disinformation.” The inconvenient truth is that the label of “misinformation” has been applied inconsistently, sometimes suppressing legitimate dissent or valid minority viewpoints. The First Amendment’s commitment to the “marketplace of ideas” theory assumes that truth ultimately prevails in open debate, not through coercive narrative management.

There is another danger. Using the tools of counterinsurgency, even rhetorically, raises alarms about militarizing civil discourse and legitimizing authoritarian measures under the guise of “national security.” In Boumediene v. Bush (2008), the Court warned against extending military logic to civilian legal systems. Applying wartime strategy to cultural or political disputes in the civilian cyber domain risks undermining the very liberal values the state claims to protect.

An Appropriate Role for Government
Despite constitutional guardrails, the federal government is not powerless. Several constitutionally sound measures remain available. These approaches avoid entangling the government in the perilous business of adjudicating truth while still defending the information ecosystem:

Transparency Requirements – Congress can require social media companies to disclose their moderation policies, algorithmic preferences, and foreign funding sources without dictating content outcomes.

Education Initiatives – Civics education and media literacy programs are constitutionally permissible and could help inoculate the public against disinformation without coercion.

Voluntary Partnerships – The government can engage with platforms voluntarily, offering intelligence or warnings about malign foreign influence without mandating suppression.

Targeting Foreign Actors – The government can lawfully sanction, indict, or expel foreign individuals and entities engaged in coordinated disinformation campaigns under laws governing espionage, foreign lobbying, or election interference.

Ware’s comparison of disinformation to insurgency is strategically evocative, but its prescriptive implications clash with foundational American principles. The First Amendment might seem inconvenient, but it was designed to prevent precisely the kind of overreach that counterinsurgency measures invite. Democracies do not defeat authoritarianism by adopting its tools of censorship and narrative control. If the United States is to confront the threats of disinformation effectively, it must do so in a way that affirms rather than undermines what makes us distinctively American. Educating, not censoring; persuading, not suppressing; and building durable civic institutions capable of withstanding the torrent of falsehoods without succumbing to the lure of government-controlled truth are imperative. Freedom remains the best antidote to tyranny ONLY if we remain vigilant in its defense.

~ C. Constantin Poindexter,

  • Master of Arts in Intelligence
  • Graduate Certificate in Counterintelligence
  • Undergraduate Certificate in Counterintelligence
  • Former I.C. Cleared Contractor

The DNI Report: What Is Missing?

national security, espionage, counterespionage, counterintelligence, c. constantin poindexter

It should come as no surprise in the current polarized political climate that certain threats to U.S. national security are omitted, some are overly emphasized, and others are included but not given the thorough review they deserve. Ironically (or perhaps not so ironically), the omitted and under-examined threats are the very ones exacerbated by current Administration policies. The current DNI report [unclassified version] contains no surprises; however, there are some perils that decidedly lack the attention they deserve. I’ll be brief.

The weaponization of artificial intelligence against the U.S. population poses an existential threat to the nation, one for which we are not appropriately prepared. The assessment identifies China’s AI capabilities in surveillance and disinformation, but underestimates the dangers posed by AI-generated disinformation and psychological operations targeting U.S. elections, civil cohesion, and trust in institutions. Synthetic media (deepfakes) at scale are unaddressed and present a very real menace. Foreign intelligence entities (FIEs) that excel in producing these fakes could fabricate major geopolitical incidents and/or falsely incriminate U.S. leaders. This is a “real-world crisis” scenario. Further, in our rush to build up our own AI capability, models trained on U.S. data risk being turned back against us in warfare, negotiation, or economic manipulation. The DNI report offers no significant discussion of how adversaries might use advanced LLMs and multi-modal AI to undermine decision-making at every level of our communities, from individual voters and first responders to senior policymakers.

There is a significant danger of the collapse of U.S. domestic infrastructure due to political paralysis and sabotage. The DNI report identifies cyber threats to infrastructure (e.g., water, healthcare); however, it understates the systemic vulnerability of U.S. infrastructure to non-digital threats such as aged and neglected critical systems (e.g., bridges, power grids, water systems) and insider sabotage by ideologically motivated actors. White supremacist factions and extremists like Timothy McVeigh come immediately to mind. Political paralysis and corruption that prevent modernization or resiliency efforts are the final ugly nail in the proverbial coffin. The loss of national security expertise through wholesale firings and layoffs, and the sidelining of individuals with decades of tradecraft and professional expertise based on party adherence, are very real threats. The assessment fails to meaningfully consider how polarization and our legislature’s unwillingness to work together are making the U.S. increasingly incapable of protecting or restoring its critical infrastructure after an attack or natural disaster. Don’t think for a moment that Chinese, Russian, Iranian, and North Korean FIEs fail to perceive these vulnerabilities, which they can exploit.

Espionage, subversion, and other nefarious covert operations against the U.S. and its interests via foreign investment and big-corporate influence are absent. There is really no excuse to omit identification and discussion of how “big money” has affected national security at every level; even a layperson can see it occurring in plain view. China’s cyber espionage and technology theft are addressed in depth, but why are foreign ownership of and influence in U.S. strategic sectors, including agriculture, pharmaceuticals, real estate near sensitive military sites, and AI startups, left alone? The use of shell corporations and fronting arrangements to embed operatives and proxies within sensitive sectors and policy circles is a serious threat as well. Strategic acquisitions of distressed U.S. companies post-COVID by entities linked to FIEs are mechanisms and vehicles for subversion, espionage, and sabotage. A brief look at our own history since the end of WWII reveals how effective and insidious these methods are, perhaps presenting a greater danger than cyber-attacks because they provide our adversaries with deep access, deniability, and strategic gain that will serve them well for decades. Haphazard, ‘bull in a china shop’ cancellations of funding, paired with broken inter-agency oversight, are extremely problematic.

Do better.