A New Frontier for Human Intelligence in the Age of A.I.

HUMINT, intelligence, counterintelligence, espionage, counterespionage, spy, C. Constantin Poindexter, República Dominicana, España, DNI, NSA, CIA

The report The Digital Case Officer: Reimagining Espionage with Artificial Intelligence represents one of the most ambitious contemporary reflections on the convergence of human intelligence (HUMINT) and artificial intelligence (AI). Published by the Special Competitive Studies Project in 2025, the document posits that the U.S. intelligence community stands before a paradigm shift comparable to the arrival of the Internet in the 1990s. Its central thesis holds that AI, particularly generative, multimodal, and agentic models, can revolutionize the cycle of recruiting, developing, and handling human sources, inaugurating a kind of “fourth generation of espionage” in which humans and machines act as a single operational team (Special Competitive Studies Project 2025, 4–6).

Reading the report reveals a deep understanding of the challenges the digital environment imposes on the practice of intelligence. The text is right to recognize that the essential value of HUMINT lies not in collecting observable data, a task at which technical systems already outperform humans, but in eliciting actors' intent, that is, understanding the motivations, perceptions, and decisions that only a human source can reveal (Special Competitive Studies Project 2025, 13–14). This ontological distinction between action and intention preserves the relevance of the human agent in the algorithmic era. The report also accurately identifies the phenomenon of ubiquitous technical surveillance, a reality that threatens to erase the anonymity on which traditional espionage was built. In doing so, the authors convey the urgency of adapting the profession to an environment in which every digital trace can betray the identity of an intelligence officer.

Conceptual strengths: the strategic integration of AI and HUMINT

One of the document's greatest merits lies in its ability to imagine concrete use cases for AI in HUMINT operations. Through the narrative of the fictional system MARA, the report illustrates how a digital agent could analyze large volumes of open-source and classified data to identify recruitment candidates, make initial contact through synthetic personas, and sustain persuasive dialogues with hundreds of potential sources in parallel (Special Competitive Studies Project 2025, 8–9). This exercise in technological foresight serves a dual purpose: on the one hand, it conveys the magnitude of the revolution that generative AI will bring; on the other, it gives institutional planners a pragmatic guide to the capabilities and operational risks they must anticipate.

The text is also right to emphasize the principle of Meaningful Human Control (MHC), borrowed from the ethical debates over autonomous weapons, as the normative foundation for the responsible use of AI in intelligence (Special Competitive Studies Project 2025, 24–25). Under this principle, any decision that entails human risk, such as recruitment, operational tasking, or the handling of a source, must remain subject to the oversight and accountability of an officer. In this way, the report balances technological enthusiasm with a clear defense of human moral agency.

The work likewise excels in its analysis of the international competitive landscape. In Appendix A, the SCSP details how powers such as China and Russia are already experimenting with generative AI to optimize their influence, recruitment, and counterintelligence operations (Special Competitive Studies Project 2025, 34–35). The geostrategic diagnosis is convincing: America's adversaries have understood that AI not only expands surveillance capacity but redefines the very structure of competition among intelligence services. Technological passivity would therefore amount to obsolescence.

Finally, the report is right to consider the psychological dimension of digital espionage. It recognizes that, despite the power of automation, trust, empathy, and emotional management remain exclusively human attributes. The case of the asset who needs a personal relationship with his officer to sustain his commitment to a dangerous mission, and who might feel betrayed upon discovering he had been interacting with a machine, demonstrates an ethical sensitivity rarely found in technical intelligence documents (Special Competitive Studies Project 2025, 17–18).

Conceptual and methodological weaknesses

Despite its analytical sophistication, the report presents several limitations that must be noted with academic rigor. Four are especially relevant: one concerning the ontological scope of agentic AI, another concerning the instrumental ethics of emotional manipulation, a third concerning the epistemological reliability of AI as an operational agent, and a fourth concerning the absence of political analysis of interagency governance.

Conceptual ambiguity of the digital case officer

The document defines the Digital Case Officer as an agentic system capable of planning and executing recruitment tasks with minimal human intervention. It does not, however, offer a precise operational definition of agency in the intelligence context. The notion of autonomy is conflated with that of advanced automation: an algorithm that executes dialogue sequences or identifies patterns of vulnerability is not, in the philosophical sense, a moral agent or an autonomous decision-maker. Authors such as Floridi (2023) and Gunkel (2024) warn that attributing agency to algorithmic systems can create illusions of displaced responsibility, in which technical errors are read as the decisions of a nonexistent entity. The report partly falls into this technological anthropomorphism, which weakens its theoretical foundation for human control and ethical responsibility. A reformulation should distinguish between functional autonomy, understood as the capacity to operate without immediate supervision, and decisional autonomy, reserving the latter exclusively for human beings.

The ethics of emotional manipulation

The report justifies the use of affective computing and conversational models capable of detecting and responding to human emotions in order to strengthen the digital case officer's simulated empathy (Special Competitive Studies Project 2025, 15–16). While it acknowledges the risks of manipulation, it suggests the problem can be mitigated through ethical red lines and adequate oversight. That solution is insufficient. Moral psychology and the ethics of persuasion, from Kant to Habermas, hold that simulating affection for instrumental ends constitutes a form of deception that instrumentalizes human dignity. Even if the MHC principle were respected, creating false emotional bonds through algorithms erodes trust, the very foundation of the relationship between officer and source. An ethics of digital espionage should explicitly incorporate deontological limits prohibiting affective simulation for coercive purposes or deep psychological manipulation.

Epistemological reliability and AI bias

Another underestimated problem is the epistemological reliability of generative models as recruitment tools. The report acknowledges the existence of algorithmic black boxes that make it difficult to explain why the AI selects a given target (Special Competitive Studies Project 2025, 23–24), but it does not develop the operational implications of that opacity. In intelligence, traceability and source validation are pillars of the analytic process. If the system cannot justify why it deems an individual recruitable, for instance, if it misreads an ironic gesture on social media as dissent, the risk of false positives is immense. Moreover, language models are trained on data that reflect cultural, racial, or ideological biases. In the HUMINT context, such biases could lead to the selective targeting of innocent groups or individuals. The report should have gone deeper into the algorithmic auditing and bias-control mechanisms that would guarantee a verifiable epistemology for operational AI.

Gaps in interagency governance

The fourth weakness lies in the insufficient political problematization of the governance framework. Although the report proposes oversight measures, audits, designated human officials, and reporting to Congress, it does not examine the bureaucratic and jurisdictional tensions that have historically hindered cooperation among agencies such as the CIA, the FBI, and the NSA. The suggestion of offering “HUMINT as a Service” to other agencies is innovative, but there is no analysis of how conflicts over authority, control of data, or legal liability for operational errors would be resolved (Special Competitive Studies Project 2025, 29–30). Nor does it consider the role of foreign allies in the sharing of sensitive technologies. In a context of growing transatlantic distrust and cyber surveillance, these omissions are significant. Any artificial intelligence framework applied to HUMINT must incorporate a robust institutional analysis of how to preserve accountability within a community characterized by compartmentation and secrecy.

The psychological impact on human officers

An additional weakness, barely hinted at, is the lack of attention to the psychological impact of human-machine hybridization on case officers themselves. The report briefly alludes to the psychological weight of espionage in an environment of total transparency (Special Competitive Studies Project 2025, 17–18), but it does not analyze how operational dependence on algorithms may affect an officer's professional identity, morale, or ethical judgment. Recent studies in neuroergonomics and occupational psychology show that over-automation reduces confidence in one's own judgment and fosters a passive delegation of moral responsibility (Cummings 2024; Krupnikov 2025). In a profession where moral discernment and interpersonal intuition are essential, such cognitive degradation would have serious consequences. The governance of AI in intelligence should include psychological resilience programs and ethics training to preserve officers' moral autonomy.

Strategic and ethical implications

Beyond its weaknesses, the report raises fundamental questions about the ontology of espionage in the twenty-first century. If AI can simulate empathy, manage virtual identities, and execute persuasion tasks, does HUMINT remain a human relationship? The document answers in the affirmative, defending the notion of the human-machine team. Yet the risk of dehumanization is real: the more effective AI becomes at emulating trust, the easier it will be to replace the human in the initial stages of contact. This ethical dilemma recalls the warnings of Shulman (2023), who argues that automating moral interaction can produce operational alienation, a state in which agents no longer perceive the human consequences of their actions.

From a strategic perspective, the model the SCSP proposes redefines the scale and tempo of HUMINT operations. A single officer, assisted by a network of AI systems, could interact with hundreds of targets in parallel, exponentially multiplying the reach of espionage. But that same scalability erodes the traditional controls based on direct supervision. In an environment where the speed of interaction exceeds the human capacity for review, the risk of abuses or systemic errors grows. The history of intelligence shows that failures arise not only from bad intentions but from the combination of excess technological confidence and a deficit of moral deliberation.

Toward a prudent epistemology of artificial intelligence

Integrating AI into the practice of espionage demands a new, prudent epistemology built on three guiding principles: algorithmic transparency, human responsibility, and moral proportionality.

First, transparency means developing explainable systems whose decision logic can be audited in real time. Without explainability, institutional trust becomes blind faith. Second, human responsibility must be indivisible. The MHC principle should not be reduced to an approval formality; it should be conceived as a form of moral co-authorship between human and machine, in which the former retains dominion over the purpose and meaning of the action. Third, proportionality requires weighing the moral cost of each innovation: the capacity to do more does not automatically justify doing everything.

Adopting these principles will require legal and cultural reform. At the normative level, Congress and the Executive Branch should update Executive Order 12333 to explicitly define the legal status of autonomous intelligence systems and their relationship to the civil rights of U.S. citizens. At the institutional level, intelligence academies should incorporate training in AI ethics and the philosophy of technology, equipping future officers with the critical tools to resist the uncritical automation of moral judgment.

Finally, the debate over the Digital Case Officer invites us to reconsider the very essence of espionage. If the future of intelligence is hybrid, its success will depend not only on computational power but on the ability to preserve the humanistic core of the craft. As Richard Moore, chief of MI6, warned, “the relationship that allows one person to genuinely trust another remains stubbornly human” (Moore 2023). That statement captures the paradox the SCSP report poses without fully resolving: technology can extend intelligence, but only the human being can give it moral purpose.

~ C. Constantin Poindexter, M.A. in Intelligence, Graduate Certificate in Counterintelligence, J.D., CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

Cummings, Mary L. 2024. “Automation and the Erosion of Human Judgment in Defense Systems.” Journal of Military Ethics 23 (2): 101–120.

Floridi, Luciano. 2023. The Ethics of Artificial Agents. Oxford: Oxford University Press.

Gunkel, David. 2024. The Machine Question Revisited: AI and Moral Agency. Cambridge, MA: MIT Press.

Krupnikov, Andrei. 2025. “Psychological Implications of Human Machine Teaming in Intelligence Work.” Intelligence and National Security 40 (3): 215–233.

Moore, Richard. 2023. “Speech by Sir Richard Moore, Head of SIS.” London: UK Government.

Shulman, Peter. 2023. “Operational Alienation in Autonomous Warfare.” Ethics & International Affairs 37 (4): 442–460.

Special Competitive Studies Project. 2025. The Digital Case Officer: Reimagining Espionage with Artificial Intelligence. Washington, D.C.: SCSP Press.

Can I.C. HUMINT Operators Counter Facial Recognition Supercharged by A.I.?

HUMINT, facial recognition, intelligence, counterintelligence, espionage, counterespionage, C. Constantin Poindexter

The WAPO article in May of this year (“CIA chief faces stiff test in bid to revitalize human spying”) revealed a peril that has been on my radar for a few years. Writers Warren P. Strobel and Ellen Nakashima reported that the CIA is facing ‘unprecedented operational challenges’ in conducting human intelligence (HUMINT) missions, particularly in “denied areas” such as China, Russia, and other heavily surveilled states. The central premise is that advances in artificial intelligence–powered facial recognition, combined with integrated surveillance networks, are making it extremely difficult for intelligence officers and sub-handlers to operate covertly. Maybe, . . . but maybe not.

As I.C. agencies grapple with the proliferation of AI-enhanced facial recognition in denied areas, human intelligence (HUMINT) operators must develop new tradecraft to elude detection. By exploiting the inherent bias vulnerabilities and adaptive learning mechanisms within facial recognition systems, HUMINT operatives can deliberately degrade their reliability; more specifically, by flooding systems with inputs that are not identical but very similar, thereby “poisoning” the recognition algorithm, operators can broaden acceptance thresholds and reduce fidelity. Drawing a parallel with Apple’s iPhone Face ID system, whose adaptive mechanism occasionally grants access to similar-looking individuals (e.g., family members), here is how HUMINT practitioners could deliberately introduce adversarial noise into AI surveillance systems to slip through.

Algorithmic Bias in Facial Recognition

Facial recognition systems are susceptible to algorithmic bias rooted in uneven training data. For instance, the now-classic “Gender Shades” study revealed error rates of up to 35% for darker-skinned women versus less than 1% for lighter-skinned males. More broadly, the National Institute of Standards and Technology (NIST) has documented that commercial face recognition systems misidentify Black and Asian faces 10 to 100 times more often than white faces. These disparities not only expose systemic flaws but also point to the systems’ sensitivity to subtle variations. Adversarial machine learning research has demonstrated that imperceptible perturbations can dramatically mislead facial recognition models. These adversarial examples exploit “non-robust” features (patterns perceptible to AI but invisible to humans) that induce misclassification. Studies in the domain have confirmed that even small alterations in pixel patterns can force erroneous outputs in face recognition systems.
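To make the adversarial-example idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM) described by Goodfellow et al. (2015), assuming PyTorch and a differentiable face-embedding network. The `embed_model`, the epsilon value, and the cosine-similarity matcher are illustrative assumptions, not details of any deployed surveillance system.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(embed_model, probe_img, enrolled_embedding, epsilon=0.03):
    """FGSM-style sketch: nudge a probe image so its embedding drifts
    away from the enrolled template, pushing a cosine-similarity matcher
    below its acceptance threshold while the pixel change stays subtle.

    `embed_model` is any differentiable face-embedding network
    (hypothetical placeholder), `probe_img` a tensor in [0, 1].
    """
    probe = probe_img.clone().detach().requires_grad_(True)
    emb = embed_model(probe)                                  # embedding of the probe
    # Loss = similarity to the enrolled template; we step to *reduce* it.
    loss = F.cosine_similarity(emb, enrolled_embedding, dim=-1).mean()
    loss.backward()
    adversarial = probe - epsilon * probe.grad.sign()         # single signed-gradient step
    return adversarial.clamp(0.0, 1.0).detach()               # keep pixels valid
```

The same signed-gradient step, run with the opposite sign, would instead pull a probe toward someone else's template, which is the impersonation variant discussed in the adversarial face recognition literature.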

Adaptive Learning: The iPhone Face ID Example

Apple’s Face ID serves as a real-world instance of an adaptive facial recognition mechanism. The system uses a detailed infrared depth map and neural engine adaptation to adjust to changes in a user’s appearance over time, e.g., aging, makeup, glasses, or facial hair. Critically, Face ID “updates its registered face data” when it detects a close match that is subsequently confirmed via passcode, effectively learning from borderline inputs. This adaptability can lead to misrecognition in practice. A widely reported case involved a ten-year-old boy unlocking his mother’s iPhone X on the first attempt, thanks to their similar features. The system adapted sufficiently that the child could consistently unlock the device in subsequent attempts even though he was neither registered nor the primary user. Apple’s own user disclosure acknowledges that Face ID is statistically more prone to false positives with twins, siblings, and children under thirteen owing to underdeveloped, similar facial features.
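Apple does not publish Face ID’s internal update rule, so the following is only a toy model of the adaptive behavior described above: a unit-norm template that blends in borderline probes once they are “confirmed” out-of-band (the passcode step). The thresholds and learning rate are invented for illustration, not taken from any vendor documentation.

```python
import numpy as np

class AdaptiveTemplate:
    """Toy model of an adaptive face matcher (NOT Apple's algorithm).

    The enrolled template is a unit-norm embedding vector. Borderline
    probes that are confirmed out-of-band (e.g., by a passcode) are
    blended into the template, which is how a lookalike can gradually
    teach the system to accept them.
    """
    def __init__(self, enrolled_embedding, accept=0.80, borderline=0.65, lr=0.15):
        self.template = enrolled_embedding / np.linalg.norm(enrolled_embedding)
        self.accept = accept          # similarity needed for a silent unlock
        self.borderline = borderline  # similarity that still triggers adaptation
        self.lr = lr                  # how strongly borderline probes pull the template

    def similarity(self, probe):
        probe = probe / np.linalg.norm(probe)
        return float(self.template @ probe)

    def attempt(self, probe, passcode_confirmed=False):
        s = self.similarity(probe)
        if s >= self.accept:
            return True                                   # clean match, no adaptation needed
        if s >= self.borderline and passcode_confirmed:
            probe = probe / np.linalg.norm(probe)
            # Adaptation step: the template drifts toward the near-match.
            self.template = (1 - self.lr) * self.template + self.lr * probe
            self.template /= np.linalg.norm(self.template)
        return False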

HUMINT Application: Poisoning Recognition Systems

HUMINT operators, aware of such adaptive vulnerabilities, could deliberately exploit them when entering denied areas monitored by AI facial recognition cameras or checkpoints. How would that work?

Creating “near duplicate” appearances: Operators could train the system by repeatedly presenting faces that are not identical but nearly so. Sending similar-looking collaborators through passport control with slight variations in makeup, glasses, lighting, or facial hair is a good example. Over time, the system’s adaptive threshold would widen, accepting a broader range of inputs as belonging to the same identity.

Adversarial perturbation via “morphing”: Using adversarial machine learning techniques, operatives could create morphs (digital or printed images blending two individuals) so that the system’s recognition vector drifts toward both identities. The DHS has documented such “morphing attacks” as a real threat to face recognition systems. Not a perfect solution, as adversarial C.I. might simply surveil them ALL.
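In its simplest form a face morph is just a pixel-level blend of two aligned portraits; production morphing tools also warp facial landmarks so the geometry matches, which the sketch below skips. It uses only Pillow and NumPy, and the file names are hypothetical.

```python
import numpy as np
from PIL import Image

def simple_morph(path_a, path_b, alpha=0.5, size=(112, 112)):
    """Blend two aligned face crops into one 'morph' image.

    alpha=0.5 weights both identities equally; real morphing attacks
    also align and warp landmarks, which this illustration omits.
    """
    face_a = np.asarray(Image.open(path_a).convert("RGB").resize(size), dtype=np.float32)
    face_b = np.asarray(Image.open(path_b).convert("RGB").resize(size), dtype=np.float32)
    blended = (1 - alpha) * face_a + alpha * face_b
    return Image.fromarray(blended.astype(np.uint8))

# Usage (hypothetical file names):
# simple_morph("officer.png", "collaborator.png", alpha=0.5).save("morph.png")
```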

Feedback loop poisoning: With systems that incorporate user feedback (e.g., unlocking after near matches), HUMINT operators might deliberately trigger false acceptances or supply other authentication data after near matches, feeding the system mislabeled data and amplifying its error tolerance. That is how the siblings and children in the earlier Face ID example inadvertently taught the system to accept them.
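Tying these techniques together, a short simulation (reusing the AdaptiveTemplate toy model sketched earlier, with invented numbers) shows how a lookalike who is repeatedly “confirmed” after near matches drags the template toward themselves until the system accepts them outright. It illustrates the mechanism only; no real surveillance platform is modeled here.

```python
import numpy as np

rng = np.random.default_rng(7)

enrolled = rng.normal(size=128)           # enrolled identity's embedding (synthetic)
matcher = AdaptiveTemplate(enrolled)      # toy adaptive matcher from the earlier sketch

# A "lookalike": close to the enrolled vector, but below the clean-match bar.
lookalike = enrolled + 0.9 * rng.normal(size=128)

for attempt in range(1, 21):
    unlocked = matcher.attempt(lookalike, passcode_confirmed=True)
    print(f"attempt {attempt:2d}: similarity={matcher.similarity(lookalike):.3f} "
          f"unlocked={unlocked}")
    if unlocked:
        break   # the template has drifted enough to accept the lookalike directly
```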

Ethical, Operational, and Technical Defense

Is the approach technically plausible and ethically defensible? Technically, the literature on adversarial attacks and adaptive biases confirms that recognition systems can be deliberately degraded through controlled input poisoning. Operationally, such techniques must be deployed only after careful risk assessment. If a HUMINT operating group consistently “trains” a system in advance, the likelihood of detection increases, perhaps dramatically. However, in dynamic environments with rotating operators and multiple lookalikes, the system’s reliability can deteriorate over time without drawing attention to a single individual. Ethically, these strategies are defensible under the doctrine of necessity and the deception inherent to espionage. The goal is not harm but evasion in hostile surveillance contexts.

Limitations and Countermeasures

The approach is not foolproof. Highly calibrated systems may lock after repeated unlock failures or require emergency analysis and supervisory resets. Advanced systems may isolate per-identity representations, preventing cross-contamination. Systems without adaptive learning, or those hardened against morphing, remain immune. Nonetheless, many real-world systems are not designed for adversarial resistance, . . . yet. Authoritarian regimes relying on bulk “brute” surveillance networks, less-than-state-of-the-art platforms, and/or constrained resources may lack robust defenses against poisoning.

In the escalating arms race between AI surveillance and clandestine operations, HUMINT tradecraft must evolve. By exploiting biases and adaptive flaws in facial recognition systems (e.g., through near-identical inputs, morphing techniques, and feedback poisoning), operators can subtly degrade recognition fidelity. The iPhone Face ID example underscores the viability of such tactics in practice, i.e., a system designed for convenience can become a liability when its adaptability is weaponized. As surveillance proliferates, understanding and manipulating AI’s algorithmic susceptibilities will be indispensable for evasion and operational success.

Facial recognition is not the only sophisticated peril to HUMINT operations. Per Thomas Claburn’s recent report in The Register, “Researchers in Italy have developed a way to create a biometric identifier for people based on the way the human body interferes with Wi-Fi signal propagation. The scientists claim this identifier, a pattern derived from Wi-Fi Channel State Information, can re-identify a person in other locations most of the time when a Wi-Fi signal can be measured. Observers could therefore track a person as they pass through signals sent by different Wi-Fi networks – even if they’re not carrying a phone.” (Claburn, 2025) Tradecraft and countermeasures will likewise have to evolve to address this threat, but I’ll leave that subject for a future piece.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15. https://en.wikipedia.org/wiki/Algorithmic_bias

National Institute of Standards and Technology. (2019). Face recognition vendor test (FRVT) Part 3: Demographic effects (NIST Interagency/Internal Report No. 8280). https://en.wikipedia.org/wiki/Anti-facial_recognition_movement

Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. International Conference on Learning Representations. https://www.wired.com/story/adversarial-examples-ai-may-not-hallucinate

Vakhshiteh, A., Alparslan, F., & Farokhi, F. (2020). Adversarial attacks on deep face recognition systems. arXiv. https://arxiv.org/abs/2007.11709

Apple Inc. (2024). About Face ID advanced technology. Apple Support. https://support.apple.com/en-us/102381

Greenberg, A. (2017, December 14). A 10-year-old unlocked his mom’s iPhone X using Face ID. Wired. https://www.wired.com/story/10-year-old-face-id-unlocks-mothers-iphone-x

U.S. Department of Homeland Security. (2023). Risks and mitigation strategies for morphing attacks on biometric systems. https://www.dhs.gov/sites/default/files/2023-12/23_1222_st_risks_mitigation_strategies.pdf

Grief and the HUMINT Operator: The Personal Toll of Covert Intelligence Operations

HUMINT, intelligence, counterintelligence, espionage, counterespionage, C. Constantin Poindexter

It’s not all James Bond and Jason Bourne. The good guy doesn’t always win in the end. Covert work, more specifically covert human intelligence (HUMINT) operations, is among the most psychologically and morally demanding forms of spying. OSINT and keyboard collectors don’t feel the grief of an intelligence officer in the field. Case officers recruit, develop, handle, and ostensibly protect their agents (“sources” or “assets”), instructing them in appropriate tradecraft to steal secrets and avoid getting caught. These activities are routinely conducted in denied areas. When agents operate in these hostile environments, the stakes are life or death. Discovery often means that the asset will be tortured or executed, and their families persecuted or likewise killed. As seasons of service pass, it is almost inevitable that some agents will be compromised and lost. The emotional burden on the officer responsible for their survival is profound, marked by grief, guilt, and an enduring sense of moral failure.

The humanitarian bond and psychological investment

The key to success as a case officer is the cultivation of a deeply personal relationship, a genuine rapport, with his or her source. A true friendship rooted in trust, empathy, and shared purpose is imperative. A psychological study on intelligence elicitation revealed that non-coercive strategies coupled with rapport-building yield richer and more accurate information, underscoring how vital emotional connection is to both efficacy and trust. These very human bonds mean that officers break bread with, confide in, and take proactive steps to protect their agents. The resulting interpersonal ties transcend formal professional promises. This emotional investment means that when an agent is caught, disappeared, tortured, killed, or all of the above, the officer experiences not just operational failure but a deep personal loss. Officers bear responsibility for agent safety, so when the wheels come off, the intelligence officer invariably suffers a sense of personal culpability. Survivor guilt among those who ‘live through’ while others perish is well documented in trauma psychology.

Survivor guilt and moral injury

Survivor guilt refers to the distress and self-loathing felt by individuals who outlive someone else when they played a role in the other’s fate. In HUMINT, officers feel they failed agents whom they recruited, agents who trusted them implicitly. This places officers at risk of moral injury, a condition in which one’s actions or inactions violate one’s own moral code. The loss of an agent can trigger intense guilt. “I could’ve done more,” “I should’ve seen the compromise,” or “I didn’t protect them like a parent protects a child” are common, recurring emotional punishments. A recent article on traumatic loss highlights how survivor guilt can evolve into chronic shame and self-destructive rumination unless addressed. This phenomenon aligns closely with what seasoned intelligence officers share in post-action debriefs, i.e., guilt compounded by the clandestine nature of their relationship with agents, where that guilt must remain hidden behind professional composure and confidentiality oaths.

Grief within the cloak of secrecy

Unlike traditional warfighter loss, agent deaths or arrests rarely receive acknowledgment, nor are they honored publicly. There is no funeral, no anniversary ceremony, no celebration of life nor of what the source contributed. The clandestine world awards no medals to agents who vanish. Intelligence officers grieve in silence and isolation with few official outlets, little acknowledgment, and often no practical or legal avenue to care for a source’s family. The psychology literature highlights that complicated grief, grief unspoken and unacknowledged, is a driver of depression, PTSD, and physical illness. In clandestine HUMINT, agents operate for years within strict tradecraft boundaries. Case officers managing one or only a few agents develop significant moral and emotional ties to them. Losing an agent isn’t just a tactical failure within the intelligence agency’s collection strategy. It is the death of someone known intimately, and often of their family as well.

The moral complexities of manipulation and betrayal

HUMINT work inherently involves manipulation, the cultivation and direction of individuals who betray their countries. There is no pretty way to describe it. We teach assets to lie, steal, and live dangerous double lives. Covert operators must deploy emotional leverage, sometimes deception, frequently bribery, “ . . . to ensure loyalty and compliance”. As reported in Intelligence and National Security, manipulation is part of the deal, but when influence crosses into coercion or deception, moral dilemmas arise. When an agent is lost, the officer may, and often does, ask him or herself, “Did I manipulate them into this disaster? Did I betray my own moral code by pushing them into extreme danger?” Psychological research warns that psychological manipulation “targets unconscious, intuitive, or emotional modes of thought… violating autonomy, freedom and dignity”.

Training v. operational seasoning

Formal HUMINT training emphasizes tradecraft, security, and risk/reward management. Intelligence officers learn strict protocols for the recruitment, handling, and termination of agents. Real-world operations in hostile environments introduce chaotic variables. Even the most seasoned officer cannot foresee novel counterintelligence techniques, surveillance technology, or unexpected betrayals by intermediaries or an insider threat. As one analysis notes, running seasoned double or triple agents reduces an officer’s control. The very experience that can make an officer a great handler can become a liability, undermining his or her ability to predict perils to the asset and the operation and increasing feelings of personal responsibility when things go wrong.

Organizational culture and aftercare

Intelligence services are bad at normalizing and institutionalizing grief processing for covert HUMINT operators. Agencies debrief performance and analyze operational failures, but do a piss-poor job of addressing the emotional consequences. There is a stigma associated with grief and moral stress in environments that emphasize resilience and secrecy. In some Western countries, covert-source legislation acknowledges that agents and handlers engage in crimes to maintain cover and accomplish operations. Despite this, emotional and moral support for the officers who manage such morally complex situations remains painfully limited. Without interventions such as peer support groups, confidential welfare services, or external counseling, intelligence officers risk burnout, emotional numbing, and PTSD.

The ripple effect on agents’ families

When an agent is compromised, the repercussions often extend to their family; foreign intelligence services (FIEs) frequently use assets’ families for leverage. They are targeted as co-conspirators, persecuted, and attacked extrajudicially. Officers can manage systems to smuggle a family to safety or allow them to assume new identities, but these efforts are not as successful as we would like to assume. When agents die, officers feel they have failed an entire family. Culturally, agents’ loyalty often arises from protecting their families. Losing an agent can thus symbolize failure to protect a family entirely dependent on smart decisions by that operative and his or her handler.

Ethics and accountability

Scholars like Stephan Lau argue that intelligence agencies need pragmatic frameworks to distinguish legitimate influence from harmful manipulation in HUMINT operations. Such models assist case officers in making decisions grounded in ethical clarity rather than moral ambiguity. Institutionalized ethical guidance and accountability structures can both reduce morally damaging decision-making and help handlers process loss after operations fail. Albeit not a panacea, ethical oversight of recruitment, coercion thresholds, and risk assessment can lessen post-hoc guilt and defend against corrosive shame.

Operating at the intersection of psychology, ethics, and national security, HUMINT case officers experience pressures unique to clandestine work. They recruit and manage individuals willing to risk their lives, and those of their families, for a foreign intelligence entity’s objectives. The loss of such agents in hostile environments inflicts profound emotional and moral wounds. Survivor guilt, grief, and rumination on perceived ethical failures are the inevitable result. Individual case officer well-being and institutional resilience are nonetheless possible. By building ethical guidance, grief-acknowledgment processes, peer support structures, and mental health interventions tailored to clandestine realities, HUMINT organizations can care for their own and honor the sacrifices of their assets. In so doing, they protect not just operational effectiveness, but the humanity of the professional officers who serve in the shadows.

~ C. Constantin Poindexter, MA in Intelligence, Graduate Certificate in Counterintelligence, JD, CISA/NCISS OSINT certification, DoD/DoS BFFOC Certification

References

Goodman Delahunty, J., O’Brien, K., & Gumbert-Jourjon, T. (2014). Reframing intelligence interviews: Rapport and elicitation. Journal of Investigative Psychology and Offender Profiling, 11(2), 178–192.

Lau, S. (2022). The Good, the Bad, and the Tradecraft: HUMINT and the Ethics of Psychological Manipulation. Intelligence and National Security, 37(6), 895–913.

Neria, Y., Nandi, A., & Galea, S. (2008). Post-traumatic stress disorder following disasters: a systematic review. Psychological Medicine, 38(4), 467–480.

Robinaugh, D. J., LeBlanc, N. J., Vuletich, H. A., & McNally, R. J. (2014). The role of grief-related beliefs in complicated grief: A structural equation model. Behavior Therapy, 45(3), 362–372.

Feeney, B. C., & Collins, N. L. (2015). A new look at social support: A theoretical perspective on thriving through relationships. Personality and Social Psychology Review, 19(2), 113–147.

Herman, J. L. (1992). Trauma and Recovery: The Aftermath of Violence—from Domestic Abuse to Political Terror. Basic Books.

Jones, S. G. (2014). Covert Action and Counterintelligence in the Cold War and Beyond. RAND Corporation.

UK Parliament. (2019–2021). Briefing Paper: Covert Human Intelligence Sources (Criminal Conduct) Act.

Shane, S. (2015). Objective Troy: A Terrorist, a President, and the Rise of the Drone. Tim Duggan Books.

Zegart, A. (2007). Spying Blind: The CIA, the FBI, and the Origins of 9/11. Princeton University Press.