
Chinese Army Sharpens Counter-AI Warfare Playbook by Deceiving Enemy Algorithms


Staff Writer

China is intensifying its counter-AI warfare strategy, training the PLA to deceive enemy algorithms across data, sensors, and computing power while strengthening human oversight and industry support.

Darkness had fallen over the Gobi Desert training grounds at Zhurihe when the Blue Force unleashed a withering strike intended to wipe Red Force artillery off the map. Plumes rose from “destroyed” batteries as the seemingly successful fire plan took out its targets in waves. But it had all been a trap. When Blue repositioned to avoid counter-battery fire, exercise control halted the drill and revealed that more than half of Blue’s fire units had already been destroyed. The Red commander later explained that he had seeded the range with decoy guns and “professional stand-ins” that tricked Blue’s sensors and AI-assisted targeting into firing at phantoms, exposing Blue’s own positions in the process.

It was only one example of how China’s military is preparing for a battlefield where humans and AI compete not just to fight but to deceive. Under the banner of “counter-AI warfare,” the People’s Liberation Army (PLA) is training troops to manipulate what sensors see, poison data streams, and overload battlefield computers with noise, all while teaching personnel to recognize when their own systems are wrong. The goal is to force enemy AI to chase illusions and overlook real threats.

The PLA frames its approach as a triad aimed at data, algorithms, and computing power. In May, PLA Daily described this concept in its *Intelligentized Warfare Panorama* series, asserting that the most reliable way to “break intelligence” is to target all three simultaneously.

Counter-data operations involve injecting junk data, corrupting examples, obscuring or altering visual, radar, and heat signatures, and reshaping a vehicle’s detectable profile using coatings or emitters that mimic another platform. Counter-algorithm operations exploit model vulnerabilities through crafted inputs, logic traps, and manipulation of reward signals to send AI systems searching in the wrong direction. Attacks on computing power include kinetic or cyber strikes on data centers and links, as well as soft-kill saturation operations that flood the electromagnetic environment and clog decision loops. A 2024 PLA study outlines techniques such as data pollution, adversarial attacks, backdoor insertion, and data reversal to manipulate machine-learning models.
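
To make the “adversarial attack” idea above concrete, here is a minimal sketch in Python: a toy logistic-regression “target detector” whose score an attacker drives down with a small, signed perturbation of the input (the classic fast-gradient-sign method). Every model, weight, and value here is an illustrative assumption, not PLA or vendor code.

```python
# Minimal sketch of an adversarial input: a small, crafted perturbation
# that shifts a model's decision without visibly changing the scene.
# Pure NumPy; the "detector" is an illustrative toy, not a real system.
import numpy as np

rng = np.random.default_rng(0)

# Toy classifier: logistic regression over a flattened 8x8 sensor patch.
w = rng.normal(size=64)          # assumed pretrained weights
b = 0.1

def predict(x):
    """Return P(target present) for a flattened patch x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=64)          # a patch the model is scoring

# For this linear model, the gradient of the logit w.r.t. the input is w,
# so the fast-gradient-sign step is simply a signed move against it.
eps = 0.05                       # per-pixel perturbation budget
x_adv = x - eps * np.sign(w)     # push the "target present" score down

print(f"clean score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")
print(f"max pixel change: {np.max(np.abs(x_adv - x)):.3f}")
```

The point of the sketch is the asymmetry the article describes: the perturbation stays within a tiny per-pixel budget, yet the model’s confidence moves sharply, which is why crafted inputs are attractive against AI-assisted targeting.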

PLA analysts say the future contest in joint operations will be algorithm versus algorithm. They advise planners to study how enemy models make decisions, scramble the signals guiding drone swarms, and maneuver unpredictably to break the patterns those systems expect. The aim is to trick sensors and models into misidentifying or ignoring key targets.

These ideas are already appearing in training. In August 2023, a PLA Air Force UAV regiment added “real and fake targets” to target-unmasking drills, forcing pilots to distinguish decoys from actual threats. PLA air-defense units now prioritize ultra-low-altitude penetration scenarios, where decoys, deceptive signatures, and AI-assisted recognition collide. In the maritime domain, a 2024 study outlines how unmanned underwater vehicles should detect and disregard acoustic decoys when striking a surface vessel.

PLA commentary also stresses the human side. In April, PLA Daily cautioned that commanders risk over-relying on technology, which can amplify biases embedded in training data. The remedy, it argued, is training leaders to recognize when to trust and when to override AI, by adding deception scenarios to simulations and integrating human-machine war games. Follow-on guidance calls for “cognitive consistency” between operator and system so instructors can evaluate when and why officers reject flawed algorithmic recommendations.

Human-in-the-loop command remains the baseline, with humans serving as operators, fail-safes, and moral arbiters. Lt. Gen. He Lei reiterated in 2024 that wartime AI must stay tightly constrained, with life-and-death authority residing with humans. Recent directives set rules for collecting, labeling, and tracking data throughout its life cycle and feed those practices into exercise design and performance scoring.

Industry is also moving to support the PLA’s counter-AI doctrine. Chinese companies now market deception, electronic warfare, and software tools tailored to this mission. Huaqin Technology sells multispectral camouflage that masks radar, infrared, and optical signatures. Yangzhou Spark offers stealth coatings, smoke generators, and signature simulators. JX Gauss produces inflatable radar-vehicle decoys with movable components. These products support the counter-data strategy by planting convincing decoys and misleading AI-enabled surveillance.

Electronic-warfare vendors follow the PLA’s soft-kill computing-power approach by jamming communications and saturating the spectrum with false signals. Chengdu M&S Electronics markets systems that generate fake target signatures and radar decoys, while Balu Electronics builds jamming simulators that create complex electromagnetic environments.

Chinese tech firms are simultaneously developing counter-AI software. Tencent Cloud runs a large-model red-team program to monitor and lock down model input and output channels. Qi’anxin’s GPT-Guard and model protection fence simulate attacks to detect tampering, while RealAI’s RealSafe automatically constructs adversarial tests to probe model resilience. Marketed as defense solutions, these tools also help refine offensive strategies.
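
As a rough illustration of the red-team pattern these tools describe (the actual interfaces of RealSafe or GPT-Guard are not documented in the source), the sketch below perturbs inputs at random and records any perturbation that flips a black-box model’s decision. The `model` function is a hypothetical stand-in.

```python
# Hedged sketch of automated adversarial testing: generate perturbed
# inputs and flag the ones that flip a black-box model's decision.
# The model, budget, and trial count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Stand-in black-box classifier: 1 if 'target', else 0."""
    return int(np.tanh(x.sum()) > 0.2)

def red_team(x, trials=200, eps=0.3):
    """Random-perturbation probe: collect deltas that flip the label."""
    base = model(x)
    flips = []
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if model(x + delta) != base:
            flips.append(delta)
    return base, flips

x = rng.normal(size=16)
base, flips = red_team(x)
print(f"baseline label: {base}, label flips found: {len(flips)} of 200 trials")
```

A harness like this cuts both ways, which is the article’s point: the same probe that measures a defender’s robustness also catalogs the perturbations an attacker would want.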

For U.S. planners, China’s work underscores that future conflicts will challenge the resilience of military AI. The PLA’s counter-AI push reflects lessons from Ukraine, where deception in a sensor-rich battlefield has become indispensable. It also highlights the risk of a “deception gap” if the U.S. and its partners fail to master these evolving tools.

Meeting that challenge requires structured red-teaming and robust test and evaluation. The U.S. already has foundations in DARPA’s GARD, IARPA’s TrojAI, NIST’s AI Risk Management Framework, and Department of Defense testing guidance. Models and pipelines must be hardened through protected data provenance, anomaly detection, safe fallbacks, and continuous health monitoring.
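
One hedged sketch of what “anomaly detection and safe fallbacks” can look like in practice: screen each incoming sensor frame against statistics from a trusted baseline and route outliers to human review instead of the model. The channel count, thresholds, and data here are assumptions for illustration, not any fielded pipeline.

```python
# Minimal sketch of input screening with a safe fallback: frames that
# deviate sharply from a clean baseline bypass the model and go to a
# human queue. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

baseline = rng.normal(0.0, 1.0, size=(1000, 8))   # assumed clean telemetry
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def route(frame, z_max=4.0):
    """Pass a frame to the model only if every channel is within z_max sigma."""
    z = np.abs((frame - mu) / sigma)
    return "model" if np.all(z < z_max) else "human_review"

print(route(rng.normal(0.0, 1.0, size=8)))   # typical frame -> "model"
print(route(np.full(8, 10.0)))               # suspect frame -> "human_review"
```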

Human oversight remains central, as codified in DoD Directive 3000.09 on autonomy in weapons. Units must also raise the sophistication of opposing forces in training by giving them AI-enabled reconnaissance and deception capabilities and ensuring that real-and-fake target drills become standard.

Failure to do so could turn AI from a strategic advantage into a vulnerability and leave the United States exposed in a rapidly evolving domain of warfare.

Editor’s Note:

This article examines China’s expanding counter-AI warfare doctrine using only the information provided in the source material. It outlines PLA training, technology development, and strategic thinking without adding analysis or outside context.
