AI & Cognitive Systems
What it is. Using models and ranking engines to steer what people see, think, buy, and vote. It is persuasion at machine scale—feeds, search, ads, and “safety” tools that reshape choices while looking neutral.
Operating model
- Actors: platforms, ad networks, model vendors, app stores, cloud hosts, content farms, data brokers.
- Levers: ranking, recommendation, A/B pressure tests, synthetic personas, prompt injection, autocomplete bias.
- Mechanisms: harvest data → train models → tune rankers → nudge cohorts → entrench defaults.
- Escalation ladder: soft boosts → shadow throttles → identity targeting → synthetic leaders.
- Success metrics: narrative drift, policy shifts, rival agenda fatigue, decision automation without oversight.
Feeding the Machine
Domain: AI & Cognitive Systems · Stratagems: 4, 16, 30
Problem / betrayal. “Open science” trains their stack with your data; later the door shuts.
How it happened. Agencies now use NIST AI RMF 1.0 and its generative-AI profile; OMB memos set federal guardrails. Whatever the White House shifts, NIST's RMF persists as the baseline (NIST Publications).
The men behind it. Standards chairs, compute hubs, cloud regions.
Consequences. Rankers steer discourse; “assistants” make humans stop thinking.
Warning. If you don’t control the stack, the stack controls you.
Counter-Orders
- Audit: model cards, data lineage, evals aligned to NIST; publish ranking rules (NIST Publications).
- Inoculate: human override on rights-affecting decisions; provenance via C2PA.
- Isolate: export controls on advanced computing items and weights/compute access (Bureau of Industry and Security).
Tactic clusters
1) Ranker Capture
Seize the feed. Small weight changes move millions.
Stratagems: 10 Hide Your Dagger Behind a Smile, 30 Exchange the Role of Guest for that of Host
Application: Insert “quality” and “safety” criteria that favor aligned sources; bury rivals through friction and review delays.
Countermeasures: Publish model cards and ranking rules; independent audits; human override on high-impact calls.
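A minimal sketch of why ranker capture works: with hypothetical items, scores, and weights (none drawn from any real platform), a small bump to a platform-assigned "quality" weight is enough to reorder the whole feed.

```python
# Hypothetical two-item feed; "rel" is relevance to the query, "quality" is a
# platform-assigned score -- exactly the kind of criterion L24 describes.
def rank(items, w_rel, w_quality):
    """Score = w_rel * relevance + w_quality * 'quality'; highest first."""
    return sorted(
        items,
        key=lambda it: w_rel * it["rel"] + w_quality * it["quality"],
        reverse=True,
    )

feed = [
    {"id": "indie-blog",  "rel": 0.9, "quality": 0.2},
    {"id": "aligned-src", "rel": 0.6, "quality": 0.9},
]

baseline = rank(feed, w_rel=1.0, w_quality=0.1)  # relevance dominates
captured = rank(feed, w_rel=1.0, w_quality=0.5)  # "quality" now decides

print([it["id"] for it in baseline])  # ['indie-blog', 'aligned-src']
print([it["id"] for it in captured])  # ['aligned-src', 'indie-blog']
```

Nothing about either item changed; a 0.4 shift in one published-nowhere weight flipped who gets seen, which is why the countermeasure above insists the rules be public and auditable.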
2) Persona & Deepfake Warfare
Flood the zone with convincing fakes and tire the fact-checkers.
Stratagems: 8 Repair the Walkway, March to Chencang; 6 Make a Sound in the East, Strike in the West
Application: Spin up synthetic influencers; time drops to shape a vote, a market, or a jury pool.
Countermeasures: Watermarking/provenance (C2PA), cryptographic signing for official media, 24/7 deepfake red team with sub-4h takedown.
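The "cryptographic signing for official media" countermeasure can be sketched as follows. Real provenance systems such as C2PA use asymmetric signatures bound to a manifest; this stdlib HMAC stand-in (key and media bytes are illustrative) only shows the tamper-detection flow.

```python
import hashlib
import hmac

SIGNING_KEY = b"press-office-key"  # hypothetical key held by the issuing office

def sign_media(data: bytes) -> str:
    """Tag the exact bytes released; any edit invalidates the tag."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Constant-time check that the bytes match what was signed."""
    return hmac.compare_digest(sign_media(data), tag)

original = b"official statement video bytes"
tag = sign_media(original)

print(verify_media(original, tag))          # True: untouched release
print(verify_media(original + b"x", tag))   # False: edited or deepfaked
```

A verifier needs only the published tag and (in a real asymmetric scheme) the issuer's public key, so checking "is this clip really official?" becomes mechanical rather than a race against the fact-checkers.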
3) Data Poisoning & Model Theft
Corrupt the well or steal it whole.
Stratagems: 3 Kill with a Borrowed Knife, 21 Shed the Cicada’s Shell
Application: Poison public datasets and benchmarks; exfiltrate weights via insider SDKs and CI/CD hooks.
Countermeasures: Curated training sets with hashes, canary data, SBOMs for pipelines, weight escrow, segmented build systems.
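Two of the countermeasures above, hash-pinned curated training sets and canary data, can be sketched together. File names, shard contents, and the canary string are illustrative, not from any real pipeline.

```python
import hashlib

# Pin each curated shard to a known-good digest; a poisoned replacement
# fails the check before it ever reaches training.
PINNED = {"corpus_part1.txt": hashlib.sha256(b"known good shard").hexdigest()}

# Unique strings planted in the corpus; surfacing in another model's output
# is evidence the data (or the model itself) was taken.
CANARIES = {"zx-canary-9f41"}

def verify_shard(name: str, data: bytes) -> bool:
    """Reject any shard whose hash drifted from the curated pin."""
    return hashlib.sha256(data).hexdigest() == PINNED.get(name)

def canary_tripped(model_output: str) -> bool:
    """True if a planted canary leaks through someone else's model."""
    return any(c in model_output for c in CANARIES)

print(verify_shard("corpus_part1.txt", b"known good shard"))  # True
print(verify_shard("corpus_part1.txt", b"poisoned shard"))    # False
print(canary_tripped("...zx-canary-9f41..."))                 # True
```

The hash check defends the well; the canary check tells you when someone drank from it anyway.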
4) Decision Automation Trap
Push “AI assistance” until the human stops thinking.
Stratagems: 4 Wait at Leisure for the Weary, 17 Toss Out a Brick to Attract Jade
Application: Automate hiring, grading, loans, and moderation; errors look objective so no one challenges them.
Countermeasures: Human-in-the-loop for rights-affecting outcomes; appeal channels; bias/impact reports tied to deployment gates.
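The human-in-the-loop gate can be sketched as a routing rule: rights-affecting categories never auto-finalize, they queue for a person. The category list, score, and threshold are illustrative assumptions.

```python
# Hypothetical set of rights-affecting decision categories (per L39-L40).
RIGHTS_AFFECTING = {"hiring", "loan", "housing", "moderation_ban"}

def decide(category: str, model_score: float, threshold: float = 0.5) -> dict:
    """Model proposes a verdict; only low-stakes categories auto-finalize."""
    verdict = "approve" if model_score >= threshold else "deny"
    if category in RIGHTS_AFFECTING:
        # The model's output is a recommendation, not a decision.
        return {"verdict": verdict, "status": "pending_human_review"}
    return {"verdict": verdict, "status": "final"}

print(decide("spam_filter", 0.9))  # final: automation allowed
print(decide("loan", 0.3))         # pending_human_review: a person signs off
```

The point of the gate is accountability: a denied loan carries a human name, so the error can no longer hide behind "the algorithm decided."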
5) Attention Siege
Starve rivals of reach while claiming neutrality.
Stratagems: 22 Shut the Door to Catch the Thief, 9 Watch the Fire from the Opposite Bank
Application: Demote “low authority” links; demonetize adversarial voices; preference home ecosystem tools.
Countermeasures: Interoperability mandates, alternative distribution channels, public logs of takedowns and demotions.
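"Public logs of takedowns and demotions" are only useful if entries cannot be quietly rewritten. A hash-chained append-only log is one way to get that property; field names and targets here are illustrative.

```python
import hashlib
import json

def append_entry(log: list, action: str, target: str) -> None:
    """Append an entry whose hash covers its content and the previous hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"action": action, "target": target, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def chain_intact(log: list) -> bool:
    """Recompute every hash; any edit anywhere breaks the chain."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("action", "target", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "demote", "example.org/post/1")
append_entry(log, "takedown", "example.org/post/2")
print(chain_intact(log))        # True
log[0]["action"] = "boost"      # a silent rewrite attempt...
print(chain_intact(log))        # ...is detected: False
```

Because each hash covers its predecessor, a platform can publish only the latest hash and still let outsiders prove whether the history they were shown is the history that happened.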
Failure modes & risks
- Overfit: models break under distribution shifts; decisions fail in the real world.
- Capture: vendors embed policy in code; leadership loses visibility and control.
- Legitimacy: secret rankers erode trust and spark regulatory blowback.
Related: see Stratagem 10, 22, and 30 in the Stratagems section for classic patterns behind platform control.