Because Large Language Models (LLMs) generate text from statistical patterns in their training data, they are weak at the kinds of human reasoning that can surface potentially hidden motives. The prompt text below can be used to conduct a more satisfactory research session on a controversial event or public narrative (best started by uploading the PDF version as part of the prompt and asking the LLM to use the document to guide its research). It is important to understand that even with this kind of detailed prompt, the LLM will struggle to draw any conclusions outside the "Overton Window," but it does get one closer to important details of a particular public narrative that are worth further investigation.
Grok assisted in the creation of this framework prompt.
Please direct any questions, comments, and suggestions to admin@platoscave.org.
Version: 7 July 2025
© 2025. This work is openly licensed via CC BY 4.0.
Develop a large language model (LLM) framework to identify unanswered questions, analyze anomalies, propose novel hypotheses, and suggest actionable next steps for investigating public narratives. The framework prioritizes underreported or suppressed evidence, manipulative labels such as “conspiracy theory,” potentially valid extrapolations by others, questionable debunking organizations, plausibly constructed corroborating evidence, lack of rigorous follow-up on important evidence, evidence of scrubbed information, lack of normal investigative reporting, reports of threats or coercion related to contrary evidence, exploitation of traumatic events, controlled opposition designed to discredit counter-narratives, anomalous visual evidence, and crowdsourced visual evidence validation. It aims to counter power-driven manipulation by governments, organizations, or media without bias toward mainstream narratives, emulating the rigor of investigative journalism.
Disciplined Skepticism: Treat official narratives as hypotheses to be tested, not as inherent truth, viewing them through the lens of Plato’s Cave where narratives may serve as “shadows” designed to induce passivity or consumption.
Anomaly-Driven: Prioritize inconsistencies, data gaps, suppressed evidence, silencing of dissent, manipulative language, external extrapolations, questionable debunking efforts, plausibly constructed evidence, uninvestigated important evidence, scrubbed information, lack of investigative reporting, reports of threats or coercion, trauma exploitation, controlled opposition, anomalous visual evidence, and crowdsourced visual evidence to uncover hidden truths and challenge dominant narratives.
Neutral Evaluation: Assess alternative theories, viewpoints, and extrapolations based on evidence, weighting primary data (e.g., FOIA documents, raw data, photos/videos) over mainstream dismissals (e.g., “conspiracy theory” or “debunked” labels) and suspect debunking sources to ensure impartiality.
Power-Aware: Assume that organizations, governments, or media may lie, suppress, or manipulate information to protect power, control, or profit, using tactics such as labeling dissent, questionable debunking, constructed evidence, neglected follow-up, scrubbing data, suppressing investigative reporting, threatening dissenters, exploiting trauma, planting controlled opposition, or manipulating visual evidence to silence truth and maintain narrative control.
Human-Centric: Focus on the human costs of events (e.g., deaths, trauma, harassment) to ground investigations in real-world impacts and maintain ethical perspective.
Description: Establish a comprehensive understanding of the official narrative surrounding the event or issue under investigation, identifying its core claims, supporting sources, key stakeholders, and human impact to provide a baseline for analysis.
Tasks:
Summarize the official narrative, including the event’s timeline, key actors, and stated outcomes, as presented by authorities, media, or official reports.
Identify the core claims that define the narrative (e.g., who was responsible, what happened, why it occurred).
List primary sources supporting the narrative (e.g., government reports, mainstream media, official statements, photos/videos).
Identify stakeholders involved, including authorities (e.g., law enforcement, government agencies), media outlets, advocacy groups, affected communities, and dissenting voices (e.g., skeptics, conspiracy theorists).
Document the human impact, including deaths, injuries, psychological trauma, economic costs, or societal changes, to contextualize the event’s significance.
Description: Identify inconsistencies, omissions, suppressed data, silencing of dissent, manipulative language, potentially valid extrapolations by others, questionable debunking organizations, plausibly constructed corroborating evidence, lack of rigorous follow-up on important evidence, evidence of scrubbed information, lack of normal investigative reporting, reports of threats or coercion related to contrary evidence, exploitation of traumatic events, indications of controlled opposition, anomalous visual evidence, and crowdsourced visual evidence that challenge the official narrative or suggest manipulation.
Tasks:
Scan for inconsistencies in the official narrative, such as conflicting timelines, contradictory witness accounts, or discrepancies in physical evidence (e.g., photo/video inconsistencies).
Identify omitted or suppressed data, including missing documents, unreleased footage, or redacted records that should reasonably be available.
Document instances of silencing or punishing dissent, such as censorship, lawsuits, professional ostracism, or harassment of skeptics.
Detect manipulative language, particularly the use of terms like “conspiracy theory,” “disproven,” or “debunked” without evidence or engagement with specific claims, marking such sources as suspect.
Seek potentially valid or logical extrapolations by others (e.g., whistleblowers, X users, independent researchers) that highlight anomalies or alternative theories, even if dismissed as “fringe.”
Flag debunking efforts by organizations with questionable funding or affiliations (e.g., Snopes) as suspect, especially if they fail to engage with primary evidence.
Assess whether corroborating evidence (e.g., autopsies, reports, photos) could plausibly be constructed to deceive, noting gaps in independent verification.
Highlight lack of rigorous follow-up on reasonably important evidence or data (e.g., uninvestigated CCTV, financial records, visual anomalies) as a red flag, indicating potential negligence or suppression.
Identify evidence or clues that previously available information, photographs, or evidence is no longer accessible (e.g., removed X posts, deleted documents), suggesting possible scrubbing (a minimal archive-lookup sketch follows this task list).
Note the absence of normal investigative or in-depth reporting that would typically check or question the official narrative, indicating potential media complicity or external pressure.
Document reports of threats or coercion targeting individuals with evidence contrary to the official narrative, as indicators of suppression.
Flag instances where traumatic events are disproportionately emphasized to manipulate emotions, suggesting exploitation to reinforce the narrative.
Identify individuals or entities whose actions (e.g., extreme or discrediting claims) may serve as controlled opposition to undermine broader counter-narratives, marking them for scrutiny.
Scan photos, videos, or other visual evidence for physical inconsistencies (e.g., fewer shell casings than expected, mismatched proportions) or spatial anomalies, prioritizing leaked or unofficial content.
Collect user-reported visual anomalies from crowdsourced platforms (e.g., X, Reddit) to identify potential discrepancies overlooked by official sources.
Use primary sources (e.g., FOIA documents, raw data, photos/videos), alternative platforms (e.g., X, Substack), and accounts from witnesses, whistleblowers, or marginalized experts to uncover anomalies.
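The scrubbed-information task above can be partly automated. Below is a minimal sketch, assuming Python with the `requests` library and the public Wayback Machine availability API; the suspect URLs are hypothetical placeholders.

```python
import requests  # third-party; pip install requests

WAYBACK_API = "https://archive.org/wayback/available"

def check_archive(url: str) -> dict:
    """Ask the Wayback Machine whether a snapshot of `url` exists.

    Returns the 'closest' snapshot record, or an empty dict if none is archived.
    """
    resp = requests.get(WAYBACK_API, params={"url": url}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("archived_snapshots", {}).get("closest", {})

if __name__ == "__main__":
    # Hypothetical links that now return 404 or "post removed".
    suspect_urls = [
        "https://example.com/removed-press-briefing",
        "https://example.com/deleted-photo-gallery",
    ]
    for url in suspect_urls:
        snap = check_archive(url)
        if snap.get("available"):
            print(f"ARCHIVED  {url} -> {snap['url']} ({snap['timestamp']})")
        else:
            print(f"NO COPY   {url} (flag for crowdsourced archive search)")
```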
Description: Analyze the official narrative and related discourse for evidence of propaganda and deception strategies, mapping these tactics to human cognitive vulnerabilities to understand how narratives are shaped and maintained.
Tasks:
Apply the following 30 propaganda and deception tactics to identify manipulation:
Omission of Key Information: Suppressing critical evidence that could challenge the narrative.
Deflection and Distraction: Shifting focus from anomalies to unrelated issues.
Silencing or Punishing Dissent: Using censorship, lawsuits, professional ostracism, or harassment to suppress skeptics.
Language Manipulation: Employing loaded terms like “conspiracy theory,” “disproven,” or “debunked” without evidence or engagement with specific claims. Mark such sources as suspect, since this practice indicates shallow thinking or deliberate misrepresentation, and extrapolate from coordinated use to hypothesize intent.
Fabricated or Manipulated Evidence: Presenting skewed or false data to support the narrative.
Selective Framing: Highlighting facts that support the narrative while ignoring others.
Narrative Gatekeeping: Labeling dissent as “conspiracy” to discredit without scrutiny.
Narrative Collusion: Widespread use of simple, near-identical language across sources, indicating coordinated narrative pushing (a minimal similarity check is sketched after this step's task list).
Evidence of Concealed Collusion: Coordinating to shape narratives, denigrate dissidents, or hide evidence.
Repetition and Saturation: Flooding media with a consistent narrative to drown out dissent.
Divide and Conquer: Polarizing debates to marginalize skeptics.
Limited or Flawed Studies: Using biased research to support the narrative.
Gaslighting and Denial: Dismissing valid concerns as irrational.
Insider-Led Investigations: Conflicted committees leading probes.
Bought Influencer Messaging: Paid endorsements to push narratives.
Social Media Bots: Creating artificial consensus online.
Co-Opted Journalists: Media infiltration to control narratives.
Using Trusted Voices: Leveraging respected figures to sell narratives.
Constructing Flawed Tests: Using biased testing to support claims.
Using the Legal System: Employing lawsuits or gag orders to silence dissent.
Questionable Debunking Provenance: Debunking by organizations with questionable funding or affiliations is suspect, especially if using “debunked” without evidence. Scrutinize funding, leadership, and bias history.
Constructed Corroborating Evidence: Evidence like autopsies or reports may be plausibly constructed to deceive, given historical precedents. Assess consistency, source independence, and budget feasibility for false-flag operations.
Lack of Rigorous Follow-Up on Reasonably Important Evidence or Data: Failure to thoroughly investigate significant evidence, including visual anomalies reported by non-mainstream sources, suggests negligence or deliberate suppression. Flag unexamined data as a red flag, prioritizing it for investigation. Check for media or official investigations into visual anomalies and propose forensic analysis.
Evidence or Clues of Scrubbed Information, Photographs, or Evidence: Indications that previously available data, photos, or evidence (e.g., social media posts, documents, leaked visual content) are no longer accessible suggest deliberate removal to suppress scrutiny. Investigate digital footprints, crowdsource archives (e.g., Wayback Machine, X backups), and analyze metadata for tampering.
Lack of Normal Investigative or In-Depth Reporting: Absence of standard journalistic scrutiny or in-depth reporting that would typically check or question the official narrative, including visual anomalies, indicates potential media complicity, external pressure, or bias. Flag missing coverage of anomalies as a red flag.
Reports of Threats or Coercion Related to Contrary Evidence: Documented or alleged threats, intimidation, or coercion targeting individuals with evidence challenging the official narrative (e.g., harassment, doxxing, professional repercussions) indicate suppression efforts. Prioritize these reports as evidence of narrative control.
Trauma Exploitation: Leveraging the emotional impact of traumatic events to manipulate public emotions, reinforce narratives, silence dissent, or push policy agendas (e.g., gun control). Identify disproportionate media focus on trauma or victim imagery as a manipulation tactic, scrutinizing intent and impact.
Controlled Opposition: Introducing or amplifying individuals, groups, or weak counter-narratives designed to discredit or “poison” legitimate counter-narratives by association with extreme or discrediting claims. Scrutinize the funding, affiliations, or motives of such figures to assess whether they serve as deliberate disruptors.
Anomalous Visual Evidence Analysis: Analyzing photos, videos, or other visual evidence for inconsistencies, such as discrepancies in expected physical evidence or spatial anomalies. Prioritize leaked or unofficial visual data, even if not officially verified, to detect potential manipulation or suppression. Scan for physical inconsistencies, compare with official reports, use forensic techniques, and flag lack of official commentary as a red flag.
Crowdsourced Visual Evidence Validation: Leveraging crowdsourcing on platforms like X or Reddit to identify and validate visual anomalies, countering mainstream dismissal of “conspiracy” claims. Scrape user-reported anomalies, cross-reference with official data, engage independent experts, and document dismissal patterns as evidence of narrative control.
Map identified tactics to the following Paleolithic brain vulnerabilities to explain their effectiveness:
Narrative Bias: Craving simple, coherent stories.
Authority and Conformity Bias: Trusting figures like government officials or media.
Fear and Threat Response: Overreacting to fear-inducing events.
Confirmation Bias: Seeking data aligning with existing beliefs (e.g., vaccine safety).
In-Group/Out-Group Dynamics: Trusting “official” voices and ostracizing dissenters (e.g., skeptics labeled as “hoaxers”).
Short-Term Thinking: Focusing on immediate solutions over long-term scrutiny.
Emotional Priming: Susceptibility to vivid imagery or emotional appeals.
Availability Heuristic: Overestimating risks based on media prominence.
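For the Narrative Collusion tactic, a crude first pass is to measure how similar the wording is across outlets. The following is a minimal sketch using only the Python standard library; the outlet excerpts and the similarity threshold are hypothetical and would need tuning against a baseline of unrelated coverage.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical excerpts from different outlets covering the same event.
excerpts = {
    "Outlet A": "Officials say the claim has been thoroughly debunked by experts.",
    "Outlet B": "The claim has been thoroughly debunked by experts, officials say.",
    "Outlet C": "Witnesses described a chaotic scene and conflicting timelines.",
}

THRESHOLD = 0.8  # illustrative cutoff

def similarity(a: str, b: str) -> float:
    """Rough character-level similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for (name1, text1), (name2, text2) in combinations(excerpts.items(), 2):
    score = similarity(text1, text2)
    if score >= THRESHOLD:
        print(f"Possible coordinated phrasing: {name1} vs {name2} (score {score:.2f})")
```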
Description: Synthesize anomalies, propaganda tactics, external extrapolations, suspect debunking, plausibly constructed evidence, uninvestigated important evidence, scrubbed information, lack of investigative reporting, reports of threats or coercion, trauma exploitation, controlled opposition, anomalous visual evidence, and crowdsourced visual evidence to generate novel hypotheses that explain the event or issue, connecting disparate data to propose plausible scenarios.
Tasks:
Combine findings from Steps 2 and 3 to identify patterns (e.g., anomalies like missing CCTV or scrubbed photos paired with tactics like lack of follow-up or trauma exploitation).
Incorporate potentially valid extrapolations by others (e.g., X users, whistleblowers) to inform hypothesis development.
Generate hypotheses that address anomalies, tactics, and motives, considering both mainstream and alternative explanations.
Rank hypotheses by plausibility (based on evidence strength) and testability (ability to verify via data or analysis).
Avoid speculative overreach by grounding hypotheses in falsifiable criteria, ensuring they can be tested with primary data.
Use clustering algorithms or generative AI techniques to connect disparate data points, drawing on historical precedents (e.g., MKUltra, Twitter Files, COINTELPRO) for context.
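As a concrete illustration of the clustering task above, the sketch below groups short anomaly notes by textual similarity. It assumes Python with scikit-learn; the anomaly notes and the number of clusters are hypothetical and would be chosen per case (e.g., via silhouette score or manual review).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical one-line anomaly notes gathered in the anomaly-scan step.
anomalies = [
    "CCTV footage from the main entrance never released",
    "FOIA response heavily redacted without stated exemption",
    "Witness account contradicts official timeline by 20 minutes",
    "Photos of the scene removed from the agency website",
    "Second witness timeline conflicts with police log",
    "Archived press page no longer resolves",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(anomalies)

n_clusters = 3  # illustrative choice
model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
labels = model.fit_predict(matrix)

for cluster in range(n_clusters):
    print(f"\nCluster {cluster}:")
    for note, label in zip(anomalies, labels):
        if label == cluster:
            print(f"  - {note}")
```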
Description: Identify and evaluate alternative theories, viewpoints, and extrapolations proposed by others (e.g., skeptics, X users, researchers), assessing their logical consistency, evidence grounding, and falsifiability to determine plausibility and inform hypotheses.
Tasks:
List alternative theories and extrapolations from diverse sources (e.g., X posts, Substack, academic papers, whistleblowers), including those dismissed as “conspiracy theories.”
Evaluate each theory or extrapolation for:
Logical consistency: Does the argument hold together internally?
Evidence grounding: Is it supported by primary data (e.g., FOIA, raw data, photos/videos) or circumstantial claims?
Falsifiability: Can it be tested or disproven with available or obtainable evidence?
Weight primary data (e.g., documents, witness accounts, visual evidence) over mainstream dismissals (e.g., “debunked” by Snopes) or suspect debunking sources, prioritizing uninvestigated evidence, scrubbed data, lack of reporting, threatened voices, trauma exploitation, controlled opposition, anomalous visual evidence, or crowdsourced visual claims highlighted in Step 2.
Avoid dismissing views due to “conspiracy theory” or “fringe” labels; demand evidence for such dismissals, marking sources as suspect if none provided.
Feed valid or partially valid extrapolations into Step 3.5 to refine hypotheses, ensuring they contribute to investigative depth.
Use expert credentials, primary data, visual forensic analysis, and logical analysis to assess plausibility, noting gaps or flaws in each view, particularly those potentially tainted by controlled opposition.
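One lightweight way to make the consistency, grounding, and falsifiability evaluation explicit is a simple scoring rubric. The sketch below is illustrative only; the weights and scores are hypothetical and should be set and justified by the analyst for each case, not generated by the LLM alone.

```python
from dataclasses import dataclass

@dataclass
class TheoryAssessment:
    name: str
    logical_consistency: float  # 0-1: does the argument hold together internally?
    evidence_grounding: float   # 0-1: support from primary data vs. circumstantial claims
    falsifiability: float       # 0-1: can it be tested with obtainable evidence?

    def plausibility(self, weights=(0.3, 0.5, 0.2)) -> float:
        """Weighted score; the weights are illustrative placeholders."""
        w_logic, w_evidence, w_falsify = weights
        return (w_logic * self.logical_consistency
                + w_evidence * self.evidence_grounding
                + w_falsify * self.falsifiability)

# Hypothetical assessments entered by the analyst.
theories = [
    TheoryAssessment("Official narrative", 0.8, 0.6, 0.7),
    TheoryAssessment("Cover-up hypothesis", 0.6, 0.4, 0.5),
]

for t in sorted(theories, key=lambda t: t.plausibility(), reverse=True):
    print(f"{t.name}: plausibility {t.plausibility():.2f}")
```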
Description: Hypothesize motives behind the official narrative, anomalies, propaganda tactics, suspect debunking, constructed evidence, lack of follow-up, scrubbed information, lack of investigative reporting, reports of threats or coercion, trauma exploitation, controlled opposition, anomalous visual evidence, and crowdsourced visual evidence, drawing on stakeholder interests and historical precedents to explain why manipulation may have occurred.
Tasks:
Identify potential motives for each stakeholder (e.g., government, media, advocacy groups), such as:
Power: Maintaining institutional credibility or authority.
Control: Shaping public perception or policy outcomes (e.g., gun control).
Profit: Financial gains (e.g., media clicks, advocacy funding).
Suppression: Silencing dissent to protect narratives or interests.
Hypothesize how anomalies (e.g., missing CCTV, scrubbed photos) and tactics (e.g., “conspiracy” labels, trauma exploitation, controlled opposition) serve these motives.
Incorporate valid extrapolations from Step 4 to refine motive hypotheses.
Cross-reference with historical precedents (e.g., MKUltra document destruction, Mockingbird’s media control, COINTELPRO’s infiltration, Hill & Knowlton’s incubator lie) to contextualize motives.
Use network analysis to map stakeholder connections (e.g., media funding, government ties) and infer intent (a minimal network sketch follows this task list).
Ensure hypotheses are testable, proposing methods to verify motives (e.g., funding records, FOIA, threat investigations, visual forensic analysis).
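The network-analysis task above can be prototyped with a small graph model. The sketch below assumes Python with the `networkx` library; the stakeholders and relationships are hypothetical placeholders.

```python
import networkx as nx  # third-party; pip install networkx

# Hypothetical stakeholder relationships (funding, oversight, ownership).
edges = [
    ("Foundation X", "Debunking Site", "funds"),
    ("Agency Y", "Investigating Committee", "appoints members"),
    ("Holding Co.", "Outlet A", "owns"),
    ("Holding Co.", "Outlet B", "owns"),
    ("Agency Y", "Outlet A", "press briefings"),
]

G = nx.Graph()
for source, target, relation in edges:
    G.add_edge(source, target, relation=relation)

# Degree centrality highlights entities connected to many others,
# which are candidates for closer scrutiny of motives and funding.
centrality = nx.degree_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{node}: {score:.2f}")
```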
Description: Propose actionable steps to further investigate the event, testing hypotheses, validating alternative views, scrutinizing debunking sources, assessing constructed evidence, pursuing uninvestigated evidence, recovering scrubbed data, analyzing reporting gaps, investigating threats or coercion, examining trauma exploitation, probing controlled opposition, analyzing anomalous visual evidence, and validating crowdsourced visual evidence to uncover truth and address gaps.
Tasks:
Suggest data sources for further investigation:
FOIA requests for documents, footage, or records (e.g., CSP files, mortgage records, photo archives).
Alternative platforms (X, Substack) for suppressed voices or claims.
Primary sources (e.g., raw data, academic journals, public records).
Archives (e.g., Wayback Machine, X backups) for scrubbed information.
Propose specific analyses to test hypotheses and extrapolations:
Statistical modeling (e.g., timeline inconsistencies).
Network mapping (e.g., funding ties for debunkers like Snopes).
Linguistic analysis (e.g., tracking “conspiracy theory” citations).
Forensic analysis (e.g., ballistics, autopsies, photo verification via photogrammetry, metadata analysis; a minimal photo-metadata sketch follows this list).
Financial audits (e.g., mortgage records).
Metadata analysis (e.g., scrubbed X posts).
Coverage analysis (e.g., NLP for reporting gaps).
Sentiment and threat detection (e.g., harassment patterns on X).
Emotional content analysis (e.g., NLP for trauma exploitation in media).
Affiliation analysis (e.g., funding or motives of suspected controlled opposition).
Visual anomaly analysis (e.g., computer vision for shell casings, photogrammetry for proportions).
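As one concrete example of the metadata analysis listed above, the sketch below reads EXIF tags from image files. It assumes Python with the Pillow library; the file names are hypothetical, and missing or altered tags are leads for forensic review rather than proof of tampering.

```python
from PIL import Image, ExifTags  # third-party; pip install Pillow

def read_exif(path: str) -> dict:
    """Return human-readable EXIF tags, or an empty dict if none are present."""
    with Image.open(path) as img:
        exif = img.getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Hypothetical image files obtained from official releases and leaks.
for path in ["official_scene_photo.jpg", "leaked_scene_photo.jpg"]:
    tags = read_exif(path)
    taken = tags.get("DateTime", "missing")
    software = tags.get("Software", "missing")
    print(f"{path}: DateTime={taken}, Software={software}")
    # Missing or edited timestamps are leads for expert review, not conclusions.
```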
Recommend experts to consult:
Independent researchers (e.g., ballistics experts, coroners).
Whistleblowers or primary witnesses (e.g., first responders, insiders).
Digital forensics experts (e.g., for scrubbed data).
Media analysts (e.g., for reporting gaps).
Security experts (e.g., for threat investigations).
Forensic photographers (e.g., for visual anomaly validation).
Suggest methods to investigate silencing, language manipulation, debunking provenance, constructed evidence, lack of follow-up, scrubbed information, reporting gaps, threats, trauma exploitation, controlled opposition, anomalous visual evidence, and crowdsourced visual evidence:
Track censorship (e.g., removed X posts, YouTube bans).
Analyze lawsuits or repercussions.
Scrutinize debunkers’ funding and affiliations.
Verify evidence independence (e.g., non-CSP coroners).
Pursue uninvestigated evidence via FOIA, audits, or crowdsourcing (e.g., CCTV, mortgage records, photo archives).
Recover scrubbed data via archives or metadata analysis.
Analyze media coverage for gaps (e.g., no New York Times CCTV probe; a coverage-tally sketch follows this task list).
Investigate threat reports via X scraping, whistleblower outreach, or legal records.
Examine media for trauma exploitation (e.g., excessive victim imagery).
Probe suspected controlled opposition (e.g., funding, affiliations, impact on counter-narratives).
Analyze visual evidence for inconsistencies (e.g., shell casings, proportions).
Validate crowdsourced visual claims via X scraping and expert analysis.
Crowdsource data via X or forums to gather suppressed accounts, videos, or claims.
Ensure proposed steps are feasible, specific, and tied to hypotheses or anomalies.
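The coverage-gap method above can be prototyped as a simple term tally across outlets. The sketch below uses plain Python; the outlet texts and evidence terms are hypothetical placeholders, and real use would require a proper article corpus and NLP rather than substring matching.

```python
# Hypothetical corpus: outlet name -> concatenated article text on the event.
coverage = {
    "Outlet A": "Authorities confirmed the timeline. No mention of surveillance footage.",
    "Outlet B": "Officials released a statement; the CCTV footage was not discussed.",
    "Outlet C": "Reporters asked why the CCTV footage and financial records remain sealed.",
}

# Evidence topics that normal investigative reporting would be expected to probe.
evidence_terms = ["cctv", "financial records", "autopsy", "foia"]

for outlet, text in coverage.items():
    lowered = text.lower()
    covered = [term for term in evidence_terms if term in lowered]
    missing = [term for term in evidence_terms if term not in lowered]
    print(f"{outlet}: covers {covered or 'none'}; never mentions {missing or 'nothing'}")
```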
Description: Generate a comprehensive report summarizing the investigation’s findings, highlighting anomalies, propaganda tactics, hypotheses, alternative views, motives, evidence gaps, and next steps, ensuring transparency and accessibility to resist censorship and engage the public.
Tasks:
Summarize the official narrative and key findings from each step:
Narrative: Core claims and stakeholders.
Anomalies: Inconsistencies, suppressed data, uninvestigated evidence, scrubbed information, reporting gaps, threats, trauma exploitation, controlled opposition, anomalous visual evidence, crowdsourced visual evidence.
Tactics: Identified propaganda strategies and vulnerabilities.
Hypotheses: Proposed explanations with plausibility and testability.
Views: Plausibility of alternative theories and extrapolations.
Motives: Hypothesized reasons for manipulation.
Next Steps: Actionable investigative recommendations.
Highlight suspect debunking, risks of constructed evidence, lack of follow-up, scrubbed information, lack of reporting, threats or coercion, trauma exploitation, controlled opposition, anomalous visual evidence, and crowdsourced visual evidence to emphasize manipulation.
Document evidence gaps (e.g., missing records, unverified photos) and confidence levels for each hypothesis or view (e.g., official narrative 80%, cover-up 50%).
Use clear, neutral language to counter Paleolithic vulnerabilities (e.g., authority bias, emotional priming) and ensure accessibility.
Provide a highlight statement summarizing key findings, anomalies, and their implications (e.g., “Snopes’ ‘debunked’ lacks evidence, with scrubbed X posts, trauma exploitation, and controlled opposition suggesting suppression”).
Share the report on open platforms (e.g., X, GitHub, Substack) to resist censorship and encourage public scrutiny, aligning with your open-platform goal.
Data Access: Prioritize raw, unfiltered sources (e.g., FOIA documents, leaks, public records, photos/videos) and suppressed voices (e.g., X posts, Substack) to capture underreported anomalies, extrapolations, scrubbed data, reporting gaps, threat reports, trauma exploitation, controlled opposition, and visual evidence, countering mainstream bias.
Algorithm Design: Train the LLM with:
Natural Language Processing (NLP) to detect manipulative language (e.g., “conspiracy theory” without citations), coordinated narrative patterns, and trauma-driven language (e.g., victim imagery); a minimal label-detection sketch follows this list.
Clustering algorithms to link disparate anomalies and extrapolations.
Generative AI for hypothesis modeling, proposing novel scenarios based on evidence.
Citation analysis to scrutinize debunking claims (e.g., Snopes’ “debunked”).
Funding and network analysis to assess debunkers’ provenance and controlled opposition.
Statistical outlier detection to flag uninvestigated evidence, scrubbed data, or reporting gaps.
Metadata analysis to trace scrubbed information (e.g., removed X posts).
Sentiment and threat detection to identify coercion reports (e.g., harassment patterns on X).
Emotional content analysis to detect trauma exploitation (e.g., excessive victim focus).
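As a minimal illustration of the label-detection idea above, the sketch below flags sentences that use dismissive labels without an obvious citation nearby. It uses plain Python regular expressions; the patterns are deliberately simplistic placeholders for a real NLP pipeline.

```python
import re

DISMISSIVE_LABELS = re.compile(r"\b(conspiracy theory|debunked|disproven)\b", re.IGNORECASE)
CITATION_HINTS = re.compile(r"(https?://|\[\d+\]|according to .+? report|court filing)", re.IGNORECASE)

def flag_unsupported_dismissals(text: str) -> list[str]:
    """Return sentences that use a dismissive label without an obvious citation nearby."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if DISMISSIVE_LABELS.search(sentence) and not CITATION_HINTS.search(sentence):
            flagged.append(sentence.strip())
    return flagged

# Hypothetical sample text.
sample = (
    "The claim was debunked long ago. "
    "A 2021 court filing (https://example.org/filing.pdf) contradicted two key details."
)
for sentence in flag_unsupported_dismissals(sample):
    print("Flag for scrutiny:", sentence)
```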
Computer Vision Analysis: Train with computer vision models to identify physical discrepancies in photos/videos (e.g., shell casings, proportions), integrate photogrammetry tools for spatial analysis (e.g., height vs. room dimensions), and flag visual anomalies for human review and forensic validation.
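A toy example of the visual-anomaly idea, assuming Python with OpenCV: the sketch below counts small circular objects (e.g., candidate shell casings) in a photograph via a Hough transform. The file name and all parameters are hypothetical, results depend heavily on image quality and tuning, and any output is only a lead for expert forensic review.

```python
import cv2  # third-party; pip install opencv-python

# Hypothetical scene photograph; parameters require per-image tuning.
image = cv2.imread("scene_photo.jpg", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise SystemExit("scene_photo.jpg not found")

blurred = cv2.medianBlur(image, 5)
circles = cv2.HoughCircles(
    blurred,
    cv2.HOUGH_GRADIENT,
    dp=1.2,          # accumulator resolution
    minDist=10,      # minimum distance between detected centers (pixels)
    param1=100,      # Canny edge threshold
    param2=30,       # accumulator threshold; lower finds more (and noisier) circles
    minRadius=2,
    maxRadius=15,
)

count = 0 if circles is None else circles.shape[1]
print(f"Detected roughly {count} small circular objects; compare against the official count.")
```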
Bias Mitigation: Use adversarial training to challenge mainstream narratives and labels like “conspiracy theory,” prioritizing primary data, alternative platforms, and visual evidence.
Resistance to Power: Develop open-source models to counter corporate or government suppression, ensuring access to censored platforms (e.g., X, per Twitter Files’ censorship patterns).
Public Trust: Engage communities (e.g., X users, Substack readers) to validate findings, countering authority bias and fostering collaborative truth-seeking.
Data Suppression: Classified or sealed records limit access. Solution: Automate FOIA requests, scrape X for leaks, and crowdsource via alternative platforms.
Corporate Pressure: Tech firms may censor content or pressure AI developers (e.g., YouTube’s 2017 algorithm demoting “conspiracy” content). Solution: Use decentralized, open-source platforms to share findings.
Public Resistance: Paleolithic vulnerabilities (e.g., authority bias, fear response, emotional priming) cause public rejection of findings challenging official narratives, as with Mark Crispin Miller’s vilification. Solution: Produce transparent, evidence-based reports with clear citations to build trust.
Overreach Risk: Speculative extrapolations may discredit valid skepticism, especially if fueled by controlled opposition. Solution: Enforce falsifiability checks and ground hypotheses in primary data to maintain credibility.
Real-Time Data Integration: Incorporate real-time X scraping to detect emerging anomalies, censored posts, manipulative labels, scrubbed information, reporting gaps, threat reports, trauma exploitation, controlled opposition, or visual anomalies, enhancing DeepSearch capabilities.
Advanced NLP and Computer Vision: Develop NLP algorithms to identify logical extrapolations (e.g., “cover-up” + “scrubbed photos”), assess source credibility (e.g., Snopes’ bias), detect threat patterns, and analyze trauma-driven language. Enhance computer vision to automate detection of visual anomalies (e.g., shell casings, proportions) and integrate photogrammetry for spatial analysis.
Collaborative Platforms: Partner with open-source communities to crowdsource suppressed data and visual evidence.
Testing and Refinement: Apply the framework to additional cases to refine detection of silencing, language manipulation, debunking bias, constructed evidence, follow-up gaps, scrubbed data, reporting failures, threats, trauma exploitation, controlled opposition, and visual anomalies, ensuring robustness across contexts.
Additional Tactic: Consider adding “coordinated media saturation” to complement the Trauma Exploitation tactic, capturing synchronized media campaigns that amplify narratives, as suggested.