Intelligence Grading: Why the Admiralty Code Matters
Matthew Wold · Sep 19 · 6 min read
In cyber threat intelligence, collecting information is rarely the hardest part. The real challenge is deciding what to trust. Analysts are constantly flooded with data points from threat feeds, malware reports, open sources, and human reporting. Some of it is solid and reliable, while other pieces are rumor, speculation, or even deliberate misinformation. Without a structured way to evaluate the quality of intelligence, teams risk acting on bad data or ignoring information that could have prevented an attack.
This is where the Admiralty Code comes in. Originally developed during World War II and later adopted by NATO, the system provides a clear framework for rating both the reliability of the source and the credibility of the information. By applying this method, analysts can separate what is trustworthy from what is questionable, giving decision-makers a more accurate picture of the threat landscape.

History of the Admiralty Code
The Admiralty Code was first developed by the British Royal Navy during World War II. At the time, Allied forces were dealing with a flood of intelligence from signals intercepts, spies, reconnaissance flights, and resistance networks across Europe. The challenge wasn’t just collecting information; it was figuring out what could be trusted, what was rumor, and what was deliberate deception.
The British Admiralty introduced a grading system that separated two critical dimensions:
How reliable is the source? (for example, a long-trusted agent versus an unproven informant)
How credible is the information? (for example, a report corroborated by multiple intercepts versus an unverified single claim)
This separation turned out to be essential. Reliable sources could still pass on bad information, and questionable sources could still provide intelligence that was later confirmed. By breaking evaluation into two parts, analysts had a structured way to communicate confidence.
The Admiralty Code quickly proved its value. It gave analysts, commanders, and policymakers a way to compare intelligence from different channels without relying only on gut instinct. After the war, NATO adopted and standardized the system, which is why it is sometimes called the NATO System for Evaluating Intelligence.
The Two Dimensions of Evaluation
The Admiralty Code rates intelligence on two separate scales: the reliability of the source and the credibility of the information. When combined, these form a two-part grade, such as B2 or F6.
Source Reliability (A–F)
A – Completely reliable
B – Usually reliable
C – Fairly reliable
D – Not usually reliable
E – Unreliable
F – Reliability cannot be judged
Information Credibility (1–6)
1 – Confirmed by other independent sources
2 – Probably true
3 – Possibly true
4 – Doubtful
5 – Improbable
6 – Truth cannot be judged

By combining these, an analyst can clearly state their level of confidence. For example, an insider report from a well-established source that matches technical evidence might be graded A1. A rumor from an unproven forum account with no corroboration might be logged as F6.
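To make the notation concrete, here is a minimal Python sketch of how the two scales and a combined grade might be represented. The class and member names are illustrative choices for this article, not part of any standard or existing library.

```python
from dataclasses import dataclass
from enum import Enum


class SourceReliability(Enum):
    """Source reliability scale (A-F)."""
    A = "Completely reliable"
    B = "Usually reliable"
    C = "Fairly reliable"
    D = "Not usually reliable"
    E = "Unreliable"
    F = "Reliability cannot be judged"


class InformationCredibility(Enum):
    """Information credibility scale (1-6)."""
    CONFIRMED = 1          # Confirmed by other independent sources
    PROBABLY_TRUE = 2
    POSSIBLY_TRUE = 3
    DOUBTFUL = 4
    IMPROBABLE = 5
    CANNOT_BE_JUDGED = 6


@dataclass
class AdmiraltyGrade:
    """A combined two-part grade such as B2 or F6."""
    reliability: SourceReliability
    credibility: InformationCredibility

    def __str__(self) -> str:
        # e.g. SourceReliability.A + InformationCredibility.CONFIRMED -> "A1"
        return f"{self.reliability.name}{self.credibility.value}"


# An insider report from a well-established source, matched by technical evidence
print(AdmiraltyGrade(SourceReliability.A, InformationCredibility.CONFIRMED))         # A1
# A rumor from an unproven forum account with no corroboration
print(AdmiraltyGrade(SourceReliability.F, InformationCredibility.CANNOT_BE_JUDGED))  # F6
```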
Applying the Admiralty Code in Cyber Threat Intelligence
In modern CTI work, the Admiralty Code is used to bring structure to the flood of threat data. Every report, IOC, or observation can be graded to show how much weight it should carry in decision-making.
Example 1: IOC from a trusted feed
A domain indicator comes from a commercial vendor with a strong track record. Other telemetry confirms it is linked to malware activity. This might be rated B1 (usually reliable source, confirmed information).
Example 2: OSINT report
A researcher on Twitter claims a ransomware group is shifting to new infrastructure, but provides no technical details. Their past reporting has sometimes been accurate, sometimes not. That might be logged as C3 (fairly reliable source, possibly true information).
Example 3: Dark web forum post
An anonymous user with no history posts a vague warning about an upcoming attack. No corroboration exists. This would be graded F6 (reliability and truth cannot be judged).
Using this method, analysts avoid treating all intelligence as equal. High-confidence reports can drive immediate action, while low-confidence ones are noted but set aside until further evidence emerges.
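As a rough illustration of that triage step, the sketch below tags the three example observations with their grades and applies an assumed "actionable" rule (A or B source with credibility 1 or 2). The threshold is a policy choice made up for this example; the Admiralty Code itself does not prescribe one.

```python
# Hypothetical observations mirroring the three examples above, tagged with
# Admiralty grades (letter = source reliability, digit = information credibility).
observations = [
    ("Domain IOC from commercial vendor, confirmed by telemetry", "B1"),
    ("Twitter claim of ransomware infrastructure shift, no details", "C3"),
    ("Anonymous dark web warning of an upcoming attack", "F6"),
]


def is_actionable(grade: str) -> bool:
    """Assumed triage rule: act on A/B sources with credibility 1 or 2."""
    reliability, credibility = grade[0], int(grade[1:])
    return reliability in ("A", "B") and credibility <= 2


for summary, grade in observations:
    status = "act now" if is_actionable(grade) else "hold for corroboration"
    print(f"{grade}  {summary} -> {status}")
```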
Benefits of Applying the Admiralty Code
Applying the Admiralty Code to cyber threat intelligence offers several advantages:
Standardized language – Analysts can use the same grading system, which reduces confusion and improves collaboration across teams.
Bias reduction – Separating source reliability from information credibility helps avoid assumptions, such as trusting everything from a known source or dismissing untested ones too quickly.
Transparency – Decision-makers can see not just the intelligence but also how confident analysts are in it.
Improved sharing – Many organizations and partners already understand the Admiralty Code, making it easier to share intelligence in a consistent way.
Prioritization – High-confidence intelligence can be acted on immediately, while low-confidence reports are tracked until corroboration emerges.
Overall, the system improves trust, clarity, and communication in the intelligence cycle.
Limitations and Challenges
The Admiralty Code is a useful tool, but it does come with challenges:
Subjectivity – Ratings depend on analyst judgment, which can vary between individuals and organizations.
Risk of inflation – Analysts may overrate sources or information to push urgency, reducing the system’s value over time.
Context limitations – The model does not capture details like source motivation, timeliness, or bias.
Time-sensitive intelligence – In fast-moving situations, information may be graded before corroboration is possible, which can cause later rework.
Training requirement – Analysts need to apply the system consistently. Without practice and oversight, ratings can become uneven.
Despite these limitations, the Admiralty Code remains one of the simplest and most effective frameworks for evaluating intelligence and communicating confidence in it.

Real-World Example in Cybersecurity
Consider a phishing campaign targeting a regional healthcare provider:
Initial report – An anonymous tip on a security forum claims that a phishing campaign will target healthcare organizations in the Midwest. No indicators are provided, and the source has no posting history. This is logged as F6.
Supporting evidence – A few days later, the SOC receives multiple suspicious emails with subject lines referencing patient portals. The sending infrastructure overlaps with domains tied to previous phishing activity. This information is rated B2 (usually reliable feed, probably true).
Confirmation – Endpoint logs confirm that a user clicked one of the phishing emails and connected to the reported domain, which is later identified as hosting a credential-harvesting kit. This is graded A1 (completely reliable source, confirmed by other sources).
By tracking the same campaign across multiple stages and applying the Admiralty Code consistently, the organization can see how confidence grows over time. What began as an unverified rumor matured into confirmed intelligence that directly supported incident response.
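One lightweight way to capture that progression is to keep a grade history alongside each report, so earlier assessments and their reasoning stay visible. The sketch below is illustrative only; the field names and the regrade helper are assumptions for this article, not an established tool.

```python
from dataclasses import dataclass, field


@dataclass
class GradedReport:
    """An intelligence report whose Admiralty grade is revisited over time."""
    summary: str
    grade: str                                   # current grade, e.g. "F6"
    rationale: str                               # why the current grade was assigned
    history: list[tuple[str, str]] = field(default_factory=list)

    def regrade(self, new_grade: str, rationale: str) -> None:
        """Record a new grade while preserving the previous one for audit."""
        self.history.append((self.grade, self.rationale))
        self.grade = new_grade
        self.rationale = rationale


report = GradedReport(
    summary="Phishing campaign targeting Midwest healthcare",
    grade="F6",
    rationale="Anonymous forum tip, no posting history, no indicators",
)
report.regrade("B2", "SOC emails overlap with known phishing infrastructure")
report.regrade("A1", "Endpoint logs confirm connection to credential-harvesting kit")

print(report.grade)    # A1
print(report.history)  # earlier grades and their reasoning preserved
```

Keeping prior grades and rationale on record also supports the documentation and review practices described next.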
Best Practices for Practitioners
To get the most value out of the Admiralty Code, analysts and teams should:
Be conservative with ratings – Avoid inflating confidence without evidence. It is better to underrate and upgrade later than to overrate and lose credibility.
Always separate source and information – A reliable source can still report inaccurate details, and an unreliable source can sometimes be right.
Document reasoning – Record why a piece of intelligence was given a certain grade so others can understand the decision.
Revisit older reports – As new information surfaces, low-confidence reports should be reviewed and re-graded when appropriate.
Train consistently – Teams should practice applying the Admiralty Code to the same scenarios to ensure consistency across analysts.
One practical way to build skill is to grade open-source material such as news stories. Analysts can treat the news outlet as the source and the reported claim as the information, then assign a grade based on reliability and credibility. This exercise develops consistency and helps reinforce the habit of separating source reliability from information credibility.
Following these practices helps organizations maintain accuracy, transparency, and consistency in how they evaluate intelligence.
Conclusion
The Admiralty Code has stood the test of time because it solves a problem that remains just as relevant today as it was in World War II: how to decide what intelligence to trust. By separating source reliability from information credibility, it provides analysts with a structured way to communicate confidence and decision-makers with a clearer picture of risk.
In cyber threat intelligence, where data volume is overwhelming and misinformation is common, this framework helps ensure that action is based on the best possible information. It is not perfect, but when applied consistently, the Admiralty Code improves clarity, reduces bias, and builds trust in the intelligence process.


