True Positives

True Positives Definition
A true positive (TP) is a result in which a system correctly identifies something that actually exists. In cybersecurity, that means detecting a real cyber threat, such as malware or unauthorized access. In machine learning, it means an algorithm correctly flags a positive case. Counting true positives is one way to measure how well a system or model recognizes what it is designed to catch.
How True Positives Work
A result counts as a true positive when a system flags something as positive and the outcome confirms it. How often this happens is measured with the True Positive Rate (TPR), also called recall in machine learning: the proportion of actual positive cases that the system correctly identifies.
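As a rough illustration, here is a minimal Python sketch of the TPR calculation described above. The counts are made-up example values, not figures from any real system.

```python
# True Positive Rate (recall) = TP / (TP + FN)
# The counts below are hypothetical, for illustration only.
true_positives = 90    # real threats the system correctly flagged
false_negatives = 10   # real threats the system missed

tpr = true_positives / (true_positives + false_negatives)
print(f"True Positive Rate (recall): {tpr:.2f}")  # prints 0.90
```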
However, true positives don’t give the full picture. A system may also generate false positives and false negatives, which affect overall accuracy. To fully evaluate performance, all of these outcomes need to be considered together.
Examples of True Positives
- Spam filters: An unwanted email is flagged as spam, and it turns out to be spam.
- Intrusion detection: A cybersecurity tool raises an alert for unauthorized access, and an investigation confirms the attack.
- Antivirus detection: Security software flags a file as malware, and analysis shows it’s malicious.
- Machine learning results: An algorithm predicts fraud in a dataset, and the prediction is confirmed.
- Vulnerability testing: A security test identifies a system weakness, and further checks confirm it exists.
True Positives vs Other Outcomes
| Outcome | What It Means | Example |
| --- | --- | --- |
| True positive | A positive case is correctly flagged | A spam email is marked as spam |
| True negative | A negative case is correctly ignored | A safe email stays in the inbox |
| False positive | A negative case is wrongly flagged | A safe email is marked as spam |
| False negative | A positive case is missed | A spam email reaches the inbox |
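To make the four outcomes concrete, here is a short Python sketch that labels each prediction against the ground truth, mirroring the table above. The email data and the classify_outcome helper are hypothetical, invented for illustration.

```python
# Label each (predicted, actual) pair with one of the four outcomes from the table.
# The data and the helper function are hypothetical, not a real spam filter.

def classify_outcome(predicted_spam: bool, actually_spam: bool) -> str:
    if predicted_spam and actually_spam:
        return "true positive"    # spam correctly marked as spam
    if not predicted_spam and not actually_spam:
        return "true negative"    # safe email correctly left in the inbox
    if predicted_spam and not actually_spam:
        return "false positive"   # safe email wrongly marked as spam
    return "false negative"       # spam that reached the inbox

emails = [
    ("offer.eml", True, True),      # (filename, predicted_spam, actually_spam)
    ("invoice.eml", False, False),
    ("newsletter.eml", True, False),
    ("phishing.eml", False, True),
]

for name, predicted, actual in emails:
    print(f"{name}: {classify_outcome(predicted, actual)}")
```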
FAQ
Why measure true positives?
Measuring true positives shows how well a system detects what it’s supposed to catch. This helps compare different tools, fine-tune settings, and balance accuracy with efficiency.
Is a high number of true positives always a good thing?
A high number of true positives is generally a sign that a system is working well. However, if each true positive requires an alert or a manual review, the volume can overwhelm both the system and the people managing it. This can slow down responses and increase overall workload.
Can a system have only true positives and no false positives?
True positives and false positives usually appear together because both reflect how a system handles predictions. A system with only true positives and no false positives would never misclassify a negative case as positive, which is rare in practice. Most real systems show a balance between the two.
How do true positives relate to precision and recall?
Recall measures how many actual positives a system successfully detects, while precision shows how many of the detected positives are correct. True positives play a key role in both. Recall increases when more true positives are found relative to missed positives. Precision rises when true positives make up a larger share of all detected cases. Together, these measures show how reliable a model is in practice.
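As a sketch of these two measures, assuming hypothetical outcome counts (not taken from any real system):

```python
# Precision and recall from hypothetical outcome counts (illustration only).
true_positives = 80   # detections that were confirmed
false_positives = 20  # detections that turned out to be harmless
false_negatives = 10  # real positives the system missed

# Precision: share of detections that were correct
precision = true_positives / (true_positives + false_positives)

# Recall (True Positive Rate): share of actual positives that were detected
recall = true_positives / (true_positives + false_negatives)

print(f"Precision: {precision:.2f}")  # 0.80
print(f"Recall:    {recall:.2f}")     # 0.89
```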