True Positives

True Positives Definition

A true positive (TP) is when a system correctly identifies something that actually exists. In cybersecurity, this means detecting a real cyberthreat, such as malware or unauthorized access. In machine learning, it’s when an algorithm correctly flags a positive case. True positives are used to measure how well systems or models perform in recognizing what they’re designed to catch.

How True Positives Work

A result is a true positive when a system signals something as positive and the outcome confirms it. The frequency of true positives is often measured using the True Positive Rate (TPR), also called recall in machine learning. TPR is the share of actual positive cases the system correctly identifies: TP / (TP + FN), where FN is the number of false negatives (positives the system missed).
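As an illustration, the TPR calculation above can be sketched in a few lines of Python (the detector counts used here are hypothetical):

```python
def true_positive_rate(tp: int, fn: int) -> float:
    """TPR (recall): share of actual positives that were correctly flagged."""
    return tp / (tp + fn)

# Hypothetical detector: 90 real threats caught, 10 missed.
print(true_positive_rate(90, 10))  # → 0.9
```

A TPR of 0.9 means the system caught 90% of the real threats it encountered.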

However, true positives don’t give the full picture. A system may also generate false positives and false negatives, which affect overall accuracy. To fully evaluate performance, all of these outcomes need to be considered together.

Examples of True Positives

An antivirus program flags a file that really contains malware. A spam filter marks a genuine spam email as spam. A fraud-detection model flags a transaction that is in fact fraudulent. In each case, the system raises a positive signal, and the underlying event really is positive.

True Positives vs Other Outcomes

| Outcome | What it means | Example |
| --- | --- | --- |
| True positive | A positive case is correctly flagged | A spam email is marked as spam |
| True negative | A negative case is correctly ignored | A safe email stays in the inbox |
| False positive | A negative case is wrongly flagged | A safe email is marked as spam |
| False negative | A positive case is missed | A spam email reaches the inbox |
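The four outcomes above can be tallied directly from paired actual/predicted labels. A minimal Python sketch, using a hypothetical spam filter's labels (True = spam):

```python
def confusion_counts(actual, predicted):
    """Tally the four outcomes from paired actual/predicted labels (True = positive)."""
    tp = sum(a and p for a, p in zip(actual, predicted))          # true positives
    tn = sum(not a and not p for a, p in zip(actual, predicted))  # true negatives
    fp = sum(not a and p for a, p in zip(actual, predicted))      # false positives
    fn = sum(a and not p for a, p in zip(actual, predicted))      # false negatives
    return tp, tn, fp, fn

# Hypothetical run of five emails through a spam filter.
actual    = [True, True, False, False, True]
predicted = [True, False, False, True, True]
print(confusion_counts(actual, predicted))  # → (2, 1, 1, 1)
```

The four counts always sum to the total number of cases, which is a useful sanity check when evaluating a classifier.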


FAQ

Why is it important to measure true positives?

Measuring true positives shows how well a system detects what it's supposed to catch. This helps compare different tools, fine-tune settings, and balance accuracy with efficiency.

Is a high number of true positives always a good thing?

A high number of true positives is generally a sign that a system is working well. However, if each true positive requires an alert or a manual review, the volume can overwhelm both the system and the people managing it. This can slow down responses and increase overall workload.

Why do true positives and false positives often appear together?

True positives and false positives usually appear together because both reflect how a system handles predictions. A system with only true positives and no false positives would never misclassify negative cases as positive, which is rare in practice. Most real systems show a balance between the two.

How do true positives relate to recall and precision?

Recall measures how many actual positives a system successfully detects, while precision shows how many of the detected positives are correct. True positives play a key role in both. Recall increases when more true positives are found relative to missed positives. Precision rises when true positives make up a larger share of all detected cases. Together, these measures show how reliable a model is in practice.
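A minimal sketch of how recall and precision follow from these counts (the scan numbers below are hypothetical):

```python
def recall(tp: int, fn: int) -> float:
    """Share of actual positives that were detected."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Share of detected positives that were correct."""
    return tp / (tp + fp)

# Hypothetical scan: 80 threats caught, 20 missed, 40 safe files wrongly flagged.
print(recall(80, 20))     # prints 0.8
print(precision(80, 40))  # prints roughly 0.667
```

Note how the same 80 true positives feed both metrics: false negatives pull down recall, while false positives pull down precision.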
