AI Visual Inspection Accuracy: Detection Rates, False Positives and What Affects Performance

What Is Defect Detection Accuracy and Why Does It Matter?

When manufacturers evaluate AI visual inspection systems, the first number they ask about is defect detection accuracy. But accuracy is a composite metric that hides important trade-offs. Understanding its components, and how to set targets for each in your application, is the difference between a system that works and one that creates more problems than it solves.

There are two types of errors in any inspection system. A false negative is when a defective part passes inspection — the error that matters most in quality-critical applications. A false positive is when a good part is rejected — the error that matters most for production efficiency. Optimising for one always comes at some cost to the other. The right balance depends on your product, your industry, and the consequences of each error type.

Key Accuracy Metrics Explained

Detection Rate (Sensitivity / Recall)

Detection rate is the percentage of actually defective parts that the system correctly identifies as defective. A detection rate of 99% means that for every 100 defective parts presented to the system, 99 are correctly flagged and 1 passes through. For pharmaceutical and automotive safety-critical applications, detection rates of 99.5% or higher are typically required. For cosmetic defects in non-safety applications, 95-98% may be acceptable.
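As a rough illustration, the detection rate is simply the fraction of truly defective parts that are flagged. The counts below are hypothetical, not figures from any deployment:

```python
def detection_rate(true_positives: int, false_negatives: int) -> float:
    """Fraction of actually defective parts correctly flagged as defective.

    true_positives: defective parts the system flagged
    false_negatives: defective parts that passed through undetected
    """
    return true_positives / (true_positives + false_negatives)

# 99 of 100 defective parts flagged -> 99% detection rate
rate = detection_rate(true_positives=99, false_negatives=1)
print(f"Detection rate: {rate:.1%}")  # Detection rate: 99.0%
```

Note that this metric says nothing about good parts; a system could score 100% here simply by rejecting everything, which is why it must always be read alongside the false positive rate.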

False Positive Rate (False Alarm Rate)

The false positive rate is the percentage of good parts incorrectly rejected. A false positive rate of 1% on a line running 10,000 parts per shift means 100 good parts are rejected every shift — parts that must be manually reviewed and re-sorted, adding cost and slowing output. High false positive rates also erode operator trust in the system, leading to inspection bypasses. Target false positive rates are typically 0.1% to 2% depending on the value of the product and the cost of false rejection.
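The per-shift impact of false alarms follows directly from the rate. A minimal sketch, using the illustrative figures from the paragraph above:

```python
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Fraction of good parts incorrectly rejected.

    false_positives: good parts the system rejected
    true_negatives: good parts the system correctly passed
    """
    return false_positives / (false_positives + true_negatives)

# A 1% false positive rate on 10,000 good parts per shift means roughly
# 100 good parts diverted to manual review every shift.
good_parts_per_shift = 10_000
fpr = 0.01
expected_rejects = good_parts_per_shift * fpr
print(expected_rejects)  # 100.0
```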

Overall Equipment Effectiveness (OEE) Impact

Beyond detection and false positive rates, the right metric for AI visual inspection is its impact on your overall equipment effectiveness. A system that detects 99% of defects but generates enough false positives to slow the line has a net negative impact on OEE. The business case for AI inspection must account for both the reduction in defective product reaching customers and the operational cost of false rejections.
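One way to weigh both error types together is a simple per-shift cost model. This is a hedged sketch with made-up cost figures, not a substitute for a full OEE analysis; in practice the cost of an escaped defect and of a false rejection are plant-specific inputs:

```python
def net_inspection_cost(parts_per_shift: float, defect_rate: float,
                        detection_rate: float, false_positive_rate: float,
                        cost_per_escape: float, cost_per_false_reject: float) -> float:
    """Expected per-shift cost of inspection errors.

    Escaped defects (missed by inspection) and falsely rejected good parts
    each carry a cost; the total shows the trade-off between the two.
    """
    defective = parts_per_shift * defect_rate
    good = parts_per_shift - defective
    escapes = defective * (1 - detection_rate)          # defects that pass
    false_rejects = good * false_positive_rate          # good parts rejected
    return escapes * cost_per_escape + false_rejects * cost_per_false_reject

# Illustrative: 10,000 parts/shift, 1% defect rate, 99% detection,
# 1% false positives, escapes cost 500, false rejects cost 5 each.
cost = net_inspection_cost(10_000, 0.01, 0.99, 0.01, 500, 5)
```

Re-running the model at different detection / false positive operating points makes the trade-off in the paragraph above concrete: pushing detection higher usually raises the false positive term, and the optimum depends entirely on the two cost inputs.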

What Affects DeepVision Accuracy?

Training Data Quality and Quantity

The quality and representativeness of training images is the single biggest driver of DeepVision accuracy. Images should cover the full range of normal product variation — all lighting conditions, all acceptable surface finishes, all positional variation within the fixture. Defect images should include examples of each defect type you need to detect, ideally captured from actual production rather than artificially induced defects.

More images generally produce better models, but there are diminishing returns. For most applications, 300-500 good images and 50-150 defect images per defect type produce a model within 2-3% of the performance achievable with unlimited data. Beyond this, the priority shifts to ensuring coverage of rare but important defect types rather than simply adding more images of common defects.

Lighting Consistency

Consistent, well-designed lighting is the most controllable hardware factor affecting accuracy. Diffuse, even illumination that eliminates specular reflections and shadows reduces the image-to-image variation that the model must cope with. Raking illumination (directional light cast across the surface at a low angle) is the best choice for revealing surface topology defects such as dents, scratches, and raised contamination. The Indus Vision application team designs lighting specifically for each application.

Camera Resolution and Optics

Resolution determines the smallest defect that can be physically detected. A camera with insufficient resolution to image a defect cannot detect it regardless of the AI model. As a rule of thumb, the smallest defect of interest should occupy at least 5×5 pixels in the image. Indus Vision’s application engineers will calculate the required field of view and minimum camera resolution based on your inspection requirements.
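The 5×5-pixel rule of thumb above translates directly into a minimum sensor resolution once the field of view and smallest defect size are known. A minimal sketch (the example figures are hypothetical, not from a specific Indus Vision sizing):

```python
import math

def min_camera_pixels(field_of_view_mm: float, min_defect_mm: float,
                      pixels_on_defect: int = 5) -> int:
    """Minimum sensor pixels along one axis so the smallest defect of
    interest spans at least `pixels_on_defect` pixels (5x5 rule of thumb)."""
    pixels_per_mm = pixels_on_defect / min_defect_mm
    return math.ceil(field_of_view_mm * pixels_per_mm)

# 200 mm field of view, 0.2 mm smallest defect ->
# at least 5000 pixels across, i.e. a 5 MP-class sensor or better per axis
print(min_camera_pixels(200, 0.2))  # 5000
```

The same calculation applies per axis; lens quality, working distance, and motion blur then determine whether that theoretical resolution is actually achieved at the sensor.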

Typical DeepVision Accuracy Benchmarks

Application | Typical Detection Rate | Typical False Positive Rate
Surface scratch and dent inspection | 97–99% | 0.5–2%
Solder joint inspection (PCB) | 98–99.5% | 0.3–1%
Label and print verification | 99–99.9% | 0.1–0.5%
Component presence verification | 99–99.9% | 0.1–0.5%
Pharmaceutical tablet inspection | 99–99.8% | 0.2–1%
Weld quality inspection | 96–99% | 0.5–2%

These ranges reflect real production deployments. Actual performance depends on the specific defect types, product complexity, and image quality achieved in your installation. Indus Vision provides validated accuracy data from acceptance testing before any system goes live.

To get an accuracy assessment for your specific application, contact Indus Vision for a free technical consultation.
