Accuracy for Intelligent Capture | Knowledge Base | Definition
What Does “Accuracy” Mean, and How Is Its Value Measured?
In intelligent capture, “accuracy” is the subject of considerable confusion, and little effort is made to define it clearly. Accuracy can mean many different things depending on what is being measured and what matters most to your organization. Just as many people confuse OCR with intelligent capture, system accuracy is a topic that requires closer investigation.
At a very crude level, OCR systems can achieve 98–99% accuracy at the character or word level. That is fine if your objective is to convert scanned documents so they can be easily searched or edited. But 99% character- or word-level accuracy means little for intelligent capture, whose key value proposition is reducing manual data entry for structured information derived from documents. A system must deliver the largest possible amount of usable structured data from your documents at the highest possible levels of accuracy. Two measurements matter here: (1) the amount of data a system can produce and (2) the accuracy of that data.
Without both measurements, a vendor can claim 99% accuracy and be completely truthful, yet you may get only 5% of your data automated. The “Mars shot” of intelligent capture is 100% data extraction at 100% accuracy. We probably won’t get there in the next several decades, but those are the measurements against which all systems should be compared, with the objective of pushing both numbers as high as possible.
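The interaction of the two measurements can be sketched with a simple multiplicative model. This is an illustration only, not a formula from the text: the function name and the sample figures (other than the 99%/5% vendor-claim scenario above) are assumptions chosen to make the point.

```python
# Illustrative sketch: the share of fields you can trust without manual
# data entry is the product of how much data the system extracts and
# how accurate that extracted data is.

def effective_automation(extraction_rate: float, accuracy: float) -> float:
    """Fraction of all fields that are both extracted and correct."""
    return extraction_rate * accuracy

# The vendor-claim scenario from the text: 99% accuracy, but only 5% of
# the data extracted -- roughly 5% of fields actually automated.
low_coverage = effective_automation(0.05, 0.99)   # ~0.0495

# A hypothetical system with a lower headline accuracy but far broader
# coverage automates far more of the work overall.
high_coverage = effective_automation(0.85, 0.95)  # ~0.8075

print(f"{low_coverage:.1%} vs {high_coverage:.1%}")
```

The point of the comparison: judged on headline accuracy alone, the first system looks better, but the second automates roughly sixteen times more of the actual data-entry work.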