Handwriting Recognition

What Is Handwriting Recognition, and How Is It Different from OCR?

OCR works on machine-printed fonts, while handwriting recognition has to rely on other mechanisms. The reason is that while there is a finite number of fonts (for example, the fonts available in Microsoft Word), every person who writes produces their own “font,” or writing style. OCR is trained at the individual character level to recognize fonts and font sizes, and can then convert scanned characters into computer text.
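
For illustration, here is a minimal sketch of conventional OCR on machine print using the open-source Tesseract engine via pytesseract. The choice of engine and the file name are assumptions for the example; this article does not prescribe a specific tool.

from PIL import Image
import pytesseract

# "invoice.png" is a hypothetical scan of a machine-printed page.
page = Image.open("invoice.png")

# Tesseract matches each printed character against its trained font models
# and returns the page as plain computer text.
text = pytesseract.image_to_string(page)
print(text)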

Handwriting recognition also analyzes characters and words, but it must implement different algorithms to find the “best match” against an inventory of letters. It has to accommodate a far wider range of variation in letters and words than conventional OCR ever encounters. As a result, handwriting recognition combines computer vision with deep learning to build abstract models of letters and words (much as humans do) and uses them to reliably resolve handwritten text.
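
To make the idea concrete, the following is a minimal PyTorch sketch of one common deep learning architecture for handwriting recognition: a convolutional feature extractor reads a line image, a recurrent layer models the left-to-right character sequence, and CTC loss lets the network learn transcriptions without per-character alignment. The image size, alphabet, and layer widths are illustrative assumptions, not a description of any particular product.

import torch
import torch.nn as nn

ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # assumed character set for illustration
NUM_CLASSES = len(ALPHABET) + 1            # +1 for the CTC blank symbol

class HandwritingRecognizer(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor: turns a 1 x 32 x W line image
        # into a grid of visual features.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 x 16 x W/2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 x 8 x W/4
        )
        # Bidirectional LSTM models the character sequence along the width axis.
        self.rnn = nn.LSTM(input_size=64 * 8, hidden_size=128,
                           bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * 128, NUM_CLASSES)

    def forward(self, images):                       # images: (batch, 1, 32, W)
        features = self.cnn(images)                  # (batch, 64, 8, W/4)
        b, c, h, w = features.shape
        seq = features.permute(0, 3, 1, 2).reshape(b, w, c * h)  # one feature vector per column
        out, _ = self.rnn(seq)
        return self.classifier(out).log_softmax(-1)  # per-timestep character probabilities

# Training would pair this with nn.CTCLoss, which aligns the predicted sequence
# to the ground-truth transcription automatically.
model = HandwritingRecognizer()
dummy = torch.randn(2, 1, 32, 128)                   # two fake 32x128 line images
print(model(dummy).shape)                            # torch.Size([2, 32, 28])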

With deep learning, handwriting recognition performance has come a long way in a short amount of time. While older forms of handwriting recognition required a lot of help in the form of dictionaries and other context, deep learning-based recognition can transcribe a full page of text without any such help, and do so reliably. Still, the wide variation in handwriting styles means that the performance of handwriting recognition remains lower than that of OCR on machine print. Field-level handwriting recognition (e.g., on forms) achieves 80-95% automation at 98-99% accuracy, compared to 95-98% automation rates for OCR on machine print. Page-level transcription of handwriting, available only recently, can achieve transcription rates of around 90% at 99% accuracy, compared to 98% transcription rates for OCR on machine print.
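
As a rough illustration of how automation and accuracy figures like these are measured, the toy example below (with entirely hypothetical confidence scores) auto-accepts fields whose recognition confidence clears a threshold, routes the rest to human review, and scores accuracy only on the auto-accepted set. Raising the threshold trades automation for accuracy, which is why results are quoted as pairs such as 80-95% automation at 98-99% accuracy.

def automation_and_accuracy(results, threshold):
    """results: list of (confidence, was_correct) pairs, one per recognized field."""
    accepted = [correct for conf, correct in results if conf >= threshold]
    automation = len(accepted) / len(results)            # share of fields handled without review
    accuracy = sum(accepted) / len(accepted) if accepted else 0.0
    return automation, accuracy

# Ten hypothetical fields with model confidences and correctness flags.
fields = [(0.99, True), (0.97, True), (0.95, True), (0.92, True), (0.90, False),
          (0.88, True), (0.85, True), (0.70, False), (0.60, True), (0.40, False)]
auto, acc = automation_and_accuracy(fields, threshold=0.85)
print(f"automation: {auto:.0%}, accuracy on automated fields: {acc:.0%}")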