Rational Health Service

Editorial Type: Feature     Date: 09-2010    Views: 2925   





OITUK's Dr. Vijay Magon describes how forms recognition technologies can assist healthcare bodies in managing legacy medical records

The majority of document management solutions in use at NHS sites in the UK provide facilities for capturing, managing, and delivering electronic patient records. A key requirement at most sites is to capture the legacy paper records - records which have typically been collated and managed over the years with few, if any, guidelines on how to manage them. There is huge variety in the way Trusts file paper records, ranging from random storage within casenote folders (the worst case) to organised filing within tabs or sections held in casenotes. Consequently, the high investment required to sort, prepare, and digitise such records for use by practitioners is hard to justify. As a result, scanning processes are put in place to digitise the paper casenotes using the quickest and cheapest option - i.e. scan the casenotes as they are found! It is worth stating at the outset that new (or ongoing) records captured within document management systems, and information created within such systems, will not fall into the same trap - classification of new records is much more granular and, furthermore, automated to a large degree. Consequently, access to and use of these records within an electronic system is more acceptable to, and welcomed by, practitioners.

A common driver for digitising paper records is relieving pressure on storage space. The cost models for this type of project are based on scanning casenotes as they are found, and have not changed over time. Consequently, given the poor and variable paper filing practices, digitised casenotes actually add little value in delivering the electronic patient record and do not adequately compensate for the loss of the universal convenience of paper! While clever facilities within the viewing software help users to navigate through the electronic patient record, these are far from an ideal solution and, at worst, lead to "IT failures" due to poor user acceptance. Given that the time-consuming and costly processes necessary to sort, prepare, and in many cases re-structure existing casenotes cannot be justified, what are the alternatives?

RECOGNITION & CLASSIFICATION
Paper casenotes are scanned to generate electronic images which can be viewed as pages. A scanned image is little more than a collection (or pattern) of dots called pixels. Pixel patterns create shapes, letters, words, etc. - computers do not understand these patterns. Recognition technologies such as Optical Character Recognition (OCR) can reverse-engineer such patterns into electronic text recognisable by computers.
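The idea can be illustrated with a deliberately tiny sketch (not a real OCR engine): each known character is stored as a pixel template, and a scanned glyph is "recognised" by finding the template whose pixel pattern it most closely matches. The three-by-five glyph bitmaps below are invented purely for illustration.

```python
# Toy template-matching "OCR": each letter is a small bitmap of pixels
# ('1' = ink, '0' = blank); a scan is matched to the nearest template.
TEMPLATES = {
    "I": ["111", "010", "010", "010", "111"],
    "L": ["100", "100", "100", "100", "111"],
    "T": ["111", "010", "010", "010", "010"],
}

def pixel_distance(a, b):
    """Count the pixels that differ between two same-sized bitmaps."""
    return sum(pa != pb
               for row_a, row_b in zip(a, b)
               for pa, pb in zip(row_a, row_b))

def recognise(bitmap):
    """Return the letter whose template is closest to the scanned pattern."""
    return min(TEMPLATES, key=lambda ch: pixel_distance(TEMPLATES[ch], bitmap))

# A slightly noisy scan of an "L" - one pixel flipped by scanner noise.
noisy_L = ["100", "100", "110", "100", "111"]
print(recognise(noisy_L))  # L
```

Real engines work on greyscale images and use far more sophisticated feature extraction, but the principle - comparing pixel patterns against learned shapes - is the same.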

Generation of recognisable text from pixel patterns is not an exact science - it involves complex pattern recognition processing and internal voting to estimate a probable outcome and its accuracy. Advances in processing power and pattern recognition algorithms have led to significant improvements in accuracy rates - 95% or more, i.e. 'acceptable' accuracy. There are many real-life examples of business process automation in use based on data extracted by OCR engines. The accuracy rates are highly dependent on several factors. Two key factors are:

• quality and legibility of scanned images
• content - typed text is acceptable, joined-up handwriting is not
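The "internal voting" mentioned above can be sketched as follows. This is a hypothetical simplification, with invented candidate characters and confidence scores: the engine keeps the highest-scoring reading and flags anything below the acceptable-accuracy threshold for manual review rather than feeding it into an automated process.

```python
# Hypothetical sketch of internal voting: several candidate readings for
# one character, each with a confidence score from the recogniser.
def vote(candidates, threshold=0.95):
    """Pick the best candidate; flag it for review if confidence is low.

    candidates -- list of (character, confidence) pairs
    threshold  -- minimum confidence for unattended ('acceptable') use
    """
    char, conf = max(candidates, key=lambda c: c[1])
    needs_review = conf < threshold
    return char, conf, needs_review

# Clean typed text: a clear winner above the ~95% 'acceptable' mark.
print(vote([("O", 0.98), ("0", 0.62), ("Q", 0.30)]))  # ('O', 0.98, False)

# Poor-quality scan or handwriting: best guess is weak, so flag a human.
print(vote([("5", 0.71), ("S", 0.69)]))  # ('5', 0.71, True)
```

This is why the two factors above matter so much: image quality and content type directly shape the confidence scores, and hence how much of the output can be used without human checking.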

Intelligent Character Recognition (ICR) was developed to recognise handwriting. This technology has evolved over time and is in widespread use on hand-held devices - tablet PCs, mobile handsets, etc. However, its use is strictly controlled to ensure acceptable accuracy, and it offers few (if any) benefits within automated processes.

The text output from OCR technologies can be re-used within application software and, with further validation, used to make decisions, i.e. automate selected processes. The text output falls into two broad categories:

• full text
• zonal text, i.e. text which is expected in specific areas on an image
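The distinction between the two categories can be sketched as below, assuming OCR has already turned a page image into lines of text. The page content and zone coordinates are invented for illustration; in practice the zones come from a form template definition.

```python
# Sketch of full-text vs zonal output from a recognised page.
# Each zone is (line number, start column, end column) on the page.
PAGE = [
    "NHS Trust: Example General Hospital    ",
    "Patient No: 1234567   DOB: 01/02/1960  ",
    "Clinic:     Cardiology                 ",
]

ZONES = {
    "patient_no": (1, 12, 19),
    "clinic":     (2, 12, 39),
}

def full_text(page):
    """Full-text output: everything on the page, useful for free-text search."""
    return "\n".join(line.rstrip() for line in page)

def zonal_text(page, zones):
    """Zonal output: only the text expected at fixed positions on the form."""
    return {name: page[row][start:end].strip()
            for name, (row, start, end) in zones.items()}

print(zonal_text(PAGE, ZONES))
# {'patient_no': '1234567', 'clinic': 'Cardiology'}
```

Zonal text is what makes process automation practical: because the field a value came from is known, it can be validated (e.g. checked against a patient index) before any decision is made on it.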


