Computer Vision and Handwriting Analysis

Venue

Campus Center - Ross

Major

Mathematics

Field of Study

Computer Science

Abstract

How does a computer recognize individual letters and numbers? One way to approach this problem is with a Convolutional Neural Network (CNN). To explore this method, we are investigating how accurately CNNs classify handwritten text as a function of several parameters that influence predictive accuracy. In particular, our focus is a dataset of handwritten Arabic letters, which resemble one another more closely than handwritten Arabic numerals do. The parameters we are experimenting with include the size of the “window” through which a CNN “views” an image (the kernel size), how far apart successive windows are placed (the stride), and the number of “window types” (filters) the CNN uses to classify an image. By tuning these structural hyperparameters, which govern the way a CNN “views” images, we aim to achieve a classification accuracy of greater than 95% on Arabic handwriting. Ultimately, we would like to build a tool that can very accurately distinguish handwritten digits from letters without human intervention.
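
As a rough illustration of what these structural hyperparameters look like in practice, the Python/Keras sketch below builds a small CNN whose kernel size (window size), stride (window spacing), and number of filters (window types) are exposed as arguments. This is not the model used in this work; the 32x32 grayscale input size, the 28 output classes, and the specific layer sizes are illustrative assumptions only.

    # Minimal sketch of a CNN with tunable structural hyperparameters.
    # Assumes 32x32 grayscale letter images and 28 classes (illustrative values).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn(kernel_size=3, stride=1, filters=32,
                  num_classes=28, input_shape=(32, 32, 1)):
        """kernel_size, stride, and filters are the 'window' hyperparameters."""
        model = models.Sequential([
            layers.Input(shape=input_shape),
            # The convolutional "window": its size, spacing (stride), and count (filters).
            layers.Conv2D(filters, kernel_size, strides=stride,
                          padding="same", activation="relu"),
            layers.MaxPooling2D(pool_size=2),
            layers.Conv2D(filters * 2, kernel_size, strides=stride,
                          padding="same", activation="relu"),
            layers.MaxPooling2D(pool_size=2),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Example: compare settings by sweeping the hyperparameters and
    # measuring validation accuracy (x_train / y_train are placeholders).
    # model = build_cnn(kernel_size=5, stride=2, filters=64)
    # model.fit(x_train, y_train, validation_split=0.1, epochs=10)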

Start Date

25-4-2019 11:00 AM

End Date

25-4-2019 11:15 AM
