How does a computer recognize individual letters and numbers? One way to approach this problem is with a Convolutional Neural Network (CNN). To explore this method, we are investigating how the classification accuracy of CNNs on handwritten text depends on a variety of structural hyperparameters. In particular, we focus on a dataset of handwritten Arabic letters, which resemble one another more closely than handwritten Arabic numerals do. Among the parameters we are experimenting with are the size of the "window" through which a CNN "views" an image (the kernel size), how far apart successive viewing windows are (the stride), and the number of distinct "window types" (filters) the CNN uses to classify an image. By tuning these structural hyperparameters, which shape the way a CNN "views" images, we aim to achieve a classification accuracy above 95% on Arabic handwriting. Ultimately, we would like to build a tool that can very accurately distinguish handwritten digits and letters without human intervention.
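To make the "window" metaphor concrete, the sketch below implements the sliding-window (convolution) operation directly in NumPy. The image, the two example filters, and the `conv2d` helper are all illustrative assumptions, not part of our actual model; they only show how kernel size, stride, and the number of filters determine what a CNN "sees."

```python
import numpy as np

def conv2d(image, kernels, stride=1):
    """Slide each kernel (a 'window type') across the image.

    kernel size = how large each viewing window is;
    stride      = how far apart successive windows are;
    len(kernels) = how many window types are applied.
    """
    k = kernels.shape[-1]                     # each window is k x k
    h = (image.shape[0] - k) // stride + 1    # output rows
    w = (image.shape[1] - k) // stride + 1    # output cols
    out = np.zeros((len(kernels), h, w))
    for f, kern in enumerate(kernels):        # one feature map per kernel
        for i in range(h):
            for j in range(w):
                patch = image[i*stride:i*stride+k, j*stride:j*stride+k]
                out[f, i, j] = np.sum(patch * kern)
    return out

# A toy 6x6 "image" and two 3x3 window types (hypothetical values)
img = np.arange(36, dtype=float).reshape(6, 6)
filters = np.stack([np.ones((3, 3)), np.eye(3)])

maps = conv2d(img, filters, stride=1)
print(maps.shape)  # (2, 4, 4): 2 window types, 4x4 positions each
```

Note how the structural hyperparameters interact: with `stride=3` the same image yields only a 2x2 grid of window positions per filter, so widening the stride or the kernel shrinks how many distinct views the network gets of each character.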