Date of Award


Embargo Period


Degree Type


Degree Name

Doctor of Philosophy (PhD)


Department

Electrical and Computer Engineering


Advisor

Vijayakumar Bhagavatula


Support vector machines (SVMs) and correlation filters (CFs) are popular for automatic target recognition (ATR) and other computer vision tasks. SVMs are designed to maximize the separation between two classes in some feature space; they are popular for classification (determining the class label of a target) and generalize well to targets not in the training set, but they are not specifically designed for localization (finding where a target is). When SVMs are used, regions of interest (ROIs) are therefore usually first extracted by some other detector before the SVMs are applied. CFs, in contrast, accurately localize the target of interest in a large scene, but their classification performance may not match that of SVMs for targets not in the training set. In this thesis we introduce new linear CFs that combine the criteria used in state-of-the-art CFs to improve performance. Building on these improved linear CF designs, we present a new type of classifier, the Maximum Margin Correlation Filter (MMCF), which combines the generalization capability of SVMs with the shift-invariance of CFs; i.e., the MMCF is invariant to shifts between the training images and the query image, thereby avoiding the need for image registration and detection before SVM-based classification. We extend this work to quadratic correlation filters (QCFs), present the Quadratic MMCF (QMMCF), and show its relation to the second-order polynomial-kernel SVM (referred to as the Quadratic SVM, or QSVM, in this thesis). We further extend the capabilities of CFs, and therefore of MMCFs and QMMCFs, to vector features (as opposed to scalar features, i.e., gray-scale pixel values), enabling the use of features such as the Histogram of Oriented Gradients (HOG). We test the efficacy of our designs on real data and show improvement over linear and quadratic CFs and linear and quadratic SVMs.
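The shift-invariance property central to the abstract can be illustrated with a minimal sketch (this is a generic FFT-based cross-correlation, not the thesis's MMCF design; the template, scene size, and noise level are hypothetical): correlating a scene with a filter in the frequency domain yields a correlation plane whose peak localizes the target wherever it appears, with no prior registration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 target embedded in a 64x64 scene at offset (20, 30).
template = rng.standard_normal((8, 8))
scene = np.zeros((64, 64))
scene[20:28, 30:38] = template
scene += 0.05 * rng.standard_normal(scene.shape)  # mild additive noise

# Circular cross-correlation via the FFT; CFs are designed and applied
# in this frequency domain.
F_scene = np.fft.fft2(scene)
F_tmpl = np.fft.fft2(template, s=scene.shape)
corr = np.real(np.fft.ifft2(F_scene * np.conj(F_tmpl)))

# The correlation peak recovers the target's location in the scene.
row, col = np.unravel_index(np.argmax(corr), corr.shape)
print(row, col)
```

Because the peak moves with the target, the same filter both detects and localizes in one pass, which is the capability the MMCF carries over to an SVM-style max-margin classifier.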