Carnegie Mellon University

Elementary Algorithms for Solving Convex Optimization Problems

Thesis, posted on 2014-05-01, authored by Negar Soheili Azad

The rapid growth in data availability has led to modern large-scale convex optimization problems that pose new practical and theoretical challenges. Examples include classification problems such as customer segmentation in retail and credit scoring in insurance. Classical optimization and machine learning techniques are typically inadequate for these large problems because of their high memory requirements and weak convergence guarantees. This thesis develops two research threads to address these issues. The first improves the effectiveness of a class of algorithms with simple computational steps for solving convex optimization problems arising in machine learning, data mining, and decision making. The second refines the conditioning and geometry of convex optimization problems via preconditioning. I elaborate on these two threads below.

1. The main theme of this thesis is the class of elementary algorithms for solving convex optimization problems. These algorithms involve only simple operations such as matrix-vector multiplications, vector updates, and separation oracles. This simplicity keeps the per-iteration computational cost and the memory requirements of these algorithms low, so elementary algorithms are promising for emerging big-data applications in areas such as classification, pattern recognition, and online learning. The major hurdle to overcome is their slow convergence. We develop new elementary algorithms enhanced via smoothing and dilation techniques. These enhancements yield algorithms that retain the attractive simplicity of elementary algorithms while achieving substantially improved convergence rates, making them better suited for solving modern large-scale convex optimization problems.

2. A significant difficulty when solving large convex optimization problems is poor conditioning caused by flat and nearly flat geometries. This thesis shows that a combination of two simple preprocessing steps generally improves the geometric structure of problem instances. We improve an instance's geometric structure by reconciling the properties of three different but related notions of conditioning. More precisely, when one of these measures is large, in which case the instance is certainly poorly conditioned, our procedure reduces it without making the remaining measures worse. Our preconditioning procedures can potentially improve the convergence properties of a large class of iterative methods without changing the solution they compute.
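
For concreteness, here is a minimal sketch of the kind of elementary iteration the abstract describes, using the classical perceptron method for the linear feasibility problem "find y with Aᵀy > 0". This is a generic textbook baseline, not the smoothed or dilated variants developed in the thesis; the function name, the starting point, and the iteration budget are illustrative assumptions.

```python
import numpy as np

def perceptron_feasibility(A, max_iters=10_000):
    """Classical perceptron for the feasibility problem: find y with A^T y > 0.

    Assumes the columns of A have unit Euclidean norm. Each iteration uses
    only one matrix-vector product and one vector update -- the "simple
    operations" that make such algorithms elementary -- at the price of a
    slow O(1/rho^2) iteration bound in the margin rho of the instance.
    """
    y = A.mean(axis=1)             # illustrative starting point
    for _ in range(max_iters):
        margins = A.T @ y          # one matrix-vector multiplication
        j = int(np.argmin(margins))
        if margins[j] > 0:         # all constraints satisfied: y is feasible
            return y
        y += A[:, j]               # vector update along a violated column
    return None                    # budget exhausted without a feasible point
```

Roughly speaking, the smoothed variants studied in this line of work replace the hard argmin selection (a separation oracle) with a soft weighting over the margins, which is what enables the improved convergence rates while preserving the per-iteration cost profile above.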

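The preconditioning thread is described only at a high level in the abstract. As a purely illustrative example of a "simple preprocessing step" that changes an instance's geometry without changing its solution set, the sketch below normalizes the columns of the data matrix; whether this matches the thesis's actual procedure is an assumption made for illustration.

```python
import numpy as np

def normalize_columns(A):
    """Rescale each column of A to unit Euclidean norm.

    For a feasibility problem "find y with A^T y > 0", scaling columns by
    positive constants changes the instance's geometry (and hence condition
    measures that depend on it) without changing the set of feasible y.
    """
    norms = np.linalg.norm(A, axis=0)
    if np.any(norms == 0):
        raise ValueError("zero column: the instance is degenerate")
    return A / norms
```
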
History

Date

2014-05-01

Degree Type

  • Dissertation

Department

  • Tepper School of Business

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Javier Pena
