Robust Low-Rank Optimization


A recent line of work has shown that a surprisingly large class of smooth-but-nonconvex low-rank optimization problems — including matrix completion and sensing, phase retrieval, and dictionary learning — have a benign landscape, i.e., every local solution is also global. Despite the nonconvexity of these problems, this benign landscape implies that simple local-search algorithms are guaranteed to converge to a globally-optimal solution, thus leading to significant computational savings and zero optimality gap. In general, the validity of these results relies heavily on the smoothness of the objective function. However, such smooth objective functions are not robust against outliers, i.e., they cannot correctly identify and reject large-and-sparse noise values. Inspired by this deficiency in existing methods, we studied the following open problem: does robust low-rank optimization with a nonsmooth objective function still have a benign landscape?

In [1], we considered an important class of such problems, namely non-negative robust principal component analysis (NRPCA), in which the goal is to exactly recover the non-negative and low-rank component of a measurement matrix, even when a subset of the measurements is grossly corrupted with large noise values. We proved that NRPCA has no spurious local minima under a set of necessary and sufficient conditions, such as strict positivity of the true components and the absence of bipartite components in the associated sparsity graph. This implies that, despite the highly nonsmooth and nonconvex nature of NRPCA, simple local-search algorithms can efficiently recover its globally-optimal solution. Building upon this result and leveraging contemporary techniques from random graph theory, we provided probabilistic guarantees on the absence of spurious local minima under random sampling and noise regimes. In particular, we showed that a constant fraction of the measurements can be corrupted by large amounts of noise without creating any spurious local solution.
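
For concreteness, a representative rank-1 instance of NRPCA along the lines of [1] can be written as follows (this display is only schematic; the exact measurement model and assumptions are spelled out in the paper):

```latex
\min_{u \,\ge\, 0} \; f(u) \;=\; \sum_{(i,j) \in \Omega} \big| u_i u_j - M_{ij} \big|,
\qquad M \;=\; u^{*} (u^{*})^{\top} + S,
```

where u* > 0 is the true non-negative component, S is a sparse matrix of gross corruptions, and Ω indexes the observed entries. The ℓ1 loss is what rejects the large entries of S, and it is also what makes the landscape nonsmooth.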

Related publications:

1- S. Fattahi and S. Sojoudi, Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis, submitted to Journal of Machine Learning Research, 2019.

Graphical Model Inference


Graphical models are fundamental tools for obtaining interpretable descriptions of large-scale datasets. For instance, in neuroscience, it has been shown that brain connectivity can be studied by inferring a graphical model from functional MRI measurements. As another example, graphical models can be used for short- and long-term traffic flow prediction and control in intelligent transportation systems (ITSs). At the heart of graphical model inference is sparse inverse covariance estimation. The best known algorithms for sparse inverse covariance estimation have time complexities on the order of O(n^4), making them prohibitively expensive for massive-scale instances of the problem. This is despite the fact that, in high-dimensional settings, the sample covariance matrix can be constructed in O(n^2) time. The prohibitive computational cost of current solvers for sparse inverse covariance estimation motivated us to investigate the following question: can we reduce the complexity of sparse inverse covariance estimation to that of computing the sample covariance matrix?

In a series of papers [1-4], we provided an affirmative answer to this question. In particular, we showed that, under mild assumptions, a simple thresholding operation on the sample covariance matrix reveals the sparsity pattern of the inverse covariance matrix. Building upon this result, we proved that sparse inverse covariance estimation can be solved to near-optimality in O(n^2) time. Furthermore, we demonstrated the graceful scalability of the proposed method on massive-scale datasets, including real-life functional MRI data and traffic flows for transportation networks. In practice, our method obtained accurate estimates of the inverse covariance matrix for instances with more than 3.2 billion variables in less than 30 minutes on a standard laptop, whereas other methods did not converge within 4 hours on a server with 32 CPU cores.
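
A minimal sketch of the thresholding idea is shown below. It only illustrates how the support of the inverse covariance matrix can be read off the sample covariance; the threshold `tau` (playing the role of the regularization parameter) and the closed-form estimate itself are left schematic relative to the procedures in [1, 2].

```python
import numpy as np

def support_via_thresholding(X, tau):
    """Sketch: recover the sparsity pattern of the inverse covariance matrix
    by elementwise thresholding of the sample covariance. X is an
    (n_samples, n_variables) data matrix; tau plays the role of the
    regularization parameter in graphical lasso."""
    S = np.cov(X, rowvar=False)        # sample covariance (columns = variables)
    support = np.abs(S) > tau          # keep entries with large magnitude
    np.fill_diagonal(support, True)    # the diagonal is always in the support
    return S, support
```

In the papers, the estimated support is then combined with a closed-form or max-det completion step to produce the actual inverse covariance estimate; the snippet above covers only the support-recovery stage.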

Related publications:

1- S. Fattahi and S. Sojoudi, Graphical Lasso and Thresholding: Equivalence and Closed-form Solutions, Journal of Machine Learning Research (JMLR), 2019 - Winner of INFORMS Data Mining best paper award and Katta G. Murty best paper award

2- R. Y. Zhang, S. Fattahi, and S. Sojoudi, Large-Scale Sparse Inverse Covariance Estimation via Thresholding and Max-Det Matrix Completion, International Conference on Machine Learning (ICML), 2018

3- S. Fattahi, R. Y. Zhang and S. Sojoudi, Sparse Inverse Covariance Estimation for Chordal Structures, European Control Conference (ECC), 2018

4- S. Fattahi and S. Sojoudi, Graphical Lasso: Sparsity Path and Closed-Form Solution, American Control Conference (ACC), 2018 - Finalist for the best student paper award

Nonconvex Online Optimization


At the crux of the existing results on the absence of spurious local minima is the assumption that the optimization problem is static and time-invariant. Yet, in practice, most real-world and data-driven problems are time-varying and require online optimization. For instance, the optimal power flow (OPF) problem in the electrical grid is solved every 5 minutes in order to match the supply of electricity with a demand profile that changes over time. This observation gives rise to the following question: do fast local-search algorithms become stuck at spurious local minima in online optimization, as they can in the time-invariant setting?

To answer this question, we considered the OPF problem with the California load profile in [1] and empirically showed that, despite the existence of multiple local solutions at every time step, simple "short-sighted" algorithms can avoid them. Inspired by this observation, we introduced the notion of spurious local trajectories for online optimization as a generalization of the notion of spurious local solutions. In particular, we considered a class of online optimization problems and showed that their local trajectories can be characterized as solutions to an ordinary differential equation (ODE). By studying the stability of this ODE, we further highlighted the role of data variation in the absence of spurious local trajectories. In particular, we proved that if the problem is time-invariant, spurious local trajectories are ubiquitous, since every local minimum is a stable equilibrium point of the ODE. On the other hand, if the problem is time-varying, the local minima of the optimization problem may be neither equilibria nor stable, which allows local-search algorithms to escape them efficiently.
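
As an illustration, for an online problem with objective f(x, t), a local trajectory can be thought of as a solution of a gradient-flow-type ODE of the form below; this display is only schematic, and the precise ODE analyzed in [1] depends on the algorithm and its step size.

```latex
\alpha \, \dot{x}(t) \;=\; -\nabla_x f\big(x(t), t\big), \qquad \alpha > 0,
```

where α captures how aggressively the algorithm tracks the changing data. When f does not depend on t, every local minimum is a stable equilibrium of this ODE; when f varies with t, a local minimum at a fixed time need not even be an equilibrium, which is the mechanism behind the escape phenomenon described above.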

Related publications:

1- S. Fattahi, C. Josz, R. Mohammadi, J. Lavaei, and S. Sojoudi, Absence of Spurious Local Trajectories in Time-varying Optimization, submitted for conference publication, 2019

Learning-Based System Identification and Distributed Control


A long-standing challenge in the control of dynamical systems is uncertainty in their models. The unknown nature of a dynamical system implies that any viable control policy should actively interact with the system to learn its model, and then make robust decisions that account for the uncertainty of the learned model. Moreover, a practical data-driven control framework should avoid a "long interaction" with the unknown system in the learning phase, so as not to jeopardize its safety, and it should be efficient to design and implement. We addressed these challenges in [1-4], where we introduced a series of robust, learning-based system identification and distributed control schemes for linear time-invariant systems that enjoy favorable sample and computational complexities. In other words, our methods require only a logarithmic number of interactions with the unknown system to learn the model, and they then design a controller in near-linear time.
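
The sketch below shows the simplest version of the identification step: fitting (A, B) of a linear time-invariant system from a single trajectory by least squares, with an optional hard threshold to expose sparsity. It is a simplified stand-in for the estimators analyzed in [1-3], not the exact algorithms, and the variable names are illustrative.

```python
import numpy as np

def identify_lti(X, U, tau=0.0):
    """Estimate (A, B) of x_{t+1} = A x_t + B u_t + w_t from a single
    trajectory. X has shape (T+1, n), U has shape (T, m), and the hard
    threshold tau is an optional, simplified sparsification step."""
    Z = np.hstack([X[:-1], U])                  # regressors [x_t, u_t]
    Y = X[1:]                                   # targets x_{t+1}
    Theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    Theta = Theta.T                             # rows index next-state coordinates
    Theta[np.abs(Theta) < tau] = 0.0            # keep only significant entries
    n = X.shape[1]
    return Theta[:, :n], Theta[:, n:]           # (A_hat, B_hat)
```

The papers analyze how many samples such regularized estimators need and how the learned model's uncertainty propagates into the controller design; the least-squares fit above is only the baseline building block.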

Related publications:

1- S. Fattahi and S. Sojoudi, Data-Driven Sparse System Identification, Annual Allerton Conference on Communication, Control, and Computing, 2018

2- S. Fattahi and S. Sojoudi, Sample Complexity of Sparse System Identification Problem, submitted to IEEE Transactions on Automatic Control, 2019.

3- S. Fattahi, N. Matni, and S. Sojoudi, Learning Sparse Dynamical Systems from a Single Sample Trajectory, to appear in IEEE Conference on Decision and Control (CDC), 2019

4- S. Fattahi, N. Matni, and S. Sojoudi, Efficient Learning of Distributed Linear-Quadratic Controllers, submitted to SIAM Journal on Control and Optimization, 2019

Nonlinear Network Optimization


Network flow problems play a crucial role in operations research, with a myriad of applications in assignment, electrical power, and production networks, to name a few. Most classical results on the network flow problem are contingent upon the lossless nature of the network. In practice, however, physical networks are lossy, and the loss is often a nonconvex function of the flows. An example is power networks, where the loss over each line is a parabolic function of the flow due to Kirchhoff's circuit laws. In [1,2], we investigated optimization over lossy networks in the context of the generalized network flow (GNF) problem. Solving GNF to global optimality is a daunting task due to the nonlinear losses in its formulation. Nevertheless, we introduced an efficient convex relaxation of the problem that incurs zero optimality gap.

In particular, we proved that, under practical conditions, the globally optimal cost and nodal injections can be efficiently obtained by simply relaxing the nonconvex equality constraints to convex inequalities. Unlike computationally expensive convexification techniques, such as sum-of-squares (SOS), that lift the problem to a higher dimension, our proposed convex relaxation is defined over the original space of variables, making it suitable for the real-time operation of lossy networks at realistic scales.
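
Schematically, the relaxation acts on the per-line loss equations; the display below is a simplified version of the construction, and the exact constraints, monotonicity, and convexity assumptions are stated in [1].

```latex
\begin{aligned}
\text{GNF:}\quad & \min_{p}\ \sum_{i} f_i(p_i)
\quad \text{s.t.} \quad p_i = \sum_{j \in \mathcal{N}(i)} p_{ij},
\qquad p_{ji} = -\,p_{ij} + \ell_{ij}(p_{ij}) \quad \text{(nonconvex equality)}, \\
\text{relaxation:}\quad & \text{replace each equality by }\;
p_{ji} \;\ge\; -\,p_{ij} + \ell_{ij}(p_{ij}) \quad \text{(convex inequality for convex } \ell_{ij}\text{)},
\end{aligned}
```

where ℓ_ij models the loss on line (i, j). The main result is that, under the conditions in [1], the relaxed problem recovers the globally optimal cost and nodal injections p_i, so no optimality gap is incurred.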

Related publications:

1- S. Sojoudi, S. Fattahi and J. Lavaei, Convexification of Generalized Network Flow Problem, Mathematical Programming, 2019

2- S. Fattahi and J. Lavaei, Convex Analysis of Generalized Flow Networks, IEEE Conference on Decision and Control (CDC), 2015

Real-Time Scheduling of Power Systems


The unit commitment (UC) problem aims to find an optimal schedule of generating units subject to demand and operating constraints for an electricity grid. The objective of the first part of this project was to obtain a convex model of polynomial size for practical instances of the UC problem. To this end, we developed a convex conic relaxation of the UC problem. The benefit of the proposed method over existing techniques is threefold: (i) the proposed formulation is a single convex model of polynomial size, and hence its global minimum can be found efficiently; (ii) unlike heuristic methods and local-search algorithms, which return local minima whose closeness to a global solution cannot be measured efficiently, the proposed formulation aims at obtaining global minima; (iii) the proposed convex model can be used in convex-hull pricing to minimize the uplift payments made to generating units in energy markets [1,4].

Next, we studied the optimal transmission switching (OTS) problem for power systems, in which certain lines are fixed (uncontrollable) and the remaining ones are controllable via flexible switches. The goal is to identify a topology of the power grid that minimizes the cost of system operation while satisfying the physical and operational constraints. Most existing methods first convert the OTS into a mixed-integer linear program (MILP) or mixed-integer quadratic program (MIQP) and then iteratively solve a series of convex relaxations. The performance of these methods depends heavily on the strength of the MILP or MIQP formulation, which is in turn governed by the tightness of the variable bounds used in it. We showed that finding the strongest such formulation of the OTS is NP-hard, and furthermore, that there is no constant-factor approximation algorithm for constructing these variable bounds. Despite the inherent difficulty of obtaining the strongest bounds in general, we presented a simple bound strengthening method that tightens the convex relaxation of the problem. The proposed method can be treated as a preprocessing step and carried out offline before invoking any off-the-shelf solver; it yields a remarkable speedup in the runtime of mixed-integer solvers on medium- and large-scale real-world systems [2,3].
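
For context, a switchable line in a DC-based MILP formulation of the OTS is commonly modeled with big-M constraints of the form below (a standard formulation shown for illustration, not a verbatim reproduction of the one in [2,3]); the quality of the continuous relaxation is driven by how small the valid constants M_ij can be made, which is exactly what the bound strengthening step targets.

```latex
\big| f_{ij} - B_{ij}\,(\theta_i - \theta_j) \big| \;\le\; M_{ij}\,(1 - z_{ij}),
\qquad
| f_{ij} | \;\le\; \bar{f}_{ij}\, z_{ij},
\qquad z_{ij} \in \{0, 1\},
```

where z_ij indicates whether line (i, j) is switched on, f_ij is its flow, B_ij its susceptance, and θ_i the voltage angle at bus i.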

Related publications:

1- S. Fattahi, M. Ashraphijuo, J. Lavaei and A. Atamturk, Conic Relaxation of the Unit Commitment Problem, Energy, 2017

2- S. Fattahi, J. Lavaei and A. Atamturk, A Bound Strengthening Method for Optimal Transmission Switching in Power Systems, IEEE Transactions on Power Systems, 2018.

3- S. Fattahi, J. Lavaei and A. Atamturk, Promises of Conic Relaxations in Optimal Transmission Switching of Power Systems, IEEE Conference on Decision and Control (CDC), 2017

4- M. Ashraphijuo, S. Fattahi, J. Lavaei and A. Atamturk, A Strong Semidefinite Programming Relaxation of the Unit Commitment Problem, IEEE Conference on Decision and Control (CDC), 2016

Distributed Control Policies in Large-Scale Dynamical Systems


The efficient operation of dynamical infrastructures, such as smart cities and grids, demands a shift from classical centralized control policies toward efficient edge computing with distributed control schemes. In [1-3, 5, 6], we proposed a series of scalable optimization frameworks that transform a given centralized control policy into a distributed one with a pre-defined structure. We characterized the cost of decentralization, i.e., the degradation in performance of the distributed controller relative to its centralized counterpart, and proved that this degradation is small in practice.

In [4], we studied the theoretical complexity of the stochastic optimal distributed control problem. We showed that if either the measurement noise covariance or the input weighting matrix is not too small, the problem is locally convex. Under such circumstances, the design of a decentralized controller with a bounded norm and an arbitrary sparsity pattern is naturally a convex problem.

Related publications:

1- S. Fattahi, G. Fazelnia and J. Lavaei, Transformation of Optimal Centralized Controllers Into Near-Global Static Distributed Controllers, IEEE Transactions on Automatic Control, 2018

2- G. Darivianakis, S. Fattahi, J. Lygeros and J. Lavaei, High-Performance Cooperative Distributed Model Predictive Control for Linear Systems, American Control Conference (ACC), 2018

3- S. Fattahi, J. Lavaei and M. Arcak, A Scalable Method for Designing Distributed Controllers for Systems with Unknown Initial States, IEEE Conference on Decision and Control (CDC), 2017

4- S. Fattahi and J. Lavaei, On the Convexity of Optimal Decentralized Control Problem and Sparsity Path, American Control Conference (ACC), pp. 3359-3366, 2017

5- S. Fattahi and J. Lavaei, Theoretical Guarantees for the Design of Near Globally Optimal Static Distributed Controllers, 54th Annual Allerton Conference on Communication, Control, and Computing, 2016

6- S. Fattahi, G. Fazelnia and J. Lavaei, Transformation of Optimal Centralized Controllers Into Near-Global Static Distributed Controllers, IEEE Conference on Decision and Control (CDC), 2015