Losing the symmetry

If you know about my work, you know that for the past several years I’ve been working on a special type of problem called structured inverse eigenvalue problems (or SIEPs for short). Well, I have mainly focused on SIEPs for graphs (or IEPGs for short). The question is this: you give me a (multi-)set of n numbers \Lambda and a graph G, and you ask me whether there is a matrix whose graph is G and whose spectrum (the set of eigenvalues) is \Lambda. Previously, Bryan Shader and I proved a few theorems, and in particular we showed that whenever you choose those n numbers to be distinct real numbers, there is a real symmetric matrix with the given graph and spectrum. A little after that, Sudipta Mallik and I had more fun with this problem and showed, in particular, that if you choose those numbers to be 2k distinct purely imaginary numbers in complex conjugate pairs together with l zeros, and you choose a graph on 2k+l vertices that has a matching of size k, then there is a real skew-symmetric matrix with the given graph whose eigenvalues are the given numbers.
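As a tiny illustration of that last result (the graph and the numbers here are my own toy choice, not an example from the paper): take the path on 3 vertices, which has a matching of size 1, and ask for the eigenvalues \{\rm{i}, -\rm{i}, 0\}. A real skew-symmetric tridiagonal matrix does the job, and it is easy to check numerically:

import numpy as np

# path on 3 vertices, k = 1 conjugate pair and l = 1 zero; the entries 0.6 and 0.8
# are chosen so that 0.6^2 + 0.8^2 = 1, which puts the nonzero eigenvalues at +/- i
S = np.array([[ 0.0,  0.6,  0.0],
              [-0.6,  0.0,  0.8],
              [ 0.0, -0.8,  0.0]])
print(np.linalg.eigvals(S))   # approximately 1j, -1j, 0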

For the longest time I was afraid to touch the general problem of what happens in the non-symmetric case. That is, if you give me a bunch of complex conjugate pairs and a bunch of real numbers, can I decide whether or not there exists a matrix with a given graph that has those eigenvalues? That changed this past winter, when I saw a question on ResearchGate asking whether every arrowhead matrix (a matrix whose graph is a subgraph of a star) is similar to a tridiagonal matrix (a matrix whose graph is a subgraph of a path). Having done all that earlier work, I responded to the question, assuming the eigenvalues are distinct and real and the matrix is real and symmetric. The OP responded that they actually work with not-necessarily-symmetric matrices; exactly what I was afraid of. To make it worse, the OP mentioned that they actually need an algorithm that finds the similar tridiagonal matrix without computing its eigenvalues, because eigenvalues of matrices of order larger than 4 cannot, in general, be computed exactly. The problem is hard, and I still can’t suggest a nice way to do it, but there might be ways to solve it for arrowhead matrices and tridiagonal matrices, since they are very special. But I decided to approach the problem as an IEPG.

Considering that I have recently been reading Vladimir Arnold’s works (mostly the less serious ones) and that I was a student of Bryan Shader, I started some experiments. 2 \times 2 matrices seemed easy enough to play with, so I started there, and the problem was easily solved. It even gave me so much wiggle room in the solution that I hoped for more. (In the Jacobian method we usually look for a solution that is robust, i.e. one with a lot of wiggle room, and then use the Implicit Function Theorem to do the rest of the work. In this paper the method is packaged as a black-box tool that solves many problems; all you need to do is find a “generic” matrix to begin with.) In fact, the matrix

A = \begin{bmatrix}  a+d & x \\ y & a-d\end{bmatrix}

has eigenvalues a \pm b\rm{i}, where d^2 = -xy - b^2, for any choice of x and y with xy \leq -b^2. Then it was time to go to the 3 \times 3 case, at which point I suddenly realized that I can just take direct sums of 2 \times 2 and 1 \times 1 blocks to construct a matrix (ignoring the graph) with the given spectrum:

A = \left( \bigoplus_{j=1}^{k} \begin{bmatrix} \lambda_j & \mu_j \\ -\mu_j & \lambda_j \end{bmatrix} \right) \oplus \left( \bigoplus_{j=1}^{l} \begin{bmatrix} \gamma_j \end{bmatrix} \right)
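Before worrying about the graph, here is a quick numerical sanity check of both constructions (the specific values of a, b, x, y and the blocks below are just my own test choices):

import numpy as np
from scipy.linalg import block_diag

# the 2x2 building block [[a+d, x], [y, a-d]] with d^2 = -x*y - b^2
a, b = 1.0, 2.0
x, y = 1.0, -5.0                  # any x, y with x*y <= -b^2 will do
d = np.sqrt(-x * y - b**2)
A2 = np.array([[a + d, x], [y, a - d]])
print(np.linalg.eigvals(A2))      # approximately 1+2j and 1-2j

# the direct sum: each block [[lam, mu], [-mu, lam]] contributes lam +/- mu*i,
# and each 1x1 block [gamma] contributes the real eigenvalue gamma
def rot(lam, mu):
    return np.array([[lam, mu], [-mu, lam]])

A = block_diag(rot(1.0, 2.0), rot(0.5, 3.0), [[4.0]])
print(np.linalg.eigvals(A))       # approximately 1+/-2j, 0.5+/-3j, and 4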

Easy enough. Then, the problem was to show that the matrix is generic (in the sense of the Jacobian method).

First, I needed to show that if I have a real polynomial with a simple root a, then small (real) perturbations of it also have a simple root close to a. It was clear from looking at the graph of a polynomial that this should hold, but writing it down formally was ugly. So I posted the question here, and soon I got a few answers, among them one by Paul Horn with a rigorous analytic proof, and one by Sajjad Lakzian suggesting the geometric approach of transversality.

[Figure: roots]

The analytic idea is that if you have a simple real root and you are going to lose it as a result of a perturbation (i.e. it becomes a nonreal root), then two real roots have to come together on the real axis, collide, and then separate into a complex conjugate pair; so for a simple root, if you keep your perturbations small, such a thing won’t happen.
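Numerically this is easy to watch; here is a toy check (the polynomial and the size of the perturbation are arbitrary choices of mine):

import numpy as np

# p(t) = (t - 1)(t - 2)(t - 3) has three simple real roots
p = np.poly([1.0, 2.0, 3.0])                    # coefficient vector of p
rng = np.random.default_rng(0)
q = p + 1e-3 * rng.standard_normal(p.shape)     # small random perturbation of the coefficients
print(np.roots(p))   # 1, 2, 3
print(np.roots(q))   # still three real roots, each close to 1, 2, 3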

[Figure: tangents]

The geometric idea is that the graph of the polynomial and the x-axis intersect transversally at a simple root (i.e. generically, with a non-horizontal tangent), so, as an implication of the Implicit Function Theorem, small perturbations of them also intersect transversally at a nearby point.
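One way to make that precise (my phrasing of the argument, not a quote from either answer) is to think of the perturbed polynomial as a map of two variables,

F(t, \epsilon) = p(t) + \epsilon\, q(t), \qquad F(a, 0) = p(a) = 0, \qquad \frac{\partial F}{\partial t}(a, 0) = p'(a) \neq 0,

so the Implicit Function Theorem gives a smooth real function t(\epsilon) with t(0) = a and F(t(\epsilon), \epsilon) = 0, and by continuity \frac{\partial F}{\partial t}(t(\epsilon), \epsilon) \neq 0 for small \epsilon, i.e. the nearby root is still real and still simple.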

The rest of it was really using the same ideas as in my work with Bryan and Sudipta, plus some new computations and bookkeeping, to prove this:

Theorem. For a given set \Lambda consisting of k distinct complex conjugate pairs and l distinct real numbers, and a given graph G on 2k+l vertices with a matching of size k, there is a real matrix with graph G whose spectrum is \Lambda.
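To see the statement in the smallest interesting case, here is an example I cooked up by matching characteristic polynomials (my own illustration, not one from the paper): take G to be the path on 3 vertices, which has a matching of size 1, and \Lambda = \{1 + \rm{i}, 1 - \rm{i}, 2\}. The matrix below has graph G and characteristic polynomial (\lambda - 2)(\lambda^2 - 2\lambda + 2):

import numpy as np

# tridiagonal (path) pattern: the (1,2), (2,1), (2,3), (3,2) entries are all nonzero
M = np.array([[ 0.0,  1.0, 0.0],
              [-1.0,  0.0, 1.0],
              [ 0.0, -5.0, 4.0]])
print(np.linalg.eigvals(M))   # approximately 2, 1+1j, 1-1j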

And a cute corollary of this theorem is the following:

Corollary. A real matrix has distinct eigenvalues if and only if it is similar to a real irreducible tridiagonal matrix.

The paper is submitted, and the preprint will be available through arXiv on Monday.


UPDATE: The paper is published now and it is available here.
