Volume 51 Issue 01 January/February 2018

Snap to Structure

The Adobe Photoshop image manipulation program has a feature called “snap to,” which snaps edges of a selection into alignment with grid lines superimposed over the image as users move the selection around. This feature is an invaluable tool for producing perfectly aligned, professional-quality graphics. An analogous operation in mathematics is what Alan Edelman calls “snap to structure.” Here, a mathematical object that is required to have a particular property, but fails to do so, is perturbed so that it has the property. An orthogonal projection onto the set of interest typically defines the perturbation.

A ubiquitous example of snap to structure occurs in floating-point arithmetic. When we compute the sum or product of two double-precision floating-point numbers, the exact result may have more significant digits than we started with. The IEEE standard requires the exact result to be snapped back to double precision according to precisely defined rounding rules, with round to nearest the default.
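We can see round to nearest in action in MATLAB by squaring \(1 + \epsilon\), where \(\epsilon = 2^{-52}\) is the spacing of the double-precision numbers near 1. The exact square, \(1 + 2\epsilon + \epsilon^2\), has more digits than double precision can hold, so the \(\epsilon^2\) term is rounded away (a minimal illustration):

x = 1 + eps;        % eps = 2^-52, the spacing of doubles near 1
x*x == 1 + 2*eps    % true: the exact square 1 + 2*eps + eps^2 is
                    % snapped to the nearest double, 1 + 2*eps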

In recent work, Jörg Liesen and Robert Luce [1] consider the question of whether a given \(m \times n\) matrix is a Cauchy matrix — whether its \((i,j)\) element has the form \(1/(s_i-t_j)\) for all \(i\) and \(j\) and some \(m\)-vector \(s\) and \(n\)-vector \(t\). Various computations can be done asymptotically fast with such matrices (of which the famous Hilbert matrix is a special case). The authors also treat the problem of approximating a matrix by a Cauchy matrix, that is, snapping to Cauchy structure.
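In MATLAB, building a Cauchy matrix takes one line via implicit expansion, and choosing \(s_i = i\) and \(t_j = 1-j\) recovers the Hilbert matrix. A minimal sketch, assuming the entries of \(s\) and \(t\) are distinct so no division by zero occurs:

m = 5; n = 4;
s = randn(m,1); t = randn(n,1);   % assume s(i) ~= t(j) for all i, j
C = 1 ./ (s - t.');               % C(i,j) = 1/(s(i) - t(j)), by implicit expansion
% The Hilbert matrix is the special case s_i = i, t_j = 1 - j:
norm(1 ./ ((1:n)' - (0:-1:1-n)) - hilb(n))   % returns 0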

Snapping to a matrix structure is commonly described as solving a matrix nearness problem. Here, I take distance to be measured in the Frobenius norm. One of the most familiar examples is forming the nearest rank-\(k\) matrix. The Eckart-Young theorem states that to solve this problem one simply computes a singular value decomposition (SVD) of the given matrix and sets all singular values beyond the \(k\)th largest to zero. In the context of solving a linear least squares problem, snapping the coefficient matrix to rank \(k\) corresponds to forming a truncated SVD solution.
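The Eckart-Young construction takes only a few lines of MATLAB (a minimal sketch for a random matrix):

A = randn(8,5); k = 2;
[U,S,V] = svd(A,'econ');
s = diag(S);
s(k+1:end) = 0;            % zero the singular values beyond the kth largest
Ak = U*diag(s)*V';         % nearest rank-k matrix in the Frobenius norm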

Loss of definiteness of a symmetric matrix is a common problem in many applications. One can find the nearest positive semidefinite matrix to a given matrix by computing a spectral decomposition and setting any negative eigenvalues to zero.
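In MATLAB this projection is a spectral decomposition plus a thresholding step. A minimal sketch, with a final symmetrization to counter rounding errors:

A = randn(6); A = (A + A')/2;   % a symmetric, possibly indefinite matrix
[V,D] = eig(A);
d = diag(D);
d(d < 0) = 0;                   % set any negative eigenvalues to zero
X = V*diag(d)*V';               % nearest positive semidefinite matrix
X = (X + X')/2;                 % re-symmetrize to offset roundoff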

Cartoon created by mathematician John de Pillis.

Another example, which I first encountered in aerospace computations, concerns orthogonality. A \(3 \times 3\) direction cosine matrix drifts from orthogonality because of rounding errors and thus must be orthogonalized, with practitioners favoring the nearest orthogonal matrix over other orthogonalization techniques. For a complex scalar \(z=re^{i\theta}\), the nearest point on the unit circle is \(e^{i\theta}\). The matrix case generalizes this observation: if \(A=UH\) is a polar decomposition of a real, nonsingular matrix \(A\) (\(U\) orthogonal, \(H\) symmetric positive definite), then \(U\) is the nearest orthogonal matrix to \(A\).
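The orthogonal polar factor is readily obtained from an SVD: if \(A = P\Sigma Q^T\), then \(U = PQ^T\) and \(H = Q\Sigma Q^T\). In MATLAB (a minimal sketch):

A = randn(3);        % stands in for a drifted direction cosine matrix
[P,Sig,Q] = svd(A);  % A = P*Sig*Q'
U = P*Q';            % orthogonal polar factor: nearest orthogonal matrix
H = Q*Sig*Q';        % symmetric positive definite factor, with A = U*H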

Snap to structure is also natural when a real solution is expected but the relevant algorithm has made an excursion into the complex plane, potentially leaving an imaginary part at the level of rounding errors. In this case, snapping means setting the imaginary part to zero. This is done in the MATLAB function funm (for computing a function of a matrix) when the matrix is real and the result’s imaginary part is of the order of the unit roundoff.
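The following sketch illustrates the idea, computing \(e^A\) for a real matrix via the complex Schur form and then snapping back to the reals; the tolerance is illustrative, not necessarily the one funm itself uses:

n = 4; A = randn(n);           % a real input matrix
[Q,T] = schur(A,'complex');    % an excursion into the complex plane
F = Q*expm(T)*Q';              % e^A computed in complex arithmetic
if isreal(A) && norm(imag(F),1) <= 10*n*eps*norm(F,1)
    F = real(F);               % imaginary part is roundoff: set it to zero
end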

Of course, much mathematical research is concerned with preserving and exploiting known structure in a problem, making snapping to structure unnecessary. For example, geometric numerical integration is about structure-preserving algorithms for the numerical solution of differential equations. For a simple illustration, suppose we decide to plot circles by numerically solving the differential equations \(u'(t) = v(t)\) and \(v'(t) = -u(t)\) with initial values \(u(0) = 1\) and \(v(0) = 0\), for which \(u(t) = \cos t\) and \(v(t) = -\sin t\). The forward Euler method produces solutions spiralling away from the circle, while the backward Euler method produces solutions spiralling into the origin. We could apply one of these methods and project the solution back onto the circle at each step. However, a better strategy would be one that produces approximations guaranteed to lie on the circle; the trapezium method has this property. For this problem, \(u(t)^2 + v(t)^2\) is an invariant of the differential equations, which the trapezium method automatically preserves.
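For this linear system each trapezium step amounts to multiplication by a fixed matrix, a Cayley transform of a skew-symmetric matrix and hence orthogonal, so the invariant is preserved up to roundoff. A minimal sketch:

A = [0 1; -1 0];                              % u' = v, v' = -u
h = 0.1;
M = (eye(2) - (h/2)*A) \ (eye(2) + (h/2)*A);  % trapezium update matrix
y = [1; 0];                                   % u(0) = 1, v(0) = 0
for k = 1:round(2*pi/h)                       % roughly one revolution
    y = M*y;
end
y(1)^2 + y(2)^2 - 1                           % zero up to roundoff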

Despite the large body of theory on structure-preserving algorithms, we live in a world of imperfect data, use finite-precision arithmetic, and have a limited choice of available software, all of which can result in errors that destroy structure. So enforcing structure at some point during a computation may seem sensible. But is it always a good thing to do?

Consider the problem of computing the determinant of a matrix of integers in floating-point arithmetic. Computing the determinant from its definition by expansion by minors gives an integer result, barring overflow. But this approach is too costly except for very small dimensions. The determinant is normally computed via a pivoted LU factorization: \(PA = LU\) implies \(\det(A) = \pm \det(U)\). The computed determinant is likely to have a nonzero fractional part because of rounding errors. Rounding the result to the nearest integer might seem natural, but consider this MATLAB example involving a matrix whose elements are from Pascal’s triangle:

>> n = 18;  A = pascal(n);  det(A)

ans =

  1.8502e+00

If we round to the nearest integer, we declare the determinant to be \(2\). But the determinant is \(1\) (as for all Pascal matrices) and \(A\) is formed exactly here, so the only errors are within the determinant evaluation. Early versions of MATLAB used to snap to structure by returning an integer result after evaluating the determinant of a matrix of integers. This behavior was changed because, as we have just seen, it can give a misleading result. Interesting rounding error effects can also occur in the evaluation of the determinant of a matrix with entries \(\pm 1\).
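Returning to the Pascal example, one can see where the fractional part arises by carrying out the LU route by hand; this sketch mimics, but is not necessarily identical to, what det does internally:

[Lf,Uf,Pf] = lu(pascal(18));
d = det(Pf) * prod(diag(Uf))   % determinant via PA = LU; rounding errors
                               % in Uf produce the nonzero fractional part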

Photoshop sensibly allows users to disable snapping, as one does not always want elements to line up with a grid. Likewise, snapping to a mathematical structure is not always the correct way to handle a loss of that structure. Understanding why is one of the things that makes life interesting for an applied mathematician.


Acknowledgments: I am grateful to Alan Edelman for stimulating discussions on the topic of this article.

References

[1] Liesen, J., & Luce, R. (2016). Fast recovery and approximation of hidden Cauchy structure. Linear Algebra Appl., 493, 261–280.
