This is a short but neat addition to my previous post. There I showed one quick way to circumvent structure requirements in linear algebra software by exploiting the low-rank-update structure of a matrix. That approach had one weakness, however: we must know the low-rank update ahead of time in order to apply the Sherman-Morrison formula. In practice one has to be very lucky indeed to know this structure exactly, so here I show how we can *still* exploit a low-rank update even when we don’t know it! We can no longer use the Sherman-Morrison formula, but fortunately there is a slick algorithm that simultaneously solves the linear system and has a strong tendency to uncover low-rank information automatically: Generalized Minimal Residual (GMRES). As before I will give a little mathematical background, and then some Python code you can play with.

GMRES is a sophisticated algorithm with very deep mathematics and history behind it. I have talked about it before and it played a central role in my PhD thesis, but there is too much to say about it to do it justice in a few paragraphs, so I will simply say that GMRES is an iterative algorithm for solving a system of linear equations.

The system of linear equations in this case will be to solve for \( x \) in

\[ Bx = b \]

where

\[ B = A + UV^T \]

and \(B,A \) are \( n \times n \) matrices, and \(U,V\) are \(n \times k \) matrices. (If you’re following along from the previous post, this is a rank-k update rather than only a rank-1 update.) The idea here is that \( A \) has some very special structure which lets us perform very fast solves of systems \( Ay=c \), and that \( k \) is small.
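A quick sanity check of this setup: even with only \( A \) and \( B \) in hand, the difference \( B - A = UV^T \) has rank \( k \), which shows up as exactly \( k \) nonzero singular values. Here is a minimal sketch, using the same kind of random construction as the code later in the post:

```
import numpy as np

rng = np.random.default_rng(0)
n, k = 250, 4
A = rng.random((n, n))
u = rng.random((n, k))
v = rng.random((n, k))
B = A + u @ v.T

# B - A = u v^T has rank k: singular values beyond the
# k-th are at machine-precision level.
s = np.linalg.svd(B - A, compute_uv=False)
print(s[:k], s[k])
```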

In this post, however, I assume that we don’t know what \( U,V \) are, only that they exist. We therefore can’t use low-rank-update formulas, as they require explicit knowledge of \(U,V\).

This turns out to be fine. Below I show how to exploit this structure in Python, but first let’s see what happens if we do nothing except naively hand the system to GMRES.

If we try to solve with \(B\) directly by naively calling GMRES in Python, the results will not be that great:

```
import numpy as np
import scipy.sparse.linalg as sp
import scipy.linalg as lin

# Size of our problem
n = 250
# Size of our low-rank update
k = 4
# Make a random matrix A
A = np.random.rand(n, n)
# Make our random low-rank-update matrices
u = np.random.rand(n, k)
v = np.random.rand(n, k)
# Form the low-rank update B
B = A + u @ v.transpose()
# Form a random right-hand-side vector. We want to find "x" in the equation Bx=b.
b = np.random.rand(n)

# Define a simple callback that tells us
# how quickly GMRES is converging
it = 0
def callback(rk):
    global it
    print("it: {}, res: {}".format(it, np.linalg.norm(rk)))
    it = it + 1

sp.gmres(B, b, callback=callback, maxiter=n, restart=n)
```

Your mileage will vary, but on most of my runs this takes GMRES the full `n` iterations. Here is abbreviated output from my callback:

```
...
it: 234, res: 0.11409829482336799
it: 235, res: 0.11278962294171825
it: 236, res: 0.09578954258042327
it: 237, res: 0.09315292938350243
it: 238, res: 0.09138344557231964
it: 239, res: 0.08632416225466737
it: 240, res: 0.08583797506215765
it: 241, res: 0.08491903738978214
it: 242, res: 0.05876121296369029
it: 243, res: 0.05564899628997501
it: 244, res: 0.048411026937269265
it: 245, res: 0.048375868873523103
it: 246, res: 0.04800915530486452
it: 247, res: 0.041199189556598945
it: 248, res: 0.015848328657732898
it: 249, res: 1.865347135180585e-15
```

This is bad: GMRES is only useful as an algorithm if it takes *far* fewer iterations than the full problem size \( n \)! We need to exploit the low-rank structure.

To exploit the low-rank structure I will give \( A \) to GMRES as a *preconditioner*. Thus instead of solving

\[ Bx= b \]

we solve

\[ A^{-1} B x = A^{-1}b \].

In general it’s too expensive to explicitly form \(A^{-1} B \) for large \( n \), but since our standing assumption is that systems with \( A \) are fast to solve, we need only supply such a solve as a callback to GMRES. I demonstrate this below:

```
import numpy as np
import scipy.sparse.linalg as sp
import scipy.linalg as lin

# Size of our problem
n = 250
# Size of our low-rank update
k = 4
# Make a random matrix A
A = np.random.rand(n, n)
# Make our random low-rank-update matrices
u = np.random.rand(n, k)
v = np.random.rand(n, k)
# Form the low-rank update B
B = A + u @ v.transpose()
# Form a random right-hand-side vector. We want to find "x" in the equation Bx=b.
b = np.random.rand(n)

# Define a simple callback that tells us
# how quickly GMRES is converging
it = 0
def callback(rk):
    global it
    print("it: {}, res: {}".format(it, np.linalg.norm(rk)))
    it = it + 1

# Define a callback for solving linear systems with A.
# Here I just use a generic dense solver, but if A had
# useful structure we would use a corresponding fast
# solver instead.
def solveA(x):
    return lin.solve(A, x)

# Wrap the solver as a preconditioner input to GMRES
M = sp.LinearOperator(matvec=solveA, shape=(n, n))
sp.gmres(B, b, M=M, callback=callback, maxiter=n, restart=n)
```
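As a concrete (and hypothetical) illustration of the comment in `solveA`: if \( A \) happened to be lower triangular, the generic `lin.solve` could be swapped for forward substitution via `scipy.linalg.solve_triangular`. This is only a sketch of the idea; any structure with a fast solve (banded, circulant, and so on) works the same way. The shifted diagonal below is just there to keep the random triangular matrix well conditioned.

```
import numpy as np
import scipy.sparse.linalg as sp
import scipy.linalg as lin

n = 250
k = 4
rng = np.random.default_rng(1)
# Hypothetical structured case: A is lower triangular
# (with a shifted diagonal so it is well conditioned),
# so systems with A solve in O(n^2) by forward substitution.
A = np.tril(rng.random((n, n))) + n * np.eye(n)
u = rng.random((n, k))
v = rng.random((n, k))
B = A + u @ v.T
b = rng.random(n)

def solveA(x):
    # Forward substitution instead of a general LU solve
    return lin.solve_triangular(A, x, lower=True)

M = sp.LinearOperator(matvec=solveA, shape=(n, n))
x, info = sp.gmres(B, b, M=M, maxiter=n, restart=n)
print(info, np.linalg.norm(B @ x - b))
```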

With this preconditioner, GMRES takes exactly \( k+1 \) iterations!

```
it: 0, res: 2.1465936288193483
it: 1, res: 0.7839568884321737
it: 2, res: 0.7821841310319828
it: 3, res: 0.5683891782804404
it: 4, res: 3.800256062188319e-14
```

That means GMRES applies the preconditioner only \( k+1 \) times, so if \( k \) is small and solves with \( A \) are cheap then this is potentially a big win.

The trick here is that we need to know \( A \). If we do, then

\[ A^{-1} B = I + A^{-1} UV^T \].

This presents an ideal situation for GMRES, which converges quickly when the eigenvalues are clustered. Here we have *at least* \(n-k\) eigenvalues exactly equal to \( 1 \) (coming from the identity matrix \( I \)), and then up to \( k \) other eigenvalues coming from \( A^{-1} UV^T \). GMRES resolves the cluster of \(n-k\) unit eigenvalues in a single iteration, and the remaining \( k \) take roughly one iteration each (unless by chance they too happen to be clustered very closely together).
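We can check this clustering numerically with the same random construction as above. Forming \( A^{-1}B \) explicitly is fine at this small size (it is only a verification, not something we would do in practice):

```
import numpy as np
import scipy.linalg as lin

rng = np.random.default_rng(2)
n, k = 250, 4
A = rng.random((n, n))
u = rng.random((n, k))
v = rng.random((n, k))
B = A + u @ v.T

# Eigenvalues of A^{-1} B = I + A^{-1} u v^T
eigs = np.linalg.eigvals(lin.solve(A, B))
# At least n - k of them should sit (numerically) at 1
near_one = int(np.sum(np.abs(eigs - 1) < 1e-6))
print(near_one, "of", n, "eigenvalues are ~1")
```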

In this post I showed how to exploit low-rank-update structure even when it is unknown. I supplied the base matrix as a preconditioner to GMRES, and if the original system does indeed have low-rank-update structure then GMRES converges in just one more iteration than the rank of the update. Thus if the rank of the update is small, preconditioned GMRES converges rapidly, requiring only a few applications of the base matrix solver.