Research in Progress: Nested Dissection Sparse QR Factorization

November 13, 2020

I am especially curious about sparse QR factorization because it works effectively on nonsymmetric matrices, does not usually require pivoting for stability, and is not negatively affected by the spectral properties of a matrix (e.g. indefiniteness). I have some questions about the algorithm which I am actively working to answer:

  1. Can a nested dissection approach improve parallelism and data locality opportunities for sparse QR factorization?
  2. Can I use nested dissection to directly define a sparse QR factorization recursively? (This would lead to easier proofs of key facts.)
  3. Can big blocks of fill-in be compressed, e.g. with a randomized SVD?
  4. Can a compressed/approximate version of this factorization serve as a good preconditioner for GMRES?

My preliminary answers to these questions are:

  1. Yes
  2. I think so
  3. It seems so, but this needs more experimentation on different matrices (see the sketch just after this list)
  4. If (1) and (3) hold, then very likely.
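
To make question 3 concrete, here is a bare-bones sketch of the kind of compression I mean: a textbook randomized SVD (Gaussian sketch, range finder, small dense SVD). The helper name `compress_block` and the specific oversampling choice are just for this post, not the exact scheme I am experimenting with.

```python
import numpy as np

def compress_block(B, rank, oversample=10, rng=None):
    """Rank-`rank` approximation of a dense fill-in block B via a
    randomized range finder followed by a small dense SVD."""
    rng = np.random.default_rng(rng)
    m, n = B.shape
    k = min(rank + oversample, min(m, n))
    Y = B @ rng.standard_normal((n, k))      # sample the column space of B
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for range(B)
    U_small, s, Vt = np.linalg.svd(Q.T @ B, full_matrices=False)
    U = Q @ U_small                          # lift back to the original space
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Toy check: a block that is numerically low rank compresses essentially exactly.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 120))
U, s, Vt = compress_block(B, rank=15)
print(np.linalg.norm(B - (U * s) @ Vt) / np.linalg.norm(B))   # ~1e-15
```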

Answering these questions requires a hefty dose of graph theory on top of the numerical linear algebra, as direct methods always do. “Divide and conquer” strategies common in dense linear algebra simply will not work in the sparse case: applied naively, the fill-in they generate leads to explosively large memory requirements. Judicious use of graph algorithms can help preserve sparsity and keep memory needs under control.
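
As a small illustration of what the graph theory buys you, here is a toy nested dissection ordering of the 5-point 2D Laplacian. Everything here is simplified for the post (made-up helper names, single-node-wide separators, alternating split direction); the point is only the structure it exposes: after reordering, the two halves of the grid have no direct coupling, so they can be eliminated independently, with all of their interaction confined to the separator unknowns numbered last.

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(n):
    """5-point Laplacian on an n-by-n grid, natural (row-major) ordering."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

def nested_dissection_order(rows, cols, depth):
    """List grid points as: left half, right half, separator (numbered last),
    recursing into each half and alternating the split direction."""
    if depth == 0 or len(rows) <= 2 or len(cols) <= 2:
        return [(i, j) for i in rows for j in cols]
    if len(cols) >= len(rows):                      # split along a middle column
        m = len(cols) // 2
        halves = (nested_dissection_order(rows, cols[:m], depth - 1)
                  + nested_dissection_order(rows, cols[m + 1:], depth - 1))
        sep = [(i, cols[m]) for i in rows]
    else:                                           # split along a middle row
        m = len(rows) // 2
        halves = (nested_dissection_order(rows[:m], cols, depth - 1)
                  + nested_dissection_order(rows[m + 1:], cols, depth - 1))
        sep = [(rows[m], j) for j in cols]
    return halves + sep

n = 9
A = laplacian_2d(n).toarray()
order = nested_dissection_order(list(range(n)), list(range(n)), depth=2)
p = [i * n + j for i, j in order]                   # grid point -> matrix index
B = A[np.ix_(p, p)]

# With the top-level dissection, the first half of the grid (36 unknowns) and
# the second half (36 unknowns) only touch each other through the 9 separator
# unknowns numbered last: the off-diagonal block between them is exactly zero.
h = (n // 2) * n
print(np.count_nonzero(B[:h, h:2 * h]))             # 0: the halves are decoupled
```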

I have not fully answered all of the questions above, but I continue to work on them. I might upload a paper to arXiv describing my findings. In the meantime, I have made a movie of a sparse QR factorization as it processes a 2D Laplacian matrix that has already been reordered according to a few steps of nested dissection. Movies like this really help in understanding what is going on and why fill-in occurs where it does.
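
The frames behind such a movie are easy to produce: record the nonzero pattern of the partially formed R after each column is eliminated, then render the patterns as images. This is not the exact code behind my movie, just a minimal dense Householder sketch of the idea; `qr_snapshots` is a throwaway name and the 1e-12 drop tolerance is arbitrary.

```python
import numpy as np
import scipy.sparse as sp

def qr_snapshots(A, tol=1e-12):
    """Householder QR on a dense copy of A, recording the nonzero pattern
    of the upper-triangular part after each column is eliminated."""
    R = np.array(A, dtype=float, copy=True)
    m, n = R.shape
    frames = []
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0] if x[0] != 0 else 1.0)
        nv = np.linalg.norm(v)
        if nv > 0:
            v /= nv
            # Apply the Householder reflection I - 2 v v^T to the trailing block.
            R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        frames.append(np.abs(np.triu(R)) > tol)      # pattern of R so far
    return np.triu(R), frames

# Small 2D Laplacian; permute it with a nested-dissection ordering first
# (e.g. the sketch above) to watch where that ordering pushes the fill-in.
n = 8
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).toarray()
R, frames = qr_snapshots(A)
print([int(f.sum()) for f in frames[:5]])            # fill-in growing step by step
# Each entry of `frames` can be drawn with matplotlib's plt.spy to make one frame.
```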

I will update my blog as I learn more.