Computational Science Stack Exchange is a question and answer site for scientists using computers to solve scientific problems. It's 100% free, no registration required.

I intend to solve $Ax = b$ where $A$ is a complex, sparse, unsymmetric, and highly ill-conditioned (condition number ~ 1E+20) square or rectangular matrix. I have been able to solve the system accurately with ZGELSS from LAPACK. But as the degrees of freedom in my system grow, ZGELSS takes a long time on a PC because the sparsity is not exploited. Recently I tried SuperLU (using Harwell-Boeing storage) on the same system, but the results were inaccurate for condition numbers above 1E+12 (I am not sure whether this is a numerical issue with the pivoting).

I am more inclined towards using already developed solvers. Is there a robust solver which can solve the system I mentioned quickly (i.e. exploiting the sparsity) and reliably (in view of the condition numbers)?

Can you precondition it? If so, Krylov subspace methods could be effective. Even if you insist on direct methods, preconditioning will help control numerical errors. – Geoff Oxberry Apr 27 at 7:50
I also made good experience with preconditioning pretty much the way it is described here: en.wikipedia.org/wiki/… You can do the preconditioning in exact arithmetic. My matrices however are all dense, so cannot point you to more specific methods/routines here. – AlexE Apr 27 at 10:24
Why is the condition number so large? Perhaps the formulation can be improved to make the system better conditioned? In general, you cannot expect to be able to evaluate a residual more accurately than $(\text{machine precision})\cdot(\text{condition number})$, which makes Krylov of little value once you have run out of bits. If the condition number is really $10^{20}$, you should use quad precision (__float128 with GCC, supported by a few packages including PETSc). – Jed Brown Apr 27 at 14:19
Where are you getting this condition number estimate from? If you ask Matlab to estimate the condition number of a matrix with a null space, it might give you infinity or sometimes it might just give you a really huge number (like what you have). If the system you're looking at has a null space and you know what it is, you can project it out and what you're left with might have a better condition number. Then you can use PETSc or Trilinos or what have you. – Daniel Shapero Apr 27 at 15:58
Daniel - the truncated SVD method used by ZGELSS determines the null space (the singular vectors associated with tiny singular values in the SVD are a basis for $N(A)$) and finds the least squares solution to $\min \| Ax-b \|$ over $N(A)^{\perp}$. – Brian Borchers Apr 27 at 20:56

2 Answers

When you use ZGELSS to solve this problem, you're using the truncated singular value decomposition to regularize this extremely ill-conditioned problem. It's important to understand that this library routine is not attempting to find a least squares solution to $Ax=b$; rather, it is attempting to balance finding a solution that minimizes $\| x \|$ against minimizing $\| Ax-b \|$.

Note that the parameter RCOND passed to ZGELSS can be used to specify which singular values should be included in and excluded from the computation of the solution. Any singular value less than RCOND*S(1) (where S(1) is the largest singular value) will be ignored. You haven't told us how you've set the RCOND parameter in ZGELSS, and we know nothing about the noise level of the coefficients in your $A$ matrix or in the right hand side $b$, so it's hard to say whether you've used an appropriate amount of regularization.

You do seem to be happy with the regularized solutions that you're getting with ZGELSS, so it appears that the regularization effected by the truncated SVD method (which finds a minimum $\| x \|$ solution among the least squares solutions that minimize $\| Ax-b \|$ over the space of solutions spanned by the singular vectors associated with the singular values larger than RCOND*S(1)) is satisfactory to you.
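For reference, SciPy exposes the same SVD-based LAPACK driver through `scipy.linalg.lstsq`, where the `cond` argument plays the role of RCOND. A minimal sketch (the matrix here is an arbitrary made-up example forced to be numerically rank-deficient, and the cutoff `1e-8` is an assumed value, not a recommendation):

```python
import numpy as np
from scipy.linalg import lstsq

# Small, hypothetical ill-conditioned complex system for illustration only.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5))
A[:, 4] = A[:, 0] + 1e-13 * A[:, 2]      # force near rank deficiency
b = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# 'gelss' is the same SVD-based driver as ZGELSS; `cond` plays the role
# of RCOND: singular values below cond * s[0] are treated as zero.
x, res, rank, s = lstsq(A, b, cond=1e-8, lapack_driver="gelss")

print(rank)          # effective numerical rank after truncation
print(s[0] / s[-1])  # apparent condition number from the full spectrum
```

The returned `rank` tells you how many singular values survived the cutoff, which is a quick way to see how much regularization a given RCOND actually applies to your matrix.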

Your question could be reformulated as "How can I efficiently obtain regularized least squares solutions to this large, sparse, and very ill-conditioned linear least squares problem?"

My recommendation would be to use an iterative method (such as CGLS or LSQR) to minimize the explicitly regularized least squares problem

$\min \| Ax-b \|^{2} + \alpha^{2} \| x \|^{2}$

where the regularization parameter $\alpha$ is adjusted so that the damped least squares problem is well conditioned and so that you're happy with the resulting regularized solutions.
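As a sketch of this approach: SciPy's LSQR implementation accepts a `damp` argument that is exactly the $\alpha$ above, and it works matrix-free on sparse operators. The matrix, dimensions, and $\alpha$ below are placeholders, and the example is real-valued for simplicity (a complex-capable LSQR/LSMR would be needed for the asker's actual system):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Hypothetical sparse least squares problem for illustration.
rng = np.random.default_rng(1)
A = sparse_random(2000, 1500, density=0.01, random_state=1, format="csr")
b = rng.standard_normal(2000)

# lsqr with damp=alpha minimizes ||Ax - b||^2 + alpha^2 ||x||^2
# without ever forming A^T A, so sparsity is fully exploited.
alpha = 1e-3          # regularization parameter; must be tuned per problem
result = lsqr(A, b, damp=alpha, atol=1e-10, btol=1e-10)
x = result[0]
```

Increasing `alpha` makes the damped system better conditioned (and LSQR converge faster) at the cost of a more heavily regularized solution, so in practice one scans several values of `alpha` and picks one by an L-curve or discrepancy criterion.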

My apologies for not mentioning this at the outset. The problem being solved is Helmholtz equation of acoustics using FEM. The system is poorly conditioned because of the plane wave basis used to approximate the solution. – Ganesh Diwan Apr 28 at 2:58
Where do the coefficients in $A$ and $b$ come from? Are they measured data? "exact" values from the design of some object (that in practice can't be machined to tolerances that are 15 digits...)? – Brian Borchers Apr 28 at 3:35
The matrices A and b are formed using the weak formulation of Helmholtz PDE, see: asadl.org/jasa/resource/1/jasman/v119/i3/… – Ganesh Diwan Apr 28 at 4:09

Jed Brown has already pointed this out in the comments to the question, but there is really not very much you can do in usual double precision if your condition number is that large: in most cases you will likely not get a single digit of accuracy in your solution and, worse, you can't even tell, because you can't accurately evaluate the residual corresponding to your solution vector. In other words, it's not a question of which linear solver you should choose: no linear solver can do anything useful for such matrices.

These sorts of situations typically happen because an unsuitable basis was chosen. For example, you get such ill-conditioned matrices if you choose the functions $1,x,x^2,x^3,\ldots$ as the basis of a Galerkin method. (This leads to the Hilbert matrix, which is notoriously ill-conditioned.) The solution in such cases is not to ask which solver can solve the linear system, but to ask whether there are better bases that can be used. I would encourage you to do the same: think about reformulating your problem so that you don't end up with these kinds of matrices.
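The Hilbert matrix example is easy to check numerically. Its entries $H_{ij} = 1/(i+j-1)$ are the inner products of the monomial basis on $[0,1]$, and its condition number grows exponentially with the dimension:

```python
import numpy as np
from scipy.linalg import hilbert

# The Gram matrix of the monomial basis 1, x, x^2, ... on [0, 1] is the
# Hilbert matrix; its condition number blows up exponentially with n,
# so double precision is exhausted well before n = 20.
for n in (5, 10, 15):
    print(n, np.linalg.cond(hilbert(n)))
```

Already at $n = 10$ the condition number exceeds $10^{12}$, so most of the 16 or so digits of double precision are lost to the basis choice alone, before any solver runs.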

