A method that produces a sequence of numerical approximations converging (provided technical conditions are satisfied) to the solution of a problem, generally through repeated application of some procedure. Examples include Newton's method for root finding, and Jacobi iteration for ...
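As a concrete illustration of the definition, here is a minimal Newton's method sketch in Python (the test function, tolerance, and iteration cap are illustrative choices, not part of the definition):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:  # successive iterates agree to tolerance
            return x
    raise RuntimeError("did not converge")

# Example: root of x^2 - 2, i.e. sqrt(2), starting from x0 = 1.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))
```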
2 votes · 1 answer · 66 views
Jacobi Iteration diverges?
I'm working through a problem in a textbook as follows:
"Consider the $d \times d$ Toeplitz matrix
$$ A = \left[ \begin{array}{ccccc} 2 & 1 & 0 & \cdots & 0 \\
-1 &2 ...
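A minimal Jacobi sweep in NumPy for the matrix above (the dimension `d` and right-hand side `b` are arbitrary illustrative choices); printing the residual norm per sweep shows directly whether the iteration converges or diverges:

```python
import numpy as np

d = 20
A = 2.0 * np.eye(d) + np.diag(np.ones(d - 1), 1) - np.diag(np.ones(d - 1), -1)
b = np.ones(d)

# Jacobi: x_{k+1} = D^{-1} (b - (A - D) x_k), with D = diag(A).
D = np.diag(A)
R = A - np.diag(D)
x = np.zeros(d)
for k in range(50):
    x = (b - R @ x) / D
    print(k, np.linalg.norm(b - A @ x))  # residual norm per sweep
```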
2 votes · 0 answers · 52 views
What are Implications of Commutative Diagrams?
This question may be too broad, but I really want to see some concrete explanations. I often find various commutative diagrams appearing here and there, which concern the application order of two operators. ...
7 votes · 2 answers · 149 views
Looking for an algorithm that allocates climbing hold colors to wall sectors
I posted this question earlier on stackoverflow, where it was closed as off-topic. I hope it survives here.
In our climbing gym, the routes need to be re-set from time to time. The following rules ...
5 votes · 2 answers · 52 views
Levenberg optimizer halts quickly when given more variables, or fewer constraints
I'm using the g2o C++ optimization library to refine a GPS trajectory using accelerometer data.
The program uses a Levenberg-Marquardt optimizer over data points representing the position and ...
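Setting g2o's API aside, the core Levenberg-Marquardt update is compact; here is a generic single-step sketch in NumPy (`J`, `r`, and `lam` are illustrative placeholders, not g2o names). A large damping `lam`, or a tiny predicted step, is one common reason such an optimizer terminates early on under-constrained problems:

```python
import numpy as np

def lm_step(J, r, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lam*I) dx = -J^T r.

    Large lam -> short, gradient-descent-like steps; small lam -> Gauss-Newton.
    """
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), -J.T @ r)
```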
4 votes · 3 answers · 144 views
Fastest polynomial root finder for a given accuracy
I am looking for a very fast polynomial root finder, ideally with MATLAB code. I don't need very accurate results; 2-3 decimal places would be fine. Also, the method should be able to optionally ...
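For 2-3 decimal places, the standard companion-matrix eigenvalue approach (what MATLAB's `roots` and NumPy's `np.roots` use) is usually fast enough; a quick sketch with an illustrative cubic:

```python
import numpy as np

# Roots of p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3),
# found as eigenvalues of the companion matrix (np.roots does this internally).
coeffs = [1.0, -6.0, 11.0, -6.0]  # highest-degree coefficient first
print(np.round(np.roots(coeffs), 3))  # approximately [3., 2., 1.]
```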
9 votes · 3 answers · 234 views
Understanding the “rate of convergence” for iterative methods
According to Wikipedia the rate of convergence is expressed as a specific ratio of vector norms. I'm trying to understand the difference between "linear" and "quadratic" rates, at different points of ...
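For reference, the usual definitions in terms of the error $e_k = x_k - x^*$ are
$$\|e_{k+1}\| \le C\,\|e_k\|\ \ (0 < C < 1) \quad \text{(linear)}, \qquad \|e_{k+1}\| \le C\,\|e_k\|^2 \quad \text{(quadratic)};$$
with quadratic convergence the number of correct digits roughly doubles per iteration, while with linear convergence it grows by a fixed amount.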
0 votes · 2 answers · 99 views
Checking for error in conjugate gradient algorithm
What is a good way to check whether any numerical error has occurred in the conjugate gradient algorithm? Additionally, why is it not suggested to check for error by checking the A-orthogonality of the search direction ...
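One common check, sketched below on a small random SPD system (sizes and tolerances are illustrative): compare the recursively updated CG residual against the true residual $b - Ax_k$. In exact arithmetic they coincide, so their drift apart measures accumulated rounding error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x        # recursively updated residual
p = r.copy()
for k in range(n):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap                     # recursive update
    gap = np.linalg.norm(r_new - (b - A @ x))  # vs. explicitly recomputed residual
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
    if np.linalg.norm(r) < 1e-10:
        break
print(f"iterations: {k}, recursive-vs-true residual gap: {gap:.2e}")
```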
4 votes · 2 answers · 106 views
Krylov subspace iterative methods in floating point arithmetic
Is there any work that considers Krylov subspace iterative methods in floating point arithmetic? I'm especially interested in how rounding errors influence the convergence and the accuracy of the ...
5 votes · 2 answers · 246 views
How to remove Rigid Body Motions in Linear Elasticity?
I want to solve $K u = b$ where $K$ is my stiffness matrix. However, some constraints may be missing, and therefore some rigid body motion may still be present in the system (due to a zero eigenvalue). ...
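One standard remedy, sketched here on a tiny free-free 1D problem (all names and sizes are illustrative): if the rigid-body modes span the columns of a matrix $R$ with $KR = 0$, augment the system with Lagrange multipliers enforcing $R^T u = 0$:

```python
import numpy as np

# Free-free 1D Laplacian: singular, with null space spanned by the constant vector.
n = 5
K = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
K[0, 0] = K[-1, -1] = 1.0
b = np.array([1.0, 0.0, 0.0, 0.0, -1.0])  # self-equilibrated: orthogonal to null space

R = np.ones((n, 1))  # rigid-body mode(s); here K @ R == 0

# Saddle-point system [[K, R], [R^T, 0]] [u; lam] = [b; 0] pins the rigid-body component.
S = np.block([[K, R], [R.T, np.zeros((1, 1))]])
sol = np.linalg.solve(S, np.concatenate([b, [0.0]]))
u = sol[:n]
print(u, "R^T u =", (R.T @ u).item())
```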
3 votes · 2 answers · 118 views
Does the right-hand side influence the convergence rate of a Krylov subspace method?
Consider a general system $Ax=b$. Does the convergence of Krylov subspace methods depend on the actual vector $b$, assuming the initial guess is zero? I mean such factors as the locality of the source (with the ...
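One quick way to see the effect, using a hand-rolled CG on a diagonal test matrix (an illustrative setup, not a general proof): if $b$ lies in the span of a few eigenvectors, CG finishes in correspondingly few iterations, while a generic $b$ exercises the whole spectrum:

```python
import numpy as np

def cg_iters(A, b, tol=1e-10):
    """Number of CG iterations until ||r|| < tol * ||b||."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    for k in range(1, len(b) + 1):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return k
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return k

n = 100
A = np.diag(np.linspace(1.0, 100.0, n))        # eigenvectors are the coordinate axes
b_few = np.zeros(n); b_few[[3, 50, 97]] = 1.0  # spans only 3 eigenvectors
b_all = np.ones(n)                             # has components in all of them
print(cg_iters(A, b_few), cg_iters(A, b_all))  # few vs. many iterations
```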
8 votes · 1 answer · 130 views
What is the current state of polynomial preconditioners?
I wonder what has happened to polynomial preconditioners. I am interested in them, because they appear to be comparatively elegant from a mathematical perspective, but as far as I have read in surveys ...
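For context, the simplest member of the family is the truncated Neumann series $M^{-1} = \sum_{k=0}^{m} (I - A)^k \approx A^{-1}$, applied matrix-free; a sketch (assuming $A$ has been scaled so that $\|I - A\| < 1$):

```python
import numpy as np

def neumann_apply(A, r, m=3):
    """Apply M^{-1} r = sum_{k=0}^{m} (I - A)^k r using only mat-vecs."""
    z = r.copy()
    term = r.copy()
    for _ in range(m):
        term = term - A @ term  # multiply the previous term by (I - A)
        z += term
    return z
```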
5 votes · 3 answers · 280 views
Eigenvectors with the Power Iteration
To compute the eigenvector corresponding to the dominant eigenvalue of a symmetric matrix $A\in\mathbb{R}^{n\times n}$, one uses the power iteration, i.e., given some random initialization, ...
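For reference, a minimal power iteration sketch in NumPy (the random symmetric test matrix and fixed iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2.0  # symmetric test matrix

# Power iteration: repeatedly apply A and renormalize.
v = rng.standard_normal(50)
v /= np.linalg.norm(v)
for _ in range(500):
    w = A @ v
    v = w / np.linalg.norm(w)

lam = v @ A @ v  # Rayleigh quotient: estimate of the dominant eigenvalue
print(lam, np.max(np.abs(np.linalg.eigvalsh(A))))  # |lam| should match the second value
```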
4 votes · 4 answers · 378 views
Simulated Annealing proof of convergence
I implemented the downhill simplex simulated annealing algorithm. The algorithm is very hard to tune with respect to parameters including the cooling schedule, the starting temperature, ...
My first question is about ...
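As a point of reference for those parameters, here is a bare-bones simulated annealing loop with geometric cooling (a plain random-walk version on a toy objective, not the downhill-simplex variant; the starting temperature and cooling factor are arbitrary):

```python
import math, random

random.seed(0)

def f(x):  # toy multimodal objective
    return x * x + 10.0 * math.sin(3.0 * x)

x, fx = 5.0, f(5.0)
T = 10.0               # starting temperature (tuning parameter)
while T > 1e-3:
    x_new = x + random.gauss(0.0, 1.0)
    fx_new = f(x_new)
    # Metropolis rule: always accept downhill, accept uphill with prob exp(-dF/T).
    if fx_new < fx or random.random() < math.exp((fx - fx_new) / T):
        x, fx = x_new, fx_new
    T *= 0.99          # geometric cooling schedule (tuning parameter)
print(x, fx)
```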
1 vote · 3 answers · 138 views
Convergence of the gradient descent and linear vs non-linear fixed point iteration
Suppose a system $$Ax=b$$ is given, with $A\in\mathbb{R}^{n\times n}$ being a symmetric positive-definite matrix, and some non-zero $b\in\mathbb{R}^n$. The gradient method with optimum step length can ...
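For reference, the gradient method with the optimal step $\alpha_k = r_k^T r_k / (r_k^T A r_k)$ is a few lines; a sketch on a small random SPD system (sizes and tolerance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
for k in range(10_000):
    r = b - A @ x  # residual = negative gradient of (1/2) x^T A x - b^T x
    if np.linalg.norm(r) < 1e-10:
        break
    alpha = (r @ r) / (r @ (A @ r))  # exact line search along r
    x += alpha * r
print(k, np.linalg.norm(b - A @ x))
```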
6 votes · 2 answers · 589 views
How to choose a method for solving linear equations
To my knowledge, there are four ways to solve a system of linear equations (correct me if there are more):
If the system matrix is a full-rank square matrix, you can use Cramer’s Rule;
Compute the ...
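Whatever the full taxonomy, for a dense full-rank square matrix the practical answer is a factorization rather than Cramer's rule; a sketch contrasting the usual NumPy entry points (illustrative data):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)                     # square, full rank: LU factorization
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # rectangular or rank-deficient: least squares
print(x, x_ls)
```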