
Consider the following algorithm for linear programming, minimizing [c,x] subject to A.x <= b.


(1) Start with a feasible point x_0

(2) Given a feasible point x_k, find the greatest alpha such that x_k - alpha.c is admissible (straightforward: look at the ratios of the components of b - A.x_k to those of A.c)

(3) Take the inward-pointing unit normal n of the hyperplane just reached. Project n onto the plane orthogonal to c, giving r = n - ([n,c]/[c,c]).c, then look for the greatest beta for which x_k - alpha.c + beta.r is admissible. Set x_{k+1} = x_k - alpha.c + (1/2).beta.r

(4) If x_{k+1} is within tolerance of x_k, return it; otherwise go back to (2)
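For concreteness, here is a rough NumPy sketch of the iteration above. The function names, tolerance and unboundedness check are my own, and it assumes a bounded, full-dimensional feasible region {x : A.x <= b} with a strictly feasible starting point x_0:

    import numpy as np

    def max_step(A, b, x, d):
        """Largest t >= 0 with A @ (x + t*d) <= b, plus the row that becomes active."""
        slack = b - A @ x            # nonnegative when x is feasible
        Ad = A @ d
        blocking = Ad > 1e-12        # only rows the step moves towards can be violated
        if not blocking.any():
            return np.inf, None
        rows = np.flatnonzero(blocking)
        ratios = slack[rows] / Ad[rows]
        j = np.argmin(ratios)
        return ratios[j], rows[j]

    def kick_back_lp(A, b, c, x0, tol=1e-9, max_iter=1000):
        """Sketch of the iteration described in the question."""
        x = np.asarray(x0, dtype=float)
        c = np.asarray(c, dtype=float)
        for _ in range(max_iter):
            # (2) follow -c until a constraint is hit
            alpha, i = max_step(A, b, x, -c)
            if i is None:
                raise ValueError("unbounded along -c")
            y = x - alpha * c
            # (3) inward unit normal of the active hyperplane, projected onto
            #     the plane orthogonal to c, then move halfway to the next wall
            n = -A[i] / np.linalg.norm(A[i])
            r = n - (n @ c) / (c @ c) * c
            beta, _ = max_step(A, b, y, r)
            if np.isfinite(beta):
                y = y + 0.5 * beta * r
            # (4) stop when the iterate no longer moves
            if np.linalg.norm(y - x) < tol:
                return y
            x = y
        return x

Calling kick_back_lp(A, b, c, x0) with A of shape (m, n) returns the last iterate.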


The basic idea is to follow the gradient until one hits a wall. Then, rather than following the shell of the simplex, like the simplex algorithm would do, the solution is kicked back inside the simplex, on a plane where the solutions are no worse, in the direction of the normal vector. The solution moves halfway between the starting point and the next constraint in this direction. It's no worse than before, but now it's more "inside" the simplex, where it has a shot at taking long leaps towards the optimum.

  • Though the probability of hitting an intersection of more than one hyperplane is 0, if one gets close enough to multiple hyperplanes within a certain tolerance, the average of the normals may be taken.

  • This can be generalized to any convex objective function by following geodesics on the level sets of the function. For quadratic programming in particular, one rotates the solution towards the inside of the simplex.


Questions:

  • Does this algorithm have a name or fall within a category of linear-programming algorithms?

  • Does it have an obvious flaw that I'm overlooking?

sounds like a homework problem –  Tim Tisdall Oct 23 '13 at 17:11
 
no, not by any stretch of the imagination –  Arthur B. Oct 23 '13 at 22:48

2 Answers


I am pretty sure this doesn't work, unless I am missing something: your algorithm will not start moving in most cases.

Assume your variable x is taken in R^n.

A polyhedron of the form Ax <= b is contained in a 'maximal' affine subspace P of dimension p <= n, and usually p is much smaller than n (there will be many equality constraints, implicit or explicit; you cannot assume that an expression for P is easy to obtain from A and b).

Now assume you can find an initial point x_0 (which is far from obvious, by the way); there is very little chance that the direction of the gradient c is a feasible direction. You would need to consider the projection of the direction c onto P, and this is very difficult to do in practice (how would you compute such a projection?).
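To make the objection concrete: if the equality constraints defining P were known explicitly, say as E.x = f, the projection would just be the standard orthogonal projection onto the null space of E; the difficulty the answer points at is that E is generally not available from A and b. A hypothetical sketch (names mine), assuming such an E with full row rank were given:

    import numpy as np

    def project_onto_P_directions(c, E):
        """Orthogonal projection of c onto the null space of E,
        i.e. onto the directions of P = {x : E @ x = f}."""
        y = np.linalg.solve(E @ E.T, E @ c)   # normal equations for the component along the rows of E
        return c - E.T @ y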

Then, what you want in your step (3) is not the normal direction of the hyperplane you reached, but again its projection onto P (visualizing the polyhedron as a 2D polyhedron embedded in 3D space can help).

There is a very good reason why barrier functions are used in interior point methods: it is very difficult to describe in practice the geometry of high-dimensional convex sets from the constraints (even simple ones like polyhedra), and things that "seem obvious" when you draw a picture on a piece of paper will usually not work when the dimension of the polyhedron increases.

One last point is that your algorithm would not give the exact solution, whereas the simplex method does in theory (and I read somewhere it can be done in practice by working with exact rational numbers).

I am indeed assuming that my polyhedron isn't flat. I'm interested in a class of problems where there are no equality constraints; I will clarify this. –  Arthur B. Oct 24 '13 at 12:28
 
However, this shows how a flattish simplex can be problematic. In R^2, suppose we want to minimize the y coordinate and the simplex looks like a very thin needle at a 45-degree angle; the algorithm will end up taking tiny steps down and to the left. –  Arthur B. Oct 24 '13 at 13:11
 
Clearly there are some polyhedra on which the simplex method will perform poorly. However, consider the minimization of y over x<=1, y<=1, y<=x. The simplex method will take at most one iteration to get the optimal solution; your algorithm starting from (1,1) will take log2(n) iterations to get a solution "close" to the optimal value within a tolerance of 1/n! –  Nicolas Grebille Oct 25 '13 at 11:24

Read up on interior point methods: http://en.wikipedia.org/wiki/Interior_point_method

This approach can have nice theoretical properties, but its performance tends to tail off in practice.

Yes, it is an interior point method, but it doesn't try to set up barriers, so it's quite different from the method described in the wikipedia article. –  Arthur B. Oct 23 '13 at 22:51
