Friday, December 9, 2011

An observation about ns1687037


Hans Mittelmann tests various optimization software packages on his benchmark website. One of the test problems is ns1687037. An inspection of that problem reveals many constraint blocks of the form

R0002624: 5.015e+004 C0024008   ....+ C0025749 = 5.0307748e+007
R0002625: 0e+000 <= - C0025749 + C0025750
R0002626: 0e+000 <= C0025749 + C0025750
R0002627: 1.9877659e-008 C0025750 - C0025751 = 0e+000
                  C0025749 is free

These constraints imply

 -C0025750  <=  C0025749 <=  C0025750
1.9877659e-008 C0025750  = C0025751

and hence

1.9877659e-008   abs( C0025749)  <= C0025751

In other words, variable C0025749 is an elastic/penalty variable for constraint R0002624. Moreover, we see that variable C0025751 is roughly 1.0e-8 times the absolute value of the elastic variable in constraint R0002624. Now the right-hand side of constraint R0002624 is of the order 10^7, and hence it is more or less the rounding error that is penalized. That makes me wonder whether that model provides reliable results or does what the author intends!
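
A back-of-the-envelope calculation makes the point concrete. Below is a small Python sketch using the coefficients from the constraint block above; the assumption that C0025751 is what ends up being penalized in the objective is mine. Even a violation of R0002624 at the level of a typical relative feasibility tolerance only contributes on the order of 1e-8 to the penalty:

  # Rough illustration; the coefficients are copied from ns1687037, the rest is assumed.
  rhs = 5.0307748e+007        # right-hand side of R0002624
  coef = 1.9877659e-008       # links C0025750 (and hence |C0025749|) to C0025751
  feas_tol = 1.0e-008         # a typical relative feasibility tolerance

  violation = feas_tol * rhs  # absolute violation a solver may legitimately leave
  print(coef * violation)     # resulting penalty term: about 1e-8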

Wednesday, November 23, 2011

A follow up on lower bounds on fill-in

This is a follow-up on my previous post about lower bounds on the fill-in when doing a sparse Cholesky factorization.

I posted my question to the NA-NET and below is a commented version of the replies I got.

Sivan Toledo pointed out the paper:

  • A Polynomial Approximation Algorithm for the Minimum Fill-In Problem
  • A. Natanzon, R. Shamir and R. Sharan
  • SIAM Journal on Computing, Vol. 30, No. 4, pp. 1067-1079 (2000)

and mentioned there might be a paper by Phil Klein. Most likely it is the paper:
  • Cutting down on fill using nested dissection: provably good elimination orderings 
  • Ajit Agrawal, Philip N. Klein, and R. Ravi 
  • Graph Theory and Sparse Matrix Computation, edited by A. George, J. Gilbert, and J. W. H. Liu, volume 56 in the IMA Volumes in Mathematics and its Applications, Springer-Verlag (1993), pp. 31-55.
Related to the above work is the Ph.D. thesis by Wee-Liang Heng with the title "Approximately Optimal Elimination Orderings for Sparse Matrices". I have a printed copy somewhere but cannot locate it now.

The above-mentioned works provide algorithms for computing a good symmetric permutation. However, to the best of my knowledge they are not used in practice, but they are definitely something I should check out.

Esmond Ng replied and said it is hard to come up with lower bounds for general matrices, but mentioned that bounds can be obtained for special matrices. The relevant papers are

  • Complexity Bounds for Regular Finite Difference and Finite Element Grids
  • Hoffman, Alan J.; Martin, Michael S.; Rose, Donald J.
  • SIAM Journal on Numerical Analysis, vol. 10, no. 2, pp. 364-369.
and
  • Nested Dissection of a Regular Finite Element Mesh 
  • A. George
  • SIAM J. Numer. Anal. 10, pp. 345-363.
I am aware of this work, but I was mainly looking for information about general matrices, since the matrices I encounter in MOSEK almost never have grid structure. MOSEK is an optimization package, and the underlying applications are mostly from economics and planning, which give rise to very different matrices than those coming from physics applications.
Jeff Ovall pointed to a different line of research in his reply:
If you are willing to relax your notion of fill-in a bit, then I may be able to point you in a helpful direction. Instead of thinking of fill-in in terms of putting non-zeros where there used to be zeros, one can also think of it as (slightly) increasing the rank of low-rank blocks. For example, the inverse of the tridiagonal matrix with stencil (-1,2,-1) has no zero entries, but the rank of any off-diagonal block is precisely one, so the "fill-in" in this sense is small (requiring only O(n log n) storage instead of n^2). Hierarchical matrices (Hackbusch, Grasedyck, Boerm, Bebendorf, ...) exploit this notion of low-rank fill-in not only to compress the "important" information in a matrix, but also to maintain a compressed format while performing factorizations (LU, Cholesky). From the algebraic point of view, it is the Rank-Nullity Theorem which implies that inverses of sparse matrices will have large blocks which are of low rank. If the LU-factorization is thought of block-wise, then this theorem also has something to say about the ranks of the blocks which appear, though it is not obvious to me how to get sharp lower bounds. The paper:
  • The interplay of ranks of submatrices
  • Strang, G. & Nguyen, T. 
  • SIAM Review, Vol. 46, pp. 637-646 (2004)
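
Out of curiosity, the rank-one observation in Jeff's reply is easy to check numerically. Here is a small sketch of my own, using numpy:

  import numpy as np

  n = 10
  K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # stencil (-1, 2, -1)
  Kinv = np.linalg.inv(K)

  print(np.count_nonzero(Kinv) == n * n)      # the inverse is completely dense
  print(np.linalg.matrix_rank(Kinv[:5, 5:]))  # but any off-diagonal block has rank 1
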
To summarize, there does not seem to be any known good lower bound on the minimal amount of fill-in possible when computing a sparse Cholesky factorization of a symmetrically permuted matrix.




Thursday, November 17, 2011

Lower bounds on the minimal fill-in when factorizing a symmetric positive definite matrix. Any help out there?

When computing a Cholesky factorization of a positive definite symmetric matrix A, it is well known that a suitable symmetric reordering helps keep the number of fill-ins and the total number of flops down.

Finding the optimal ordering is NP-hard, but good orderings can be obtained with the minimum degree and nested dissection (aka graph partitioning based) algorithms. Those algorithms provide an upper bound on the minimal amount of fill-in. However, I wonder: is there any way to compute a nontrivial lower bound on the minimal amount of fill-in?
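
To make the notion concrete, here is a small Python sketch (mine, not MOSEK code) that counts the fill-in produced by a given elimination order on the graph of the matrix; any such count is of course an upper bound on the minimal fill-in:

  import itertools

  def fill_in(adjacency, order):
      # adjacency: dict {vertex: set of neighbours} for the symmetric sparsity pattern
      # order:     elimination order (a permutation of the vertices)
      adj = {v: set(nbrs) for v, nbrs in adjacency.items()}
      fill = 0
      for v in order:
          nbrs = list(adj.pop(v))
          for u in nbrs:
              adj[u].discard(v)
          # eliminating v makes its remaining neighbours pairwise adjacent
          for u, w in itertools.combinations(nbrs, 2):
              if w not in adj[u]:
                  adj[u].add(w)
                  adj[w].add(u)
                  fill += 1
      return fill

  # Arrow-head pattern: vertex 0 connected to vertices 1..5.
  adjacency = {0: {1, 2, 3, 4, 5}}
  adjacency.update({i: {0} for i in range(1, 6)})
  print(fill_in(adjacency, [0, 1, 2, 3, 4, 5]))  # eliminate the centre first: 10 fill-ins
  print(fill_in(adjacency, [1, 2, 3, 4, 5, 0]))  # eliminate the leaves first:  0 fill-ins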

I have been searching the literature but have not found any good reference. Do you have any suggestions? The problem sounds hard, I know.

Why is a lower bound important? Well, it would help evaluate the quality of the ordering algorithms that we have implemented in MOSEK (www.mosek.com). Moreover, the minimum degree ordering is usually computationally cheap whereas nested dissection is expensive. A lower bound could help me determine when the minimum degree ordering could potentially be improved by using nested dissection.




Thursday, September 8, 2011

Conference in Honor of Etienne Loute

Yesterday, I was at a conference honoring Etienne Loute, where I presented the talk "Convex Optimization: Conic Versus Functional Form". The message of the talk is that if you can formulate an optimization problem as a conic quadratic optimization problem (aka SOCP), then there are many good reasons to prefer this form over other forms (a small illustration is given after the list). Some reasons are:
  • Conic quadratic problems are convex by construction.
  • They are almost as simple as LPs to deal with in software.
  • Duality theory is almost as simple as the linear case.
  • The (Nesterov-Todd) primal-dual algorithm for conic quadratic problems is extremely good.
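
As a small illustration of the conic form (this particular example was not part of the talk; it is just the standard reformulation), consider minimizing the residual norm ||Ax - b||. In conic quadratic form it becomes

  min  t
  st.  A x - r = b
       (t, r) in Q        (the quadratic cone, i.e. t >= ||r||)

so a nonlinear objective is traded for a linear objective plus a cone membership, which is exactly the kind of structure the primal-dual algorithm handles well.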

I had feared that the distinguished audience would consider my talk too simple. However, the speaker before me was Bob Fourer of AMPL. His talk was about checking convexity of general optimization problems formulated in AMPL, and it had two parts. The first part was about checking convexity, and the second part was about how, in some cases, optimization problems in functional form can automatically be converted to conic quadratic form. So the two talks complemented each other very well, and that made me feel better about my own talk.

Finally, I would like to mention that Yurii Nesterov was one of the speakers. He must have enjoyed the day, since two speakers were talking about his baby, optimization over symmetric cones.

Monday, May 23, 2011

A report from SIAM Optimization 2011

Last week I was at the SIAM Optimization 2011 conference in Darmstadt. It was a very nice conference for the following reasons:
  • It was very easy to get there since Darmstadt is close to the major airport in Frankfurt.
  • The conference center, the Darmstadtium, in the center of Darmstadt was excellent.
  • The hotel was excellent and only 2 mins walk from the conference center.
  • The food was fairly cheap and very good. The same was true for the beer.
  • The scientific program was excellent but also very packed. I did not experience any no shows. 
Some of the major topics at the conference were MINLP (mixed-integer nonlinear optimization) and SDP (semi-definite optimization). These two topics kind of came together in the plenary talk of Jon Lee on the last day, because it seems that SDP will play an important role in MINLP. In fact, he concluded his talk by saying: "We need powerful SDP software". Since we plan to support SDP in MOSEK, this was a nice conclusion for us.

One of the most interesting talks I saw was a tutorial by my friend Etienne de Klerk. He discussed a preprocessing technique that can be used in semi-definite optimization to reduce the computational complexity for some problems. The idea is that a big semi-definite variable can be decomposed into a number of smaller semi-definite variables, given that some transformations are performed on the problem data.

Another major topic at the conference was sparse optimization, where by sparse is meant that the solution is sparse. There was a plenary talk about this topic by Steve Wright and a tutorial by Michael Friedlander. Michael presented, among other things, a framework that can be used to understand and evaluate the various algorithms suggested for solving sparse-solution optimization problems. This topic was also addressed in a talk about robust support vector machines by Laurent El Ghaoui, which I liked quite a bit.

Finally, a couple of MOSEK guys gave a presentation in a session. The other speakers in that session were Joachim Löfberg, the author of YALMIP, and Christian Bliek of IBM. Christian talked about the conic interior-point optimizer in CPLEX. One slightly surprising announcement Christian made was that CPLEX will employ the homogeneous model in its interior-point optimizer as the default from version 12.3. In MOSEK the homogeneous model has always been the default.


I have definitely overlooked or forgotten something important from the conference, but with a 4-day conference running from early morning to late evening, that is unavoidable, unfortunately.

Wednesday, April 27, 2011

Formulating linear programs is hard!

When I was younger I taught linear programming (LP) to business students. One lesson I learned is that formulating an LP is much harder than it appears when reading a standard textbook. Recently I came across the paper "Formulating Integer Linear Programs: A Rogues’ Gallery", which provides many ideas that can be useful when formulating LPs.

Wednesday, April 20, 2011

In need of an optimal basis improvement procedure in linear programming.

One of our MOSEK customers is solving an LP with the primal simplex optimizer. Next she does a sensitivity analysis, but that fails because the optimal basis is singular. This should not happen in an ideal world, but it can happen for at least three reasons:
  • A bug may cause the optimal basis to be singular.
  • The primal simplex optimizer works on a presolved and scaled problem, whereas the sensitivity analysis is performed on the original problem. The basis might be well-conditioned in the presolved and scaled "space" but not in the original space.
  • The simplex optimizer usually starts with a nicely conditioned basis. In each iteration an LU representation of the basis is updated using rank-1 updates. Using an idea by John Tomlin it is possible in some cases to discover during the rank-1 update that the basis has become ill-conditioned, in which case the basis can be refactorized before the iterations continue. Since the detection only works in some cases, it might very well be that an ill-conditioned basis is not discovered, particularly if it becomes ill-conditioned in the last simplex iteration.
Since most real-world LPs have multiple optimal basic solutions, looking for the best-conditioned (near) optimal basis might be very useful before doing sensitivity analysis or even hotstart. Finding the best-conditioned optimal basis is most likely not computationally feasible, but maybe it can be done in an approximate way.
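
For what it is worth, a cheap way to get a warning about an ill-conditioned basis outside the simplex code is to factorize it once and look at the diagonal of U. A rough sketch of my own, using scipy (this is only a heuristic indicator, not a true condition number estimate):

  import numpy as np
  from scipy.sparse import csc_matrix
  from scipy.sparse.linalg import splu

  def basis_condition_indicator(B):
      # B: the (square) optimal basis matrix, i.e. the basic columns of A
      lu = splu(csc_matrix(B))
      d = np.abs(lu.U.diagonal())
      return d.max() / d.min()   # a huge ratio suggests the basis is suspect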

At ISMP 2009 I saw a talk about cutting plane methods, where there was some relation between the quality of the cuts generated and the conditioning of the basis.

Btw. the John Tomlin article I am referring to is "An Accuracy Test for Updating Triangular Factors", Mathematical Programming Study 4, pp. 142-145 (1975).

Wednesday, March 2, 2011

Is it safe to move lower bounds to zero?

Assume we have the problem

  min c'x
  st.   A x = b        (P0)
         x >= l

where l is large in absolute value, say l_j = -1000 for all j (l is short for lower bounds). The dual problem is


   max b'y + l's
   st.    A'y + s = c  (D0)
           s >= 0

It is very common to transform the problem, by shifting the variables so the lower bounds become zero (i.e. replacing x by x + l), to

  min c'x + c'l
  st.   A x = b- A l  (P1)
         x >= 0

for efficiency reasons. Indeed all interior-point optimizers will do that.

The dual problem is

   max (b - A l)'y + c'l
   st.    A'y + s = c  (D1)
           s >= 0


Let us say we solve (P1) and (D1). Moreover,

   A'y + s = c

holds only approximately, which will definitely be the case for interior-point methods. To be precise, we have that

   A'y + s = c + e

holds exactly, where 0 < ||e|| << 1.


This implies that if (y,s) from (D1) is reported as an optimal solution to (D0), then there can be a big error in the dual objective value; the (D0) objective b'y + l's differs from the (D1) objective by exactly l'e, which can be large when l is large in absolute value even though e is tiny. Note that this is not the case if l = 0. Now if we instead report (y, c - A'y) as the optimal dual solution to (D0), then the objective value will be correct, but s >= 0 might be violated.

The question is which dual solution to report. I guess the answer is that it depends on your priorities.

I will leave it as an exercise for the interested reader to construct a small example demonstrating this, since I just spent all day figuring out that this was happening on an instance with 100K variables.
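
That said, a rough sketch of such an experiment is easy to put together; the one below uses made-up data and simply simulates the residual e instead of running an actual interior-point solver:

  import numpy as np

  np.random.seed(0)
  m, n = 5, 10
  A = np.random.randn(m, n)
  b = A @ np.random.rand(n)
  c = np.random.rand(n)
  l = -1000.0 * np.ones(n)            # large lower bounds as in (P0)

  y = np.random.randn(m)              # some dual point for (D1)
  e = 1e-9 * np.random.randn(n)       # tiny residual in A'y + s = c + e
  s = c + e - A.T @ y                 # so that A'y + s = c + e holds exactly

  obj_D0 = b @ y + l @ s              # (D0) objective if (y, s) is reported as is
  obj_D1 = (b - A @ l) @ y + c @ l    # (D1) objective
  print(obj_D0 - obj_D1)              # equals l'e: the tiny residual magnified by l
  print(l @ e)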