8/04/2012

Who said you can not multiply in a linear program?

Lately I have been using linear programming for work. Linear programs differ greatly from procedural programming: they are a way of stating an optimisation problem in a very effective way.

A linear program consists of variables, a target function, and constraints. Both the target function and the constraints are expressed in terms of the variables. The target function is the expression you aim to maximise or minimise subject to the constraints you specified.

It is not called linear by chance: both the target function and the constraints have to be linear, i.e. of the form \(a + b \cdot x \le c\) (or \(\ge\), or \(=\)) where \(a\), \(b\) and \(c\) are constants. Here we only have one variable \(x\), but you can have as many as your solver can handle. The variables can be of several types: integer, boolean (an integer restricted to the values 0 and 1), or real. Restricting a variable to a boolean makes it possible to describe optimisation problems that involve mappings.

Let's say we want to map the elements of two sets \(A\) and \(B\). Assume both sets are large enough that we need a computer, but small enough that enumerating all the combinations of elements remains practical. We also assume that we can calculate the cost of a given mapping, i.e. that the cost \(c_{11}\) of \(a_1\) being mapped to \(b_1\) is computable in practice. We can then define a target function as the sum of the costs of the active mappings: \(target = \sum_{i,j} m_{ij} \cdot c_{ij}\). The variable \(m_{ij}\) is 1 if \(a_i\) and \(b_j\) are mapped and 0 otherwise. We can then specify as constraints any other rules that we may want. At some stage the constraints may become constraints about the mappings themselves: we can easily imagine that the costs are altered if two particular elements are both mapped.

For the sake of the argument let's say that we want to decrease the cost by a constant \(z\) if \(a_1\) is mapped to \(b_1\) and \(a_2\) is mapped to \(b_2\). That means modifying the target function to be \(target = \sum_{i,j} m_{ij} \cdot c_{ij} - m_{11} \cdot m_{22} \cdot z\). This would all be fine if you could multiply the two variables \(m_{11}\) and \(m_{22}\): both mappings need to exist, i.e. be 1, for \(z\) to be subtracted from the target function. The thing with linear programs is that you cannot multiply two variables; the problem definition would then cease to be linear, and we would no longer be able to solve the mapping with the same techniques.

Here we are then truly stuck. We cannot perform an and; we cannot multiply the two boolean variables, which is another way of saying the same thing. But no, there is a way of doing it.

What we need is a set of constraints that together emulate the truth table of an and. That is, given three variables \(a\), \(b\) and \(c\), we want \(c\) to equal 1 only when both \(a\) and \(b\) are 1, and 0 otherwise. Bear in mind that the variables can only take one of two values, 0 or 1, and that the value a variable takes is always the one that maximises the target function if we maximise, or minimises it otherwise.

We will present here the method if we are maximising.

Since all the constraints have to hold, the trick is to first find a constraint that gives us the result we want: when both variables are 1, the result is also 1. \(a + b \le 1 + c\) is such a constraint. Pretty, but there is a catch. The problem is that for all the other cases, where either \(a\), \(b\) or both are 0, the constraint also holds for \(c\) being 1, which is not what we want. Not a cool thing, but we can define other constraints which will prevent those cases from occurring.

This can be done with the help of the two following constraints: \(c \le a\) and \(c \le b\). This means that when \(a\) or \(b\) is 0, \(c\) has to be 0. Technically these two constraints on their own would also allow \(a = 1, b = 1, c = 0\), but since we are maximising, \(c\) is forced to take the value 1 in that scenario. (NOTE: To better understand this last point look at the subtle role that the free variables play in the SIMPLEX algorithm.) If we then combine all the constraints we have:

Constraints mimicking an and operation:
\(a + b \le 1 + c\)
\(c \le a\)
\(c \le b\)

Which hold for the following values of \(a\), \(b\), and \(c\)

\(a = 0, b = 0, c = 0\)
\(a = 1, b = 0, c = 0\)
\(a = 0, b = 1, c = 0\)
\(a = 1, b = 1, c = 1\)

But not for:

\(a = 0, b = 0, c = 1\)
\(a = 1, b = 0, c = 1\)
\(a = 0, b = 1, c = 1\)
\(a = 1, b = 1, c = 0\)

Which is exactly what the truth table of an and looks like. So we have found a way to emulate an and, i.e. the multiplication of two boolean variables, in a linear program.
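The claim can be checked mechanically. Here is a minimal Python sketch (my own illustration, no solver needed) that enumerates all 0/1 assignments of \(a\), \(b\) and \(c\) and keeps those satisfying the three constraints:

```python
# Brute-force check: the assignments satisfying all three linearisation
# constraints are exactly the rows of the AND truth table.
from itertools import product

def satisfies(a, b, c):
    """True when (a, b, c) meets a + b <= 1 + c, c <= a, and c <= b."""
    return a + b <= 1 + c and c <= a and c <= b

feasible = [(a, b, c) for a, b, c in product((0, 1), repeat=3)
            if satisfies(a, b, c)]
print(feasible)  # [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

In every feasible triple, \(c\) equals \(a \cdot b\), which is exactly the multiplication we wanted to emulate.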

Impossible is just another word for look around.

12/19/2011

Clocks with dynamic hands... and how Fortran makes sense

I have recently encountered the need to iterate through arrays with a dynamic number of dimensions. The usual way of nesting for loops does not work any more, since the number of for loops would need to be adapted at run time. Generating code is cool, but doing it at runtime based on a dynamic data structure, in a language which is not LISP, can be infeasible.
The solution we propose comes in two parts: first, the vectorisation of the matrix in order to facilitate access to the elements, i.e. make it more consistent. When thinking about this, Fortran all of a sudden made sense.

The second part involves one of the possible implementations of the access to the data structure.
This new representation makes for a very simple way of iterating through arrays of dynamic dimensions.
Let's first provide a bit more context to illustrate all this.
In Octave you can create data structures with any number of dimensions.
When you want to iterate through them you either know the number of dimensions ahead of time, or you can use a handy operator ":" that converts any matrix to a vector. This operation is called vectorisation and as such can be found on Wikipedia.

This approach is fine, but if you have to do something special per row, column or any other slice of the array, playing with indices can get scary. A way of getting the best of both worlds is to calculate the index of each array position as a function of an index for each one of the dimensions. When figuring out the mapping, which is really not difficult, it becomes apparent why Fortran has its column-first strategy.
The mapping is indeed very simple, and even simpler if we apply the Fortran convention, with the help of an array $D$ which describes the dimensions of the matrix.
Besides validating that an element is inside bounds, $D$ can be used to compute quickly the index of the element in the vectorised form of the matrix.
For the bound check we have that, for a given index vector $V$, any of its elements $v_i$ for dimension $i$ has to be strictly bounded by the value of $D$ for the same dimension $i$, a.k.a. $d_i$, or in other words $0 \leq v_i < d_i$.
The position of an element in the vectorised form of the array is given by the following formula: $p = \sum_{i=0}^{|D|-1} v_i \prod_{j=0}^{i-1} d_j$. As you can see, the size of the last dimension, $d_{|D|-1}$, never appears in any of the products.
The cool thing about this formula is that it also covers the row-first, or C's, way of storing arrays: for a two-dimensional array all you need to do is swap $v_0$ and $v_1$ as well as $d_0$ and $d_1$ (in general, you reverse the order of the dimensions).
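The formula above can be sketched in a few lines of Python (the function name `linear_index` is my own); the stride for dimension $i$ is accumulated as the running product of the earlier dimensions:

```python
# Column-major (Fortran-style) position of multi-index V in an array of
# dimensions D, i.e. p = sum_i v_i * prod_{j<i} d_j.
def linear_index(V, D):
    assert all(0 <= v < d for v, d in zip(V, D)), "index out of bounds"
    p, stride = 0, 1
    for v, d in zip(V, D):
        p += v * stride   # v_i times the product of all earlier d_j
        stride *= d       # the size of the last dimension never affects p
    return p

# For a 3x4 array, element (1, 2) sits at 1 + 2*3 = 7 in column-major order.
print(linear_index((1, 2), (3, 4)))  # 7
```

Enumerating every multi-index of a given shape through this function produces each position exactly once, which is what makes the vectorised form safe to iterate over.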

The other method, although more involved programmatically, lends itself better to more complex manipulations. I like to call it the dynamic-hand clock, because it functions just like a clock, only the number of hands and the maximum value that they reach are dynamic.
The index of the current element is stored in an array $v$, where $v_i$ stores the current position for dimension $i$. Alongside the vector $v$ we also need a vector $m$ storing the maximum valid value for each dimension. The next index is then calculated by incrementing $v_i$ by 1 modulo the maximum value for that dimension. If a roll-over occurs, the vector is also incremented at position $i+1$.

It then becomes very easy to react on such roll-overs at each dimension, making this method very convenient for an iterator or a delegate pattern.
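A minimal Python sketch of this clock (names are my own, and I assume the first dimension is the fastest-rolling hand, matching the Fortran convention above):

```python
# "Dynamic hand clock" iterator: v holds the current position per
# dimension, m the exclusive maximum per dimension. Each step increments
# hand 0 modulo its maximum and carries the roll-over to the next hand.
def clock_indices(m):
    """Yield every index vector of an array with dimensions m."""
    v = [0] * len(m)
    while True:
        yield tuple(v)
        i = 0
        while i < len(m):
            v[i] += 1
            if v[i] < m[i]:
                break          # no roll-over: this step is done
            v[i] = 0           # roll-over: reset this hand...
            i += 1             # ...and increment the next one
        else:
            return             # the last hand rolled over: iteration ends

for idx in clock_indices((2, 3)):
    print(idx)  # (0, 0) (1, 0) (0, 1) (1, 1) (0, 2) (1, 2)
```

The roll-over point (the `v[i] = 0` branch) is exactly where an iterator or delegate would fire a per-row or per-slice callback.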

6/22/2011

First test with AsciiMathML


Since I am now going through a phase of re-connecting with my past,
I have decided to give this old blog some new posts.
I am back to writing code, but mostly in Octave, which, when integrated with strongly typed languages, does pose some problems. Notably, how to iterate over a data structure which is completely dynamic, pretty much like the arrays in Octave, and to do so without reverting to linked lists or other containers which may hurt performance and may not be as fun.

As part of preparing to post that solution, I wanted to experiment with AsciiMathML, since I will need it fairly soon.

The only thing preventing me from posting the solution is that I still want to figure out one small part of the problem, and that it is 1:35 and I have an early flight tomorrow.
But enough rambling. In order to describe my solution I will need to use equations of the sort:

$idx= j+i|J|+k|J||I|$

You can imagine what this is all about.

4/09/2006

Fibbo K

One of the oldest, most visited problems in any computer science course is the Fibonacci series; a very good explanation, including sample algorithms, can be found in the almighty Wikipedia.

This program presents a slight extension in that you can calculate series formed by the sum of the k prior elements. All you have to do is change the size of the array that stores the k prior numbers.

There is just a little trick to work out: in which element of the array the nth element will be stored.


;working array of only two elements
;If the array is initialised to size k the program will calculate the series of order k
;That is f(i)=f(i-1)+...+f(i-k); when k=2 it is the Fibonacci series
(defparameter arr (make-array 2))

;initialising the array
;builds the seed values for k as (0 ... k-1)
(dotimes (i (array-dimension arr 0))
  (setf (aref arr i) i))

;function summing all the elements in a vector
(defun sum-elements (vect)
  (let ((total 0))
    (dotimes (i (array-dimension vect 0))
      (setq total (+ total (aref vect i))))
    total))

;function calculating the fibonacci-k series without recursion,
;re-using the array as a ring buffer
(defun fib (pos)
  (let ((buff-size (array-dimension arr 0)))
    (dotimes (i pos)
      (setf (aref arr (mod i buff-size)) (sum-elements arr)))
    ;the nth element ends up at position (n-1) mod k
    (aref arr (mod (- pos 1) buff-size))))

(print '(I will calculate for you the nth fibonacci number n=))
(defparameter num (read))
(print '(using k=))
(print (array-dimension arr 0))

(defparameter res (fib num))

(print res)
;(print arr)


This is one of the typical algorithms that gains a lot from being de-recursified. Like this the memory footprint is low and so is the computation time, as we only add a modulus operation, and everyone is happy.

4/06/2006

Renversant?

I decided to name this code snippet after the French term for reversing, because this algorithm does just that. It iterates through the perimeter of the array, extracts the anti-diagonal (the (-1,1) direction) and reverses that list. Then it simply assigns it back to the same array.


Little idea extracted from an exercise in "Programming Pearls".


The program reads:



;example of array transposition based on rotations
;The method consists of the following: we reverse each of the anti-diagonals
;first we have to extract and then re-assign each of the elements

(defparameter arr (make-array '(11 11)))

;initialise the array
(dotimes (i (array-dimension arr 0))
  (dotimes (j (array-dimension arr 1))
    (setf (aref arr i j) (+ (* i (array-dimension arr 0)) j))))

;utility for computing the sum of the elements in a list
(defun sum-list (in)
  (let ((total 0))
    (loop for item in in do
      (setq total (+ total item)))
    total))

;utility for working out the first co-ordinate of the next anti-diagonal:
;walk down the first column, then along the last row
(defun next-diag (pos dim)
  (let ((x (car pos))
        (y (cadr pos)))
    (if (< x dim)
        (list (+ x 1) y)
        (list x (+ y 1)))))

;extract the contents of the anti-diagonal of the array
;starting at the specified co-ordinates
(defun extract-diagonal (pos data)
  (let* ((x (car pos))
         (y (cadr pos))
         (temp (list (aref data x y))))
    (loop while (and (> x 0) (< y (- (array-dimension data 1) 1))) do
      (setq x (- x 1))
      (setq y (+ y 1))
      (setq temp (append temp (list (aref data x y)))))
    temp))

;write a list back along the anti-diagonal
;starting at the specified co-ordinates
(defun insert-diagonal (pos data diagonal)
  (let ((x (car pos))
        (y (cadr pos)))
    (dolist (k diagonal)
      (setf (aref data x y) k)
      (setq x (- x 1))
      (setq y (+ y 1)))))

(defun transpose (data)
  (let* ((xdim (- (array-dimension data 0) 1))
         ;skip the one-element corner diagonals, which need no reversing
         (modif (if (evenp xdim) 2 3))
         (numdiag (- (sum-list (array-dimensions data)) modif))
         (diagpos '(0 0)))
    (dotimes (i numdiag)
      (setq diagpos (next-diag diagpos xdim))
      ;step one extract the diagonal to a list and reverse it
      (let ((diagonal (reverse (extract-diagonal diagpos data))))
        (print diagonal)
        ;step two write it back in place, step three profit ;-)
        (insert-diagonal diagpos data diagonal)))))

(print 'Before)
(print arr)
(print 'transpose)
(transpose arr)
(print 'After)
(print arr)


Program output is also very pretty:


prozak@Mistral:~/Exploration/lisp/pearls%clisp transpose.lisp

BEFORE
#2A((0 1 2 3 4 5 6 7 8 9 10)
(11 12 13 14 15 16 17 18 19 20 21)
(22 23 24 25 26 27 28 29 30 31 32)
(33 34 35 36 37 38 39 40 41 42 43)
(44 45 46 47 48 49 50 51 52 53 54)
(55 56 57 58 59 60 61 62 63 64 65)
(66 67 68 69 70 71 72 73 74 75 76)
(77 78 79 80 81 82 83 84 85 86 87)
(88 89 90 91 92 93 94 95 96 97 98)
(99 100 101 102 103 104 105 106 107 108 109)
(110 111 112 113 114 115 116 117 118 119 120))
TRANSPOSE
(1 11)
(2 12 22)
(3 13 23 33)
(4 14 24 34 44)
(5 15 25 35 45 55)
(6 16 26 36 46 56 66)
(7 17 27 37 47 57 67 77)
(8 18 28 38 48 58 68 78 88)
(9 19 29 39 49 59 69 79 89 99)
(10 20 30 40 50 60 70 80 90 100 110)
(21 31 41 51 61 71 81 91 101 111)
(32 42 52 62 72 82 92 102 112)
(43 53 63 73 83 93 103 113)
(54 64 74 84 94 104 114)
(65 75 85 95 105 115)
(76 86 96 106 116)
(87 97 107 117)
(98 108 118)
(109 119)
(120)
AFTER
#2A((0 11 22 33 44 55 66 77 88 99 110)
(1 12 23 34 45 56 67 78 89 100 111)
(2 13 24 35 46 57 68 79 90 101 112)
(3 14 25 36 47 58 69 80 91 102 113)
(4 15 26 37 48 59 70 81 92 103 114)
(5 16 27 38 49 60 71 82 93 104 115)
(6 17 28 39 50 61 72 83 94 105 116)
(7 18 29 40 51 62 73 84 95 106 117)
(8 19 30 41 52 63 74 85 96 107 118)
(9 20 31 42 53 64 75 86 97 108 119)
(10 21 32 43 54 65 76 87 98 109 120))

This is the sort of algorithm that may be handy if you have little memory and need to do everything in place: there is no need for a second array, making things easy on the memory side of things.

4/02/2006

A friend of mine passed me today a very good book, Programming Pearls by Jon Bentley. In it I saw a method for rotating the elements of a vector that I just had to implement. Since I now mainly write Java for work, I decided to implement it in Lisp.

The method for rotating the vector is very simple and elegant; it all holds in the following steps.

To rotate an array of size N by k positions:

  1. We start by reversing the first k elements,
  2. Then we reverse the N-k last elements,
  3. Finally we reverse the whole array.


It's simple and completely foolproof, one of those methods that just work out of the box; the code is so simple that it can not go wrong. If it appears unclear, play around with some examples until it clicks.

Here are the same three lines translated to Lisp.

;rotate the collection to the left by num elements
(defun rot (collection num)
  (myreverse collection 0 (- num 1))
  (myreverse collection num (- (length collection) 1))
  (myreverse collection 0 (- (length collection) 1)))



I decided not to use any libraries, to make the example easier to understand, so here is the full code of a demo:

;Example of an in-place rotation by num elements in O(N)
;this implementation is as presented in Programming Pearls by Jon Bentley

(defparameter arr (make-array 23))

;reverse, in place, the elements of data between start and end (inclusive)
(defun myreverse (data start end)
  (dotimes (i (ceiling (- end start) 2))
    (swap data (- end i) (+ start i))))

;standard swapping of two elements in a vector
(defun swap (vect i j)
  (let ((temp (aref vect i)))
    (setf (aref vect i) (aref vect j))
    (setf (aref vect j) temp)))

;rotate the collection to the left by num elements
(defun rot (collection num)
  (myreverse collection 0 (- num 1))
  (myreverse collection num (- (length collection) 1))
  (myreverse collection 0 (- (length collection) 1)))

;initialising the collection with simple integers
(dotimes (i (length arr))
  (setf (aref arr i) i))

;invoking the rotation of five elements
(print "Before: ")
(print arr)

(print "(rot arr 5)")
(rot arr 5)
(print "After: ")
(print arr)


Running it in your favourite Lisp interpreter will give you:


prozak@Mistral:~/Exploration/lisp/pearls%clisp rot.lisp

"Before: "
#(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22)
"(rot arr 5)"
"After: "
#(5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 0 1 2 3 4)



There is some inherent beauty in the simple, basic elements of programming; today I had the chance to re-discover the basics. I wonder what it will be like if I finally get round to reading Knuth's books. ;-)

3/11/2006

What is this?

I have spent a lot of my life writing, writing about what happens in my life or around me. Now I spend most of my time writing code, so I will write about it here. Weapons of choice: Java, Perl, Lisp, SciLab, and any other thing that I happen to get my hands on. Watch this space, if you may...