simplex, rand, fifties
here’s more on the links between early digital computing in aerospace (whirlwind, mit instrumentation lab) and economics (rand santa monica). since there were only a handful of computers and coders in the early fifties, the overlap isn’t so surprising, and another puzzle piece fell into place this week. here’s a comment from a post back in april.
covariance matrices are quadratic forms, with ‘squared units’. the same idea appeared in finance via markowitz portfolio theory at the same time. it would be interesting to know how much contact there was between kalman and markowitz, or whether it was simply an effect of the availability of digital computers and fortran. this also took place only a few years after dantzig and von neumann’s work on the simplex algorithm, and it’s possible connections can be found across the whole group.
dantzig moved to rand santa monica in 1952, specifically to do the reference implementation of simplex on the ibm cpc, 701, and 704. it seems probable that most of his work with von neumann came just before that, on the east coast, and focused on less computational concepts such as duality. dantzig was certainly in washington dc from 1950 to 1952, working on the seac computer at the national bureau of standards. at that time, laning was creating the fortran prelude ‘george’ on whirlwind and moving to the mit instrumentation lab to work on missile guidance with battin, where kalman would soon enter the story.
the numerical challenge for simplex was matrix inversion, and the solution came at rand with the revised simplex method and the pfi, the product form of the inverse. the pfi let revised simplex stay in ‘inverse space’ and operate on column vectors, the now-familiar ‘leaving’ and ‘entering’ vectors.
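as a rough sketch of the idea, in modern numpy rather than anything resembling the original rand code: the basis inverse is never formed explicitly, it’s kept as a product of elementary ‘eta’ matrices, and pivoting an entering column into the basis just appends one more eta. the function names and the tiny example below are mine, purely illustrative.

```python
import numpy as np

def ftran(etas, a):
    """apply the implicit inverse (a product of eta matrices) to a column."""
    x = np.array(a, dtype=float)
    for r, eta in etas:              # each eta matrix is identity except column r
        xr = x[r]
        x[r] = 0.0
        x += xr * eta
    return x

def pfi_update(etas, entering_col, r):
    """the entering column replaces basic position r; append one more eta."""
    u = ftran(etas, entering_col)    # u = B^{-1} a_q
    eta = -u / u[r]
    eta[r] = 1.0 / u[r]
    etas.append((r, eta))

# tiny check: start from B = I (empty eta file), then pivot two columns in
etas = []
a1 = np.array([2.0, 1.0, 0.0])
pfi_update(etas, a1, 0)              # a1 enters at position 0
a2 = np.array([1.0, 3.0, 1.0])
pfi_update(etas, a2, 1)              # a2 enters at position 1

B_new = np.column_stack([a1, a2, np.array([0.0, 0.0, 1.0])])
b = np.array([4.0, 5.0, 6.0])
print(np.allclose(ftran(etas, b), np.linalg.solve(B_new, b)))   # True
```

the eta file is exactly the ‘inverse space’ remark above: each iteration changes one basis column, so the inverse is updated by appending one sparse factor instead of recomputing or explicitly inverting anything.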
the new puzzle piece is that markowitz joined dantzig at rand, and by 1955 had introduced a particular type of pfi that’s still standard today, the efi, or elimination form of the inverse. the efi is closely related to the ‘by hand’ gaussian elimination taught in every linear algebra course. it’s in fact the lu (lower-upper) triangular factorization, though it’s not clear that terminology existed at the time, and it leads onward to the cholesky factorization. these are in a sense ‘matrix square-roots’, and besides solving equations, they’re important for dealing with numerical issues in the ‘squared units’ of covariance matrices. see for example the ‘square-root’ kalman filter. cholesky was used for the earth gravity field solutions generated by the nasa topex and grace missions at the center for space research in austin, part of the motivation for bringing cray supercomputers to utexas back in the seventies and eighties.
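a minimal sketch of the two factorizations in today’s numpy/scipy terms (the library calls and the little matrix are just illustration, nothing from the 1955 efi code): lu records gaussian elimination as triangular factors, and cholesky is the symmetric special case, the ‘matrix square-root’ that square-root kalman filters carry around instead of the covariance itself.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])     # symmetric positive definite, like a covariance

# lu: gaussian elimination recorded as triangular factors, A = P L U
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))    # True

# cholesky: the symmetric case, a 'matrix square-root' with A = C C^T
C = np.linalg.cholesky(A)
print(np.allclose(C @ C.T, A))      # True

# solving A x = b through the triangular factor, no explicit inverse formed
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(C.T, np.linalg.solve(C, b))
print(np.allclose(A @ x, b))        # True
```

keeping the factor C around rather than the full covariance is the same ‘squared units’ point: C carries the units of the variables themselves, which is part of why square-root filters behave better in limited precision.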
it will be interesting to learn what happened between 1955 and 1960. in this period, markowitz and kalman both became associated with covariance matrices, in very different contexts, but the end result was similar: representing uncertainty within linear algebra code, and extending the concept of probabilistic variance to multiple variables.