Friday, July 6, 2007

In Support of Math Based Computer Science

Earlier today, I ran across a book review of Computer Science Reconsidered: The Invocation Model of Process Expression. The premise of the book, at least as I gathered it secondhand from the review:
Mathematicians and computer scientists are pursuing fundamentally different aims, and the mathematician's tools are not as appropriate as was once supposed to the questions of the computer scientist
I had a number of reactions to that immediately. Most of them were, frankly, emotional. Certainly most of the code that gets churned out these days has very little conscious basis in Mathematics. But I would argue it doesn't have much of a conscious basis in Computer Science, Algorithms, or Finite Automata either, all of which are definitely critical some of the time. That is largely because so much of it is so well understood, and so abstracted away from first principles, that the underlying rigor isn't required to get the day-to-day bits churned out. The more important the code is, the further it moves toward the rigorous end of that spectrum.

But when we're really reaching for something new, something interesting, and something that isn't just an incremental change from the conventional wisdom, then my instincts say that Math provides a very valuable framework for describing something out of nothingness.

My gut was validated just hours later while reading an IEEE journal article on the so-called FastTCP congestion control algorithm and its interplay with active queue management (AQM). The approach looks at congestion control as "a distributed algorithm over the Internet to solve a global optimization problem [.. to ..] determine the equilibrium and performance of the network":
Moreover, the underlying optimization problem has a simple structure that allows us to efficiently compute these equilibrium properties numerically, even for a large network that is hard to simulate.

Specifically, we can regard each source as having a utility function that measures its “happiness” as a function of its data rate. Consider the problem of maximizing the sum of all source utility functions over their rates, subject to link capacity constraints. This is a standard constrained optimization problem for which many iterative solutions exist. The challenge in our context is to solve for the optimal source rates in a distributed manner using only local information. A key feature we exploit is the duality theory. It says that associated with our (primal) utility maximization problem is a dual minimization problem. Whereas the primal variables over which utility is to be maximized are source rates, the dual variables for the dual problem are congestion measures at the links. Moreover, solving the dual problem is equivalent to solving the primal problem. There is a class of optimization algorithms that iteratively solve for both the primal and dual problems at once.

TCP/AQM can be interpreted as such a primal-dual algorithm that is distributed and decentralized, and solves both the primal and dual problems. TCP iterates on the source rates (a source increases or decreases its window in response to congestion in its path), and AQM iterates on the congestion measures (e.g., loss probability at a link increases or decreases as sources traversing that link increase or decrease their rates). They cooperate to determine iteratively the network operating point that maximizes aggregate utility. When this iterative process converges, the equilibrium source rates are optimal solutions of the primal problem and the equilibrium congestion measures are optimal solutions of the dual problem. The throughput and fairness of the network are thus determined by the TCP algorithm and the associated utility function, whereas utilization, loss, and delay are determined by the AQM algorithm.
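
To see why duality does so much work here, it helps to write the problem down. What follows is my own paraphrase in standard network utility maximization notation, not the article's exact formulation: let $x_s$ be the rate of source $s$, $c_l$ the capacity of link $l$, $S(l)$ the set of sources crossing link $l$, and $L(s)$ the set of links on source $s$'s path. The primal problem is

$$\max_{x \ge 0} \; \sum_s U_s(x_s) \quad \text{subject to} \quad \sum_{s \in S(l)} x_s \le c_l \quad \text{for every link } l.$$

Attaching a multiplier $p_l \ge 0$ to each link constraint (these are the congestion measures), the Lagrangian separates by source:

$$L(x, p) = \sum_s \Bigl( U_s(x_s) - x_s q_s \Bigr) + \sum_l p_l c_l, \qquad q_s = \sum_{l \in L(s)} p_l,$$

so each source can maximize its own term knowing only the aggregate price $q_s$ along its path. That separability is precisely the "local information" property the authors exploit, and the dual problem $\min_{p \ge 0} \max_{x \ge 0} L(x, p)$ is the one the links end up solving.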
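
To make the primal-dual iteration in that last paragraph concrete, here is a minimal sketch in Python. To be clear, this is a toy of my own construction, not the actual FastTCP or AQM update rules: two sources share one bottleneck link, each with log utility U_s(x) = w_s log(x); the sources play the "TCP" role by taking gradient steps on their rates, and the link plays the "AQM" role by taking gradient steps on its price.

# A toy primal-dual rate-control loop: two sources share one link.
# My own illustration of the structure described above, NOT the
# actual FastTCP/AQM update rules.

capacity = 10.0          # link capacity c
weights = [1.0, 2.0]     # utility weights: U_s(x) = w_s * log(x)
rates = [1.0, 1.0]       # primal variables: source rates x_s
price = 0.1              # dual variable: the link's congestion measure p
step = 0.01              # gradient step size

for _ in range(20000):
    # "AQM" side (dual update): the price rises when aggregate demand
    # exceeds capacity and falls (but never below zero) otherwise.
    price = max(0.0, price + step * (sum(rates) - capacity))
    # "TCP" side (primal update): each source adjusts its rate using only
    # the price on its own path, following the gradient U_s'(x_s) - p.
    for s in range(len(rates)):
        rates[s] = max(1e-6, rates[s] + step * (weights[s] / rates[s] - price))

print("rates:", [round(x, 2) for x in rates], "price:", round(price, 3))
# Expected equilibrium: rates ~ [3.33, 6.67] and price ~ 0.3, since at
# the optimum w_s / x_s = p for each source and x_1 + x_2 = capacity.

With log utilities the converged rates split the capacity in proportion to the weights, the proportional-fairness outcome. The point of the sketch is just that every update uses only locally observable quantities, exactly as the quoted passage describes.
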
It seems clear here that Math provides a very strong underpinning for what the article needs to describe and achieve. To be fair to the author of the original book, he was trying to promote another basis for expressing key Computer Science ideas: "the invocation model of process expression". From a casual glance it looks interesting; I just don't get why you have to tear down something old (e.g. "The Problem: Why the underlying theory of contemporary computer science is not helpful") in order to build up something new.

Maybe being shocking is good for selling books. Though I'm not sure labeling math as not helpful is all that shocking to the general book-buying population.

Check out www.fastsoft.com, where some of the authors of that paper have created a clever hardware bridge that seamlessly migrates a legacy TCP data center to one that sends with the FastTCP congestion control algorithm.