Thursday, May 6, 2010

Motivating the Divide and Conquer paradigm

Just the other day I went to my old university and met former teachers and collaborators... fun!

I also attended a lecture on algorithms. The topic: Divide and Conquer.

The professor presented a very simple example to make the technique apparent. The problem? Find the minimum element of a list. Instead of doing the usual (in pseudocode):

result = +oo;
foreach e in list {
  result = min(e, result)
}

The divide 'n conquer approach would be:

function findMin(list) {
  if (list.size == 1) return list[0]
  else {
    k = list.size / 2
    return min(
      findMin(list.subList(0, k)),          // first half
      findMin(list.subList(k, list.size)))  // second half
  }
}

One could ask, "what's the point?" In this example, it doesn't sound important at all. A student with good practical experience in programming would readily recognize that the latter, sophisticated approach is likely to perform worse than the straightforward iteration. Indeed, a student there asked, "do we gain anything in time complexity by following the latter approach in this example?" Unfortunately, the answer was no, both versions are O(n), so the exercise appears to be pointless.


Quite some years after my undergraduate studies, and reflecting on that experience, this was my main gripe with how Computer Science was being taught: without proper motivation! The student must somehow "guess" the importance of the covered topics by him/herself. An example dialogue that could ensue between students: "-Hey, what did you learn today? -Nothing much, something called 'divide and conquer'; it's just a way to create complicated solutions when simple ones that are just as good already exist".


Since I was attending, I thought I should intervene and give the very, very, very important motivation for learning this algorithm design paradigm. The reason is the all-too-well-known fact that CPUs have stopped getting faster; they only become more numerous. This is easy to recognize, but it also has a tremendous implication for how we design software. Even for this simplistic problem, the first method of computing the minimum element is doomed. It may not fail tomorrow, it might not fail next year or the year after, but it's doomed nevertheless. Imagine you have a billion elements of which to find the minimum. Also imagine your run-of-the-mill computer has hundreds of processing cores (by the way, we should stop calling them CPUs/Central Processing Units; how "central" is something you have hundreds of?).


Here is the expression that the first method is trying to evaluate:

min( ... min( min( min(a1, a2), a3), a4) ..., aK)

I.e., you first find the minimum of the first two elements, then the minimum of that and the third element, then the minimum of that and the fourth element, and so on. The point is, each call to min() must always wait for the result of the previous call to become available. Each min() invocation takes (say) one unit of time, and with K elements there are K-1 nested invocations, so the final result cannot be ready in less than K-1 time units, no matter whether you have a single core or thousands of them. The so-called "critical path" (the longest chain of steps that must happen sequentially), also apparent in the expression above, has length K-1. Bad.

This is what happens with divide and conquer instead (shown here for eight elements):

min( min( min(a1, a2), min(a3, a4) ), min( min(a5, a6), min(a7, a8) ) )

Still, the root depends on two other min() invocations, and must wait for them before evaluating itself. But those two do not depend on each other; they may run in parallel. The critical path here has length only about log K. Thus, with enough cores, divide and conquer can solve this problem in O(log N) time instead of O(N): for a billion elements, that is roughly 30 levels instead of a billion sequential steps. And all without leaving your abundance of processing cores sitting idle (and wasting energy).

As I explained to the student, this epitomizes the programming of the future. Of course that's a hyperbole, but hyperboles are useful to drive a point home. And it is certainly better than presenting something dry to the student, without any hint of its importance - that's the recipe for making him/her not pay attention, and soon forget about this weird, exotic thingy.

Oh, and if you want to actually implement algorithms like this, and you code in Java, the fork/join framework is something you'll definitely want to learn.
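
For a taste, here is a minimal sketch of the parallel minimum on top of fork/join (assuming the java.util.concurrent fork/join classes as they later shipped with the JDK; MinTask and the THRESHOLD cutoff are illustrative choices of mine, not part of the framework):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch: parallel minimum via divide and conquer on the fork/join framework.
class MinTask extends RecursiveTask<Integer> {
  private static final int THRESHOLD = 1000; // below this size, just iterate
  private final int[] a;
  private final int lo, hi; // half-open range [lo, hi)

  MinTask(int[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

  @Override protected Integer compute() {
    if (hi - lo <= THRESHOLD) {
      // Small subproblem: the plain sequential loop wins here.
      int result = Integer.MAX_VALUE;
      for (int i = lo; i < hi; i++) result = Math.min(result, a[i]);
      return result;
    }
    int mid = (lo + hi) >>> 1;
    MinTask left = new MinTask(a, lo, mid);
    MinTask right = new MinTask(a, mid, hi);
    left.fork();                    // left half proceeds in parallel
    int rightMin = right.compute(); // right half runs in the current thread
    return Math.min(left.join(), rightMin);
  }
}

public class ParallelMin {
  public static void main(String[] args) {
    int[] data = new java.util.Random(42).ints(1_000_000).toArray();
    int min = ForkJoinPool.commonPool().invoke(new MinTask(data, 0, data.length));
    System.out.println("minimum = " + min);
  }
}

Note the sequential cutoff: forking all the way down to single elements would drown the O(log N) win in task-scheduling overhead, so below some size it pays to fall back to the plain loop.

That's all for now!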
