In 2011 Alan Kay gave a talk on Programming and Scaling at the Hasso-Plattner-Institut in Potsdam. The takeaway is that you have to be creative and step out of the box to design new software that is both very simple and very powerful. The Cairo compositing system is his example. Here are a few comments.
A. Kay’s main message is that it is possible to design extremely simple building blocks that can sustain extremely large, extremely complex systems that are very robust. His key argument: TCP/IP is simple, yet it supports the Internet, a large, complex, dynamic system that has never halted and has never crashed. This raises the question: can we build software this way?
He proposes combining intelligence, domain expertise, and “new outlooks” — new ways to look at problems. He accuses the current software field of moving forward like pop culture: lots of innovation, but little structure and little outlook towards the future. He also, interestingly, describes our current struggle as an artifact of how the brain works, of how our psychology perceives the difference between past, present, and future. For example: “Once we start learning something it’s really hard to see what is going on [from a higher perspective].”
He describes the programming system Nile as an example of an alternative approach, with the success story of the Cairo compositing system, which was built in quite a different way from traditional approaches.
My comments: Nile and Cairo’s success story could be told in 10 minutes. The other 40 minutes do not make for a good talk overall.
First of all, his overall description of why creativity is different from engineering and tinkering is very long-winded (nearly 20 minutes); the same point could be made in far less time. I was particularly sensitive to this since it is precisely the point I cover in the first two pages of my book.
He also makes a couple of glaring mistakes.
For one, the beginning of the video is messy and quite wrong from a hardware-architecture perspective.
In particular, A. Kay starts by expounding on the “beautiful hardware from the 70’s which used bytecode and could not crash”. This is a misrepresentation of Barton’s work and the Burroughs computers.
It is true that these machines were quite different and quite elegant. They used a stack-based processor architecture instead of the register machine model commonly used today. This is a major conceptual difference (which I explain in the intro of my book, too), and it made these processors much simpler to program.
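To make the stack-vs-register contrast concrete, here is a toy sketch of my own (not Burroughs-specific): the same expression `(2 + 3) * 4` run on a minimal stack machine and a minimal register machine. Note how the stack version never names its operands, while every register instruction must say which registers it reads and writes.

```python
# Toy illustration: (2 + 3) * 4 on a stack machine vs. a register machine.
# All instruction names and encodings here are invented for the example.

def run_stack(program):
    """Stack machine: operands are implicit, taken from the top of the stack."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def run_registers(program, nregs=4):
    """Register machine: every instruction names its source/destination registers."""
    regs = [0] * nregs
    for op, *args in program:
        if op == "load":          # load rd, immediate
            rd, imm = args
            regs[rd] = imm
        elif op == "add":         # add rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "mul":         # mul rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] * regs[rs2]
    return regs[0]

# (2 + 3) * 4 expressed in each instruction set:
stack_prog = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
reg_prog = [("load", 1, 2), ("load", 2, 3), ("add", 0, 1, 2),
            ("load", 1, 4), ("mul", 0, 0, 1)]
```

A compiler targeting the stack machine can emit code almost directly from the expression tree, with no register-allocation pass — one reason such processors were simpler to program for.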
In addition, the Burroughs processors were microcoded, i.e. the instruction set was configurable at run-time. The OS would load different microcode into the CPU depending on the programming language of the application. However, contrary to what A. Kay states, the processor could very well crash — for example if the microcode contained bugs. Note that most Intel CPUs nowadays are microcoded too, but Intel explicitly disallows OSes from supplying their own microcode precisely to prevent such bugs (the microcode can still be updated, but only with Intel-provided “firmware updates”).
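The idea of a run-time-configurable instruction set can be sketched in a few lines. This is purely illustrative (my own toy model, not how Burroughs microcode actually worked): the CPU core is fixed, but the mapping from opcodes to behavior is a table that the “OS” can swap per application.

```python
# Toy illustration of a microcoded CPU: the core is fixed, but the visible
# instruction set is a swappable dispatch table. All names are invented.

class ToyCPU:
    def __init__(self):
        self.acc = 0
        self.microcode = {}              # opcode -> handler function

    def load_microcode(self, table):
        """Replace the visible instruction set at run time."""
        self.microcode = table

    def run(self, program):
        for opcode, arg in program:
            # A buggy or incomplete microcode table crashes here (KeyError),
            # which is exactly why microcoded machines could still crash.
            self.microcode[opcode](self, arg)
        return self.acc

# Two different "instruction sets" for the same core:
algol_like = {
    "LIT": lambda cpu, n: setattr(cpu, "acc", n),
    "ADD": lambda cpu, n: setattr(cpu, "acc", cpu.acc + n),
}
cobol_like = {
    "MOVE": lambda cpu, n: setattr(cpu, "acc", n),
    "SUB":  lambda cpu, n: setattr(cpu, "acc", cpu.acc - n),
}

cpu = ToyCPU()
cpu.load_microcode(algol_like)
r1 = cpu.run([("LIT", 2), ("ADD", 3)])       # accumulator ends at 5
cpu.load_microcode(cobol_like)
r2 = cpu.run([("MOVE", 10), ("SUB", 4)])     # accumulator ends at 6
```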
Another point is that “lines of code” is a terrible measure to build an argument on. For example, much of the code in Vista/Access (which he uses as examples) is automatically generated and managed by graphical IDEs nowadays, and is never touched by humans. It is like counting the number of bytes in the final compiled executable: a program that has more bytes is not necessarily more complicated.
Others have argued before that software source code nowadays is much less complicated to read and maintain than it was 30 years ago, thanks to the development of automated analysis and best practices for programming style. LOC counts are thus a terrible way to measure “software complexity”.
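The LOC objection is easy to demonstrate with a contrived example of my own: two behaviorally identical functions whose line counts differ by an order of magnitude. By the LOC metric the second is far more “complex”, yet a reader might well find it easier to audit.

```python
# Toy demonstration that LOC is a poor complexity proxy: both functions
# compute the sum of squares of the positive elements of a list.

def total_terse(xs):
    return sum(x * x for x in xs if x > 0)

def total_verbose(xs):
    # The same computation, spelled out step by step.
    result = 0
    for x in xs:
        if x > 0:
            square = x * x
            result = result + square
    return result

data = [3, -1, 4, -5]
assert total_terse(data) == total_verbose(data)
```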
Finally, a lot of his argument relies on describing computing systems as living organisms, including finding software “DNA” and taking evolutionary/genetic perspectives. That part was particularly unscientific: lots of fluff and sensationalism, few facts, little knowledge, and no concrete predictions about what could or should happen to improve progress.