The Cost of Creating Abstractions
As a software engineer, I’m expected to understand most, if not all, of the software stack. From high-level source code down to, at least hopefully, the assembly level, the steps in between should pose no threat to my understanding. However, as new software technologies emerge every day (Ruby on Rails, Cocoa, Jekyll, just to name a few), the ladder of abstraction keeps getting taller.
This is a good and bad thing.
The good thing is that these new abstractions have enabled us to build better tools and products for consumers and producers alike. They have made the labor of making things easier than ever before. And they have nearly razed the barrier to entry for curious-minded individuals who want to code. The computer engineering world has watched its cake of abstractions rise, all the way from writing “code” with physical punch cards, to assembly, to high-level languages, to visual programming. What an evolution, I daresay! I probably would have shot myself in the foot if my first computer science class at university had been taught in x86 assembly.
The bad thing is that these new abstractions create the illusion that moving down the ladder is unimportant, unnecessary, and laborious. “If it works at this level, why do we need to go down a step to understand the level below? Shouldn’t we just accept this as magic and move on?” Andy Matuschak makes a great point when he discusses feeding abstraction with understanding:
Abstractions empower and accelerate. As usefully encapsulated nuggets of understanding, the creation of novel abstractions drives a field’s progress, but their invention is possible only with deep understanding of present ideas. So I declare: if we are to master a field, we must accept none of its abstractions as magic. Rather, we should yoke them as automations of what we already understand.
A prime example of understanding what you’re working with is programming in C. Unlike languages with automatic garbage collection, C requires knowing not only when to use memory functions like malloc and free, but why. That knowledge doesn’t just make your C programs less prone to memory leaks; it also demands a more acute understanding of memory representation, segmentation faults, and a whole array of other core computer science problems.
Like Andy mentioned, we run into obstacles when we fail to learn the foundations of the abstractions we’re working with. To be a truly great cook, one must not only follow recipes accurately, but understand why they’re composed as they are.
The cost, then, of creating new abstractions is that the next generation will likely ignore the underlying abstractions that serve as building blocks for the one they’re working with. This becomes a problem when something goes wrong and they’re unable to synthesize a solution because the fault lies lower down the ladder of abstraction. Creating new abstractions thus has the side effect of obscuring previous ones, even those at the ladder’s foundation. Instead of regarding them as instruments of magic, we should deconstruct them as processes we ought to already understand.
If you’re constantly working with abstractions, comprehending the composition of the different levels in the cake you’re working with will ultimately help you make a better cake. Andy provides us with motivation for confronting the abstraction problem:
The goal is to provide a sandbox—not a syllabus—for experimentation and the formation of understanding. Understanding is marvelous because it’s so readily a feedback loop: we can use it to make more of it.
If we are to build a ladder to the moon, we should be comfortable enough to climb back down to earth. Moving up the ladder of abstraction, then, is just as important as moving down. As the brilliant Bret Victor points out, we learn the most about a system not by understanding its individual layers, but by effortlessly transitioning between them.