I’ve never wanted to start a sentence with “I’m old enough to remember…” because, well, who does? But here we are. I remember the enormously successful Apple IIe and Commodore 64, and a world before Microsoft. Smartphones were science fiction. To do much more than process words or play games, one had to learn a programming language. Those early days seemed at the time, and still seem in hindsight, like the very dawn of computing. Before the personal computer, such devices were the size of kitchen appliances and were hidden away in military installations, universities, and NASA labs.
But of course we all know that the history of computing goes far beyond the early ’80s: at least back to World War II, and perhaps much farther. Do we begin with the abacus, the 2,200-year-old Antikythera mechanism, the astrolabe, Ada Lovelace and Charles Babbage? The question is perhaps one of definitions. In the short animated video above, physicist, science writer, and YouTube educator Dominic Walliman defines the computer by its basic binary function of “just flipping zeros and ones,” and he begins his condensed history of computer science with tragic genius Alan Turing, of Turing Test and Bletchley Park codebreaking fame.
Turing’s most significant contribution to computing came from his 1936 concept of the “Turing Machine,” a theoretical mechanism that could, writes the Cambridge Computer Laboratory, “simulate ANY computer algorithm, no matter how complicated it is!” All other designs, says Walliman, apart from a quantum computer, are equivalent to the Turing Machine, “which makes it the foundation of computer science.” And since Turing’s time, the simple design has proven endlessly capable of adaptation and innovation.
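If the idea sounds abstract, it is surprisingly concrete to write down. Below is a minimal sketch of a Turing machine simulator in Python; the run function and the little bit-flipping machine are our own illustrative inventions, not anything from Walliman’s video or Turing’s paper, but they show how little machinery the concept requires: a tape, a read/write head, and a table of rules.

```python
# A minimal Turing machine simulator: a tape of symbols, a read/write
# head, and a rule table mapping (state, symbol) -> (write, move, state).
def run(rules, tape, state="start", head=0, max_steps=1_000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # unvisited cells read as blank
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A toy machine in the spirit of "just flipping zeros and ones": walk
# right, flip each bit, and halt on reaching the blank end of the input.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flipper, "10110"))               # prints 01001
```

Swap in a different rule table and the same run function will, in principle, carry out binary addition, string matching, or anything else your laptop can do. That universality is exactly Turing’s point.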
Walliman illustrates the computer’s exponential growth by pointing out that a smartphone has more computing power than the entire world possessed in 1963, and that the computing capability that first landed astronauts on the moon is equal to “a couple of Nintendos” (first-generation classic consoles, judging by the image). But despite the hubris of the computer age, Walliman points out that “there are some problems which, due to their very nature, can never be solved by a computer,” whether because of the degree of uncertainty involved or the degree of inherent complexity. This fascinating yet abstract discussion is where Walliman’s “Map of Computer Science” begins, and for most of us it will probably be unfamiliar territory.
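The most famous of those unsolvable problems is Turing’s own halting problem: no program can decide, for every possible program, whether that program will eventually finish running. The hypothetical sketch below compresses the classic diagonal argument into a few lines; the names halts and trouble are ours, and halts is deliberately a stub, because the whole point is that it cannot be implemented.

```python
# Suppose, for contradiction, that halts(f) could always answer truthfully
# whether calling f() would eventually finish. No such total algorithm
# exists; this stub only marks the assumption.
def halts(f):
    raise NotImplementedError("no general halting checker can exist")

def trouble():
    if halts(trouble):   # if the checker says "trouble halts"...
        while True:      # ...then trouble loops forever, so it was wrong;
            pass         # and if it says "trouble loops," trouble halts
                         # right here instead. Wrong either way.
```

Whichever answer halts gives about trouble turns out to be false, which is why no such checker can exist in general.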
We’ll feel more at home once the map moves from the region of Computer Theory to that of Computer Engineering, but while Walliman covers familiar ground here, he does not dumb it down. Once we get to applications, we’re in the realm of big data, natural language processing, the internet of things, and “augmented reality.” From here on out, computer technology will only get faster and weirder, despite the fact that the “underlying hardware is hitting some hard limits.” Certainly this very quick course in Computer Science makes for only an introductory survey of the discipline, but like Walliman’s other maps (of mathematics, physics, and chemistry), this one provides an impressive visual overview of the field, both broad and specific, that we likely wouldn’t encounter anywhere else.
As with his other maps, Walliman has made the Map of Computer Science available as a poster, perfect for dorm rooms, living rooms, or wherever else you might need a reminder.
Related Content:
Free Online Computer Science Courses
Watch Breaking the Code, About the Life & Times of Alan Turing (1996)
The Map of Mathematics: Animation Shows How All the Different Fields in Math Fit Together
The Map of Physics: Animation Shows How All the Different Fields in Physics Fit Together
The Map of Chemistry: New Animation Summarizes the Entire Field of Chemistry in 12 Minutes
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
I would suggest that you have given a good “overview/summary”.
As a map, not so much:
…“the map is not the territory”
Nice overview.
The statement about the undecidability of the halting problem is a bit misleading. It isn’t that there are specific programs for which it is impossible to decide whether they halt or not; rather, there is no algorithm that decides it in general.
I see all the details but I don’t see the soul or the big ideas.
Hi and good day. Informative. Thanks much, and have a great day.