The natural career path of a software developer runs from junior programmer to senior programmer, then technical lead or team leader, optionally architect, and finally into management. There's something paradoxical about this path: a career that started with writing code ends with not writing code at all. After all, how can you keep up with all the new and shiny stuff that appears in technology each year?
A new type of career has surfaced in recent years, one that's much more interesting. In this article we will look at people who don't fit the traditional profile: developers who still write code, and can help others, even at 40, 50 or 60 years old. Robert C. Martin. Michael Feathers. Rebecca Wirfs-Brock. How are they different? How do they keep up with changes?
When was unit testing invented? Test-Driven Development? The use of abstractions for changeable software design? The SOLID principles?
They all sound like shiny new things. After all, the core books were published in the past 10 years or so. But are they really new?
Barbara Liskov gave a keynote at QCon 2013 entitled "The Power of Abstraction". In it, she recalls the initial conversations about changeable design that the small community of developers had back in the 1970s. Around the same period, a casual remark she made on a bulletin board gave its name to one of the SOLID principles, "The Liskov Substitution Principle". That's 40 years ago! 40 years since one of the core OOP design principles was formulated, a principle that millions of programmers use every single day. 40 years since abstractions were introduced to allow changeable design.
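The principle itself fits in a few lines of code. Here is a minimal, hypothetical sketch in Python (the class names are ours, not from any particular codebase): code written against a base abstraction keeps working when any well-behaved subtype is substituted for it.

```python
# A minimal sketch of the Liskov Substitution Principle (hypothetical classes).

class Shape:
    """The abstraction: every shape can report its area."""
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width, self.height = width, height
    def area(self):
        return self.width * self.height

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

def total_area(shapes):
    # This function depends only on the Shape abstraction; any subtype that
    # honors the Shape contract can be substituted without changing it.
    return sum(s.area() for s in shapes)
```

The point is that `total_area` never needs to know which concrete shapes it receives; new subtypes can be added 40 years later and the function still works.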
But maybe other things are revolutionary. Web services are a new idea, aren't they? How about REST architecture?
It's true that a few things had to happen for web services to appear. First, the business need. Second, the standardization. Third, the expansion of the web. However, if you look at web services, they are based on the following idea: compose complex functionality out of small components that do one thing well and communicate through text messages. Strangely enough, this idea is among the UNIX design principles defined in the 1970s:
"Rule of Modularity: Write simple parts connected by clean interfaces.
Rule of Composition: Design programs to be connected to other programs.
Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do."
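The same composition idea can be sketched outside the shell. The hypothetical Python example below connects small single-purpose functions through plain lines of text, much as a UNIX pipe would:

```python
# Small parts, clean interfaces: each function does one thing and passes
# plain text lines along, composed like a UNIX pipeline (hypothetical data).

def read_lines(text):
    """Source: split raw text into lines."""
    for line in text.splitlines():
        yield line

def grep(lines, word):
    """Filter: keep only the lines containing the given word."""
    return (line for line in lines if word in line)

def count(lines):
    """Sink: count the lines that reach the end of the pipeline."""
    return sum(1 for _ in lines)

log = "INFO start\nERROR disk full\nINFO done\nERROR timeout"

# Equivalent in spirit to: cat log | grep ERROR | wc -l
errors = count(grep(read_lines(log), "ERROR"))
```

Each stage can be replaced or recombined independently, which is exactly the property that later made small, message-passing web services attractive.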
How about REST services? Exposing resources using the limited operations provided by the HTTP protocol? Surely this had to appear later on. Well, let's take a look at what Alan Kay, one of the founders of Object Oriented Programming, had to say about this paradigm:
"I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning -- it took a while to see how to do messaging in a programming language efficiently enough to be useful)"
"every object should have a URL"
The second quote is from a talk he gave in 1993. That's REST services, in one short sentence, 20 years ago.
If technology fundamentals don't change that much, then what does?
20 years ago, if a programmer had to make two objects communicate over a network, a lot of wizardry was involved. A now (hopefully) forgotten technology called CORBA was the standard way to do it. In order to make it work, you had to understand it, write code in a specific way, figure out the problems and so on. Fixing it and making it work took many days of effort, unless you were one of those people who could visualize the bytes moving between processes.
Today, the standard way is to write a web service with a defined interface, something that any programmer can do in a few hours (we're assuming simple functionality here). The programmer has no idea how the communication happens (unless interested in the subject), just that it works when the code is written in a certain way. Debugging can still take some time, but it's easier with specialized tools.
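As an illustration of how little wizardry is involved today, here is a sketch of such a web service using only the Python standard library; the `/users` resource and its data are made up for the example:

```python
# A minimal web service with a defined interface, built on Python's standard
# library alone. The resource ("/users/<id>") and its data are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = {"1": {"name": "Ada"}}  # illustrative in-memory resource store

class UserHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The interface: GET /users/<id> returns the user as JSON, else 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "users" and parts[1] in USERS:
            body = json.dumps(USERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet; remove to see request logs

# To run it: HTTPServer(("localhost", 8000), UserHandler).serve_forever()
```

No byte-level protocol knowledge is needed: the programmer declares what each path returns, and the platform handles the communication underneath.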
15 years ago, writing a small program required knowledge of the way memory is allocated, something that generated many days of seeking the source of an error with the "helpful" message: "memory corruption error: #FFFFFF". Today, most developers have forgotten about pointers and dynamic memory allocation, because the programming language and platform take care of it.
The difference is not in the fundamentals. It's in the implementation. And implementation gets easier and easier as time goes by. But if implementation is easier, why do we keep having problems in software development?
Our own definition of architecture is "when programming meets the real world". In programming, everything is clean, repeatable, reliable. The computer doesn't give two different answers to the same question, unless programmed to do so.
The real world is different. Not everyone uses the same date format, calendar or alphabet. Time can change depending on timezone or relativistic speed. Servers fail. Networks go down.
The fundamental difficulty of programming has always been to translate ambiguous requirements to very precise code that is resilient to the lack of dependability of the real world. Yet, for many years, this fundamental difficulty has been hidden beneath implementation issues. Programmers had enough challenges related to memory allocation and networking communication that they couldn't face the real world. Therefore, many of them were shielded from it.
Once the implementation was simplified, the fundamental difficulty of programming became visible. We talk less about communication between services and more about changeable design, because change is part of the real world. We talk less about memory allocation and more about unit testing, because everyone makes mistakes.
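Unit testing, too, needs no special machinery. Here is a hypothetical example using Python's standard `unittest` module; both the function and the test are made up for illustration:

```python
# A minimal unit test guarding against mistakes, with the standard library's
# unittest module. The function under test is a hypothetical date parser.
import unittest

def parse_date(text):
    """Split an ISO "YYYY-MM-DD" string into integer (year, month, day)."""
    year, month, day = text.split("-")
    return int(year), int(month), int(day)

class ParseDateTest(unittest.TestCase):
    def test_parses_iso_date(self):
        # The test pins down the expected behavior, so a future change
        # that breaks it is caught immediately.
        self.assertEqual(parse_date("2013-05-07"), (2013, 5, 7))

# unittest.main() would run the test from the command line.
```

The test is cheap to write, and it stays valuable precisely because everyone makes mistakes: it fails the moment the behavior drifts.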
And this brings us to the conclusion:
Technologies change. Yet the fundamentals of programming haven't changed in the past 20 years or so. They probably won't change dramatically in the next 10 years. If you want to stay in touch with programming for many years to come, like Robert C. Martin, Michael Feathers or Rebecca Wirfs-Brock, here's what you need to do:
Master the skills and the fundamentals of programming, not only the technology you work in today
This doesn't mean you shouldn't know the platform you're using, just that you should understand it's merely a tool that helps you do what you do: turn ambiguous requirements into precise, working, changeable source code that is ready to face the real world.
We mentioned a few technology-independent skills in this article. Here's a list that doesn't claim to be complete, but we think it is a good start:
Programming language features:
Dealing with code you"re afraid to change:
These skills are independent of technology. Once you master them, you can use them in a completely new technology. This will allow your career to grow, no matter how the technology changes.
The fundamental difficulty of programming has always been to translate ambiguous requirements to very precise code that is resilient to the lack of dependability of the real world. The answer to this challenge has been developed over the past 50 years and hasn't changed that much. The only thing that changes is the implementation, usually by getting easier.
Mastering the fundamentals of programming and the associated skills is your best bet for a strong career, independent of future technology changes. The "skills over technology" mantra doesn't mean you shouldn't know the technology, just that the skills are more important in the long run.
If you want to follow this path, you are not alone. Software craftsmanship communities in your city can help (for example, the Agile Works community in Romania - http://agileworks.ro), and craftsmanship conferences such as the I TAKE Unconference - http://itakeunconf.com - are organized with this purpose in mind. Join them and grow your career as a software craftsman, not as a future manager.