
Meta-cycles in technology choices

I’ve been working on my keynote for QCon Beijing and looking at technology trends and choices since the 1950s.

One of the interesting tensions that I’ve seen in IT is the tension between “getting it right” and “doing it quick”.

Most of enterprise/business IT is about supporting good business decisions. Business people who make those decisions need information that’s mostly right, not exact. Yes, there are parts of IT that deal with transactions around money where a one-penny mistake is not acceptable, but the vast majority of what we do with enterprise IT is to help business people make informed choices.

My QCon talk is going to be about the ascendancy of big, institutionalized IT around the IBM 360/370 that was obliterated by the Apple // and VisiCalc.

But I see this kind of thing play out over and over.

You have big, important, “have to be right” IT projects that fail 50% of the time, and of the projects that don’t fail, 75% are late or materially differ from the original scope.

On the other side, you have stuff that a business person can do in a spreadsheet, a project that a couple of developers can “knock together” in a few weeks using this year’s version of Delphi/PowerBuilder/VB/JSP/Apex.

It’s interesting to see the cycles in both of the above approaches. With the “have to be right” projects… when they fail or fall materially behind schedule, the “get it done fast” contingent comes in and, well, gets it done fast. I have a friend who worked at a big bank and made a great career out of swooping into big C++/CORBA projects that were behind schedule and, over the course of one to four months with a team of three people, getting 90% of the functionality done.

The “get it done fast” projects seem like the better choice in isolation. But when you have a ton of slash-and-burn, do-whatever-it-takes-to-get-it-done projects, as a whole they are unmaintainable and cannot easily be migrated off the hand-provisioned machines they were originally deployed on. A symptom of this is that Windows XP won’t die because there are millions of projects running on XP machines that can’t be moved. More globally, when you have a database with hundreds of tables that are mostly referential islands, you’ve done the “get it done fast” thing too aggressively.

My thoughts

After almost 40 years of watching the cycle… and, as my first paying gig, building cheap and fast Apple // software for FEMA as a high school kid, software whose total hardware and software cost was less than what it took to study the problem and come up with a framework for an estimate to do it on DEC mini-computers… here’s what I think.

You absolutely need a rock-solid set of core infrastructure that can be trusted. The source of truth must be trustworthy, maintainable, well designed, well documented, testable and well tested, and all the other things we see from traditional centralized, high-cost, slow-moving IT departments. They are high cost and slow moving in exchange for building very resilient, maintainable, predictable systems.

But those rock-solid systems need to expose data and APIs that are super simple to hang quickly built, one-off systems from. The quickly built systems may be spreadsheet-based, may be JavaScript-based, etc. They are meant to be throw-away projects… although in reality, they become the walking dead that lumber around organizations for decades.
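
To make that concrete, here’s a minimal sketch (in TypeScript on Node, all names and the endpoint are hypothetical, not a real system) of what “super simple to hang off” could look like: the core system promises one stable, versioned, read-only HTTP path and nothing more.

```typescript
// Hypothetical sketch: a core "source of truth" service exposing one
// read-only, versioned endpoint that quick, throw-away projects can hang off.
import { createServer } from "node:http";

// Stand-in for the well-tested source of truth (in reality, the core database).
const customers: Record<string, { id: string; name: string; region: string }> = {
  "42": { id: "42", name: "Acme Corp", region: "EMEA" },
};

createServer((req, res) => {
  // One stable, versioned, read-only path: GET /v1/customers/:id
  const match = req.url?.match(/^\/v1\/customers\/(\w+)$/);
  const customer = match && customers[match[1]];
  if (req.method === "GET" && customer) {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(customer));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
```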

The savvy IT executive will allocate resources to both kinds of systems and act as a balance between the two. If a project is meant to be used by a small percentage of the overall company/customer base, then do it quick and trade maintainability for “cheap enough to re-write when the specs change materially.” The small projects should not be intersecting with or dependent on one another. In general, they should not share DB tables with each other. They should be isolated so that they can change, break, or be migrated without worrying about a dependency that only one person knows about and that person left the company two years ago… or even worse, somebody who has so many dependencies in their head that they are holding the company hostage.
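
And here’s a sketch of the other side of that contract (again, hypothetical names and hostname): a quick, throw-away report that talks to the core endpoint instead of joining against the core system’s tables, so it can break, change, or be deleted without anyone else caring.

```typescript
// Hypothetical throw-away report script: reads from the core system's API
// rather than reaching into its database tables. If the core team reshapes
// their schema, this script keeps working as long as the /v1 contract holds.
const CORE_API = "http://core.internal:3000"; // assumed internal hostname

async function regionReport(ids: string[]): Promise<void> {
  for (const id of ids) {
    const res = await fetch(`${CORE_API}/v1/customers/${id}`);
    if (!res.ok) continue; // quick-and-dirty: skip anything we can't read
    const customer = (await res.json()) as { name: string; region: string };
    console.log(`${customer.name}\t${customer.region}`);
  }
}

regionReport(["42"]);
```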

Projects that don’t meet the above criteria should be put on the slow, process-laden route because those projects are bigger than an individual and will have an impact on the organization over time. The benefits of predictability and maintainability outweigh the extra development and speed costs.

There’s no formula for the above trade-off… it’s just a seat-of-the-pants call… or maybe I’m just a crappy manager who can’t articulate a solid set of rules.

Avoiding the “yesterday’s technology” issues

One thing to keep in mind is technology choices. It’s important to adopt reasonable, early-mainstream technologies for both the large projects and the small ones.

In both cases, at the beginning of a system’s lifecycle, choosing technologies that have been adopted by the early mainstream means the technologies will keep growing over the lifetime of the project, and in some cases that’ll be a 20+ year lifetime.

A company that chose Java/JSP for the quick projects in the last millennium made a great choice. Java/JSP was early mainstream and replaced VB and PowerBuilder for small to medium projects. But that choice (or Java/Play! or something similar) would be a bad choice today. On the other hand, JavaScript/Node for the “do it fast” projects makes lots of sense. Yes, server-side JavaScript developers are more expensive than Java/JSP developers today, but the costs will come in line with other mainstream technologies in a few years. Look to the Java/JSP developers 15 years ago and the Ruby/Rails developers 8 years ago vs. today for pricing trends.

The choices are less clear for large IT projects. I think that’s one of the reasons that microservices are becoming popular. Nobody can figure out the right language/framework combination for large IT projects, so the projects are being decoupled and the technology choices are being abdicated. Sadly, this means fewer sources of truth in large projects… but I digress.

Know the difference

The key take-away is knowing the difference between the core systems that must be maintainable over the next 20 years and the one-offs that need to get done quickly. A good tech executive will allocate resources to both kinds of teams and make sure that projects get assigned to each based on factors that include the need for long-term updating and the degree of inter-dependency. Neither the “getting it right” nor the “doing it fast” side is always right. It’s important for each side to recognize the value of the other and to play well with the other side.

Reference: Meta-cycles in technology choices from our JCG partner David Pollak at DPP’s Blog.