No one nowadays would call electricity or the telephone cutting-edge technologies. Yet, at some points in time, they were. Being in the information technology business means being in a sector where things change quickly, because if it’s not new, then it’s not technology. How then should a technology business cope with such constantly changing paradigms? Well, for starters, it should go back to the basic truth that a technology business is first and foremost a business, one which just happens to make money out of technology. It should use whichever technologies maximise profit.

Let’s look at some examples of choosing the best technologies for a software business.

User interface

Tabs make it easier to organise a lot of information. Photo by Hilde Vanstraelen

User interface paradigms for enterprise software have changed several times to maximise interactivity based on the technology of the time. When computing power was scarce and expensive, presenting a complex object like Accounts Receivable as a series of single questions to the user was the best way to go. When computers got a little more powerful, engineers were able to fill a screen with multiple questions or data fields, such as customer code, due date and currency, all navigable linearly via keyboard shortcuts. Then came the Graphical User Interface (GUI), with the ability to switch between virtual forms and to point at any location on the display using a mouse. The GUI enabled engineers to cram even more information into the display by organising it into clickable tabs, analogous to how tabs are used in physical folders to organise large amounts of information.

As users have become more and more sophisticated in using the mouse to navigate between forms on a computer display, engineers have become bolder in cramming even more information into the display. Instead of using the tabs analogy, where clicking on each tab reveals some data out of the whole lot, engineers nowadays can use collapsible sections, where users can show all information on one screen if they so choose, or show only the information they need and collapse the rest.

Database design

Photo by Sanja Gjenero

There have been a few new approaches to database design lately. One of them is the use of surrogate keys instead of natural keys. There are quite a few objective comparisons of the two approaches, for example by Decipher Information Systems or on Wikipedia. However, for a software business, the author still feels that natural keys are the way to go, for the reasons below:

  • customers’ IT departments are going to be happier with meaningful tables. Since in any business, including the software business, we are supposed to be delighting the customers, this becomes an important point. Jeffrey Palermo, CIO of Headspring Systems, famously said, “I think humans should use natural keys, and computers should use machine-generated surrogate keys.”
  • surrogate keys are wasteful
    • separate unique constraints must be set up to enforce uniqueness, instead of doing it in one go with a natural primary key
    • on RDBMSs that automatically index the primary key, that index is wasted on a meaningless surrogate
    • any query on a child table returns less meaningful information unless it is joined to its parent table(s)
  • changes to natural keys are so rare, in the author’s 10+ years of experience with enterprise software, that it is not worth paying the cost of surrogate keys for the majority of the software’s lifetime. And even when such a change does happen, a cascaded update in newer RDBMSs can handle most requirements
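The last two points can be sketched concretely. The snippet below is a minimal illustration using Python’s built-in sqlite3 module (table and column names are hypothetical): the child table is readable on its own because it carries the natural key, and the rare key change is absorbed by a cascaded update.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this is on

# Natural-key design: the customer code itself is the primary key,
# so no extra unique constraint or surrogate column is needed.
conn.execute("""
    CREATE TABLE customer (
        customer_code TEXT PRIMARY KEY,
        name          TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE invoice (
        invoice_no    TEXT PRIMARY KEY,
        customer_code TEXT NOT NULL
            REFERENCES customer(customer_code) ON UPDATE CASCADE,
        amount        REAL NOT NULL
    )""")
conn.execute("INSERT INTO customer VALUES ('ACME', 'Acme Corp')")
conn.execute("INSERT INTO invoice VALUES ('INV-001', 'ACME', 100.0)")

# A query on the child table alone already returns meaningful data,
# with no join back to the parent:
print(conn.execute("SELECT invoice_no, customer_code FROM invoice").fetchall())

# The rare natural-key change is handled by the cascade:
conn.execute("UPDATE customer SET customer_code = 'ACME2' "
             "WHERE customer_code = 'ACME'")
print(conn.execute("SELECT customer_code FROM invoice").fetchall())
```

With a surrogate-key design, the first query would return an opaque integer instead of the customer code, and a join would be needed to get the same answer.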


Object-oriented programming

DogBitesCat()? No. In the OOP world, it has to be “instantiated” as Pluto.bite() and Tom.scream(). Photo by Vicki Reixach

Back in the days when procedural programming, the natural way to program, was dominant, we would write a DogBitesCat() function, with all necessary subfunctions underneath it, and just execute that function wherever necessary. Nowadays, in the object-oriented programming (OOP) world, if a dog bites a cat in the front yard, we have to name the dog, say Pluto, and the cat, say Tom, and there will be separate methods for Pluto.bite() and Tom.scream(). On top of that, if a similar sequence of events has to happen somewhere else, say in the public park, then we should name the dog differently, say Blacky, and the cat differently, say Kitty, and execute Blacky.bite() and Kitty.scream(). Great.
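The contrast can be sketched in a few lines. Python is used here purely for illustration (the names mirror the example above and are hypothetical); it shows the single procedural function next to its object-oriented counterpart.

```python
# Procedural style: one function, executed wherever it is needed.
def dog_bites_cat(dog_name, cat_name):
    return [f"{dog_name} bites", f"{cat_name} screams"]

# OOP style: the same behaviour, split across named instances.
class Dog:
    def __init__(self, name):
        self.name = name
    def bite(self):
        return f"{self.name} bites"

class Cat:
    def __init__(self, name):
        self.name = name
    def scream(self):
        return f"{self.name} screams"

# Front yard: instantiate, then call the methods.
pluto, tom = Dog("Pluto"), Cat("Tom")
events_yard = [pluto.bite(), tom.scream()]

# Public park: new names, new instances, same behaviour.
blacky, kitty = Dog("Blacky"), Cat("Kitty")
events_park = [blacky.bite(), kitty.scream()]

# The procedural call produces the same events in one line:
print(dog_bites_cat("Blacky", "Kitty"))
```

Both styles produce the same events; the difference is purely in how much ceremony the paradigm demands.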

There are aspects of object-oriented programming which are hugely beneficial, for sure, but not all of them are. Many other authors, more knowledgeable and reputable than this one, have written eloquent essays on the shortcomings of OOP.

This author believes that the best path is hybrid procedural/object-oriented languages like Visual Basic or PHP.
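A sketch of what that hybrid style looks like in practice (Python is used here, since it too mixes paradigms freely; the names are hypothetical): plain procedural functions coexist with classes, and each is used where it fits best.

```python
# A class where identity and state genuinely help...
class Invoice:
    def __init__(self, number, amount):
        self.number = number
        self.amount = amount

# ...next to a plain procedural function for plain logic,
# with no class ceremony required.
def total(invoices):
    return sum(inv.amount for inv in invoices)

invoices = [Invoice("INV-001", 100.0), Invoice("INV-002", 250.0)]
print(total(invoices))
```

Hybrid languages such as PHP and Visual Basic allow exactly this mix: objects where modelling calls for them, straightforward functions everywhere else.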