Software development for today's mobile devices would benefit from something that might surprise you!

With the current interest in the "Internet of Things" and the next generation of mobile devices getting smaller and smaller, as a long-time software developer I can see one unconventional way of producing faster and more efficient software for such devices. Let me preface my discussion with a simple warning: you won't like it when I tell you!

The advantage of being an older developer

To better appreciate what I am going to say, let me first comment on one advantage an old-timer like myself has over the current generation of software developers. My first experience with a computer was in 1975, in an advanced math class in high school. In the 1980s, long before most high schools began to teach any computer science, I started learning how to program using a Texas Instruments TI-99/4A home computer and then a Commodore 64. By today's standards that may sound a bit ancient, but I can guarantee you that some of the concepts I learned back then have stuck with me and have been invaluable. For a moment, imagine trying to write software for a computer with only 64 kilobytes of memory (some of it already used by ROM), a terribly slow single-sided floppy disk, and a CPU with an amazing speed of only 1 MHz (yes, that is megahertz). Totally unusable by today's standards.

I did a lot with that Commodore 64. While most equate the C64 with old-fashioned interpreted BASIC, I was actually using a real compiler, made by Abacus, for developing software. Compared to interpreted BASIC it produced very fast running software. But it wasn't fast enough for me, so I learned 6502 machine language using Richard Mansfield's book "Machine Language for Beginners", which was an absolutely perfect book for learning 6502 machine language on the Commodore. I then used the Abacus compiler to write my own low-level compiler, whose language was part assembler and part BASIC. With it I wrote my first family-friendly video game and sold it to Compute!'s Gazette magazine, which published it in the October 1987 issue.

In time I began writing custom software for businesses. One of my early clients was a local video rental store using a Kaypro II computer, which ran CP/M (before DOS became popular) and had an interpreted BASIC language called GW-BASIC. It had two single-sided floppy drives (180 KB of disk space each), a 2.5 MHz Zilog Z80 CPU, and 64 KB of memory (not 640, but only 64). What could anyone do with that?

I wrote a complete rental system that tracked all of their video rentals. Yes, some real-world business software could run on such a minimal machine, and quite well. More clients came along, and I wrote Kaypro software for the quality control department of a nearby manufacturing plant. The software interfaced with a testing machine in the plant, downloaded all the test data, ran statistical analysis on it, and printed bar charts to a dot-matrix printer. All of this was written in interpreted BASIC and some machine language.

As new computers came along, I used different tools for programming, such as Microsoft's QuickBasic 4.1 and PDS 7.1 (Professional BASIC). I wrote some libraries for QuickBasic in assembler too. I wrote software for a number of local "mom and pop" operations, such as a lumber supply store (accounting software), a custom machine shop, a transmission repair shop, an auto repair shop, and even the local sheriff's department. I wrote engineering software for an engineer who did work for the local phone company, and through him I even did a project for the DuPont corporation in Puerto Rico (he moved there and we communicated via computers over the phone lines, before the internet). Suffice it to say, I have had a lot of experience designing real-world software used in business.

Lessons learned

While I am a big fan of RAD (Rapid Application Development), there are some important lessons I learned over the years which would benefit modern-day programming, especially for smaller mobile devices with minimal memory, even though they may seem counterproductive to most programmers today. Old-timers like myself have the advantage of knowing how to squeeze every cycle out of a computer, despite minimal hardware. Surely, if we could write software for a computer with only 64 KB of memory and a 1 MHz CPU, then today's computers should be a "piece of cake". Sadly, though, they aren't, and here is why.

Old-timers were around before all the hype about OOP (Object Oriented Programming) and have the advantage of seeing the before and after of where OOP has taken us. I know today's programmers don't want to hear this, but simply put, OOP has added unnecessary complexity and bloat to software, as well as degraded performance. RAD is not bad, but RAD is not the same thing as OOP. Rapid software development has been around for a long time, and old-timer programmers know a thing or two about RAD.

The two key RAD approaches I used in my early days were modular software design (i.e., libraries, not objects) and code generation (automated code generators). In my QuickBasic programming days I came across a code generation tool called Soft Code which was amazing. You could design your app's screens (text based) and then generate a complete application from them. Sadly, the Soft Code templates for QuickBasic were not that good in my opinion, but fortunately the tool had a template definition language which allowed you to create your own. I spent months working on mine, but once done it could generate QuickBasic applications which had scrolling drop-down screens (much like a modern ListView control as a popup window) with built-in calculations (like Excel) and a multi-user database which supported mirrored databases (splitting work between multiple servers). So RAD is good! But now to OOP.

Especially from the view of a programmer like myself with some machine language experience, OOP adds bloat to software, and from experience I find its modular benefits are not the panacea many thought they would be. It adds complexity rather than decreasing it. A more procedural style of coding can produce more readable code with much better performance, which is well suited to the needs of modern mobile computers. You see, we old-timers were building apps years ago for devices with 1/1000th the power of today's devices, and we did so successfully. How? By writing software which was efficient and closer to the native hardware. We knew when to use assembler and we knew when to use a higher-level language. By now you are likely saying this is all nonsense. OK, well, the proof is in the pudding.

Learning how to tap into the WIN32 API

I started learning how to work with the WIN32 API when it was no longer fashionable. When most programmers were using VB.NET and C#, I was learning how to work with the native WIN32 API. The power of this native API is amazing; there is so much to learn and a surprising amount of untapped power. Obviously Microsoft felt the native API was too hard to learn, which is why they created MFC and ATL, and then later dropped all of that in favor of .NET. But as Herb Sutter says in his talk "Why C++?", the decade when .NET ruled as a development platform is when software performance suffered greatly. This is why he is pushing a resurgence of C++ over managed languages, especially for the needs of mobile. Well, I will go a step further and suggest that object oriented programming has probably done more damage to software performance than even managed languages have. In my opinion, the WIN32 API did not go far enough. Rather than replacing it with heavy COM interfaces, the WIN32 flat API should have been expanded to a higher level.

In old-timer terms, I would call this a three-tiered modular design. That is what the WIN32 API really needs. What is this?

The original WIN32 API is a very low-level set of flat APIs. In the old days, a programmer with RAD needs would have built a second and third tier on top of the WIN32 API. Three-tiered simply means low-level functionality, medium-level functionality, and then high-level functionality. When it comes to debugging, one would rarely have to go more than three levels deep to find a problem, unlike deep object inheritance hierarchies, which can make debugging complex. Tracking code execution flow is vital to proper debugging, and a classless (and objectless), purely procedural style of coding against a flat API with basically three tiers of functionality (low, medium, high) can produce code that is easier to read, easier to debug, and better performing. More and more programmers are taking notice of the problems with OOP and see the value in a more traditional style of coding.

Why am I convinced?

I had an interesting conversation with a friend one day. This friend used to work in the software industry years ago. He was asking me about what I do, and when I explained that I develop a tool for programmers he was curious. He said that of course I use OOP, and when I said no, he was surprised. Personally, I feel OOP has its place, but I have been programming in a certain style for so long that it makes little sense to do something different just because everyone else does. I coded in the flat procedural style, and as long as it worked, why not use it? But my friend's response to something I said told me that maybe the old way of coding still had its benefits.

I started to explain to my friend what the software library I wrote does. I told him it does "this, this, this, this": the typical verbal list of features all of us programmers like to rattle off when explaining what our software does. Most software companies still do this today in their advertisements, with the bulleted list of features showing what their software does compared to the other guy's. So I started listing what my own software did, and the biggest surprise came when I said that the entire library is only about 1 megabyte in size (the core being only 700 KB). My friend quickly corrected me: "You mean 10 megabytes, right?" "No," I said, "I mean one megabyte." He was correcting me because he was basing his view on the extensive feature set I had just described. His reply was, "That is impossible!"

Why impossible? Likely because he felt that the feature set did not match the size of the library. In his mind, I had mentioned too many features for such a small library. Then it dawned on me what the real issue was: old-time procedural program design versus OOP. My friend had come from an OOP background. Native coding (WIN32) combined with a purely procedural style is the reason I was able to build such a small library with such an extensive feature set. But remember where I started. I was writing apps in the days when a computer had a 1 MHz CPU and 64 KB of memory. Naturally, I got used to building as much as possible with as few hardware resources as possible. This actually showed up in the development of my company's primary product over the years (each new version came out on roughly a three-year cycle). The core runtime library (DLL) of version 1.0, over 12 years ago, was only 122 KB. Version 2.0 was 186 KB. Version 3.0 was 325 KB. Version 4.0 was a whopping 515 KB, and version 5.0 was a fat 700 KB. OK, I say "fat" in jest. 700 kilobytes is tiny by today's standards for an extensive UI library, but while developing it I kept complaining to myself, "It's too big." Why was I my own worst critic when it came to the size of my software library? Because I was trained at a time when every byte counted. If it didn't fit on a single floppy disk, it was "huge". That mindset was ingrained in me.

Maybe this is why I liked the programming language I have used for the last 14 years. Its developer, Bob Zale, who sadly died in 2012, had a saying he posted in his office: "smaller, faster; smaller, faster; smaller, faster". I guess, being an old-timer, I had the same thing burned into my programmer's mindset.

There is something about “smaller, faster”

So with all the interest in smaller mobile devices (some even the size of a watch), maybe we old-timers have something we can teach the current generation of programmers: how to build software which is smaller and faster. Trust me when I say, we old-timers know how to squeeze every cycle out of a CPU. We know how to work with minimal memory. We learned the "tricks of the trade" at a time when we had little choice. I even wrote my own compiler at a time when I was desperate for more speed (on the Commodore 64). Who wouldn't, when faced with a 1 MHz processor? How else do you get performance out of that?

So, you young programmers today, don't laugh when an old-timer tells you how they used to (or maybe still do) code for performance. Maybe they know something you missed along the way. Who wouldn't want smaller, faster software today?

Now, before you young programmers start posting about how this "old guy" has lost his mind: I am still coding today. I have been a native coder (WIN32) for over 14 years now, and I still do plenty of low-level work, including writing custom WIN32 controls from scratch, working at the pixel level on graphics engines (a 2D sprite engine, image rotation and scaling, image filters), and working with 3D using OpenGL. But like many an old-timer, I haven't forgotten the lessons of my early years. I am still coding with the goal of smaller and faster. Amazingly, with all the interest in mobile (meaning smaller devices with fewer resources), smaller and faster is now in fashion again, and even necessary.