HadronZoo: Bespoke Software Developers
Meet the Founder

My name is Russell Ballard and, as founder, I'm here to explain the thinking behind HadronZoo. Since that really means my thinking, here is some blurb about me. First off, I make no pretence of being normal: I'm an outright geek. I never grew out of taking things apart and putting them back together, and there are too many things that never cease to fascinate me. I find the binary chop algorithm, for example, every bit as thrilling today as the day I first learned of it. It's not about little things pleasing little minds. I have always been driven to understand the how and why of the world, often at the expense of enjoying what it has to offer. Asking how led me to science and engineering. Asking why led me to psychology, philosophy, economics, geopolitics and history. I'm at degree level or above in several subjects, not because I am a genius but because of the ridiculous amount of time I've spent researching things!
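For the uninitiated, the binary chop (binary search) repeatedly halves a sorted range until the target is found or the range is empty. Here is a minimal sketch in C++, purely by way of illustration (the function name binaryChop is mine, not anything from HadronZoo):

    #include <cstddef>
    #include <vector>

    // Binary chop: repeatedly halve the search range of a sorted vector
    // until the target is found or the range is empty. Returns the index
    // of the target, or -1 if it is absent.
    long binaryChop(const std::vector<int>& v, int target)
    {
        std::size_t lo = 0, hi = v.size();
        while (lo < hi)
        {
            std::size_t mid = lo + (hi - lo) / 2;   // midpoint, overflow safe
            if (v[mid] == target)
                return (long) mid;
            if (v[mid] < target)
                lo = mid + 1;       // target, if present, is in the upper half
            else
                hi = mid;           // target, if present, is in the lower half
        }
        return -1;                  // not present
    }

Nothing clever, just halving the range on each pass, so a million entries are found in twenty comparisons. What's not to love?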

In fact I have learning difficulties. I'm dyslexic, dyspraxic, and a poor rote learner. Sometimes it just clicks, but when it does not, no amount of rocking back and forth, reciting over and over, will make it click. I have to look at it from all angles and develop a first-principles understanding, which takes time. This led to a consistent pattern: I was slow on the uptake, but would eventually overtake. There wasn't always the opportunity for the latter at work though! Nor is learning the only problem. I either run on adrenaline, or on autopilot. I have a long attention span and a sharp focus if the challenge or the stimulation is there. I'm equally settled with tasks that allow my mind to freewheel. My driving endurance, for example, is well over 1,000 km. What I struggle with are tedious tasks that require attention. For these, I practically need to be held at gunpoint.

Apart from the odd false start, I've had quite a career. I graduated in physics in 1982 and started out as an embedded programmer, working on such things as automated PCB drilling machines. Back then, particularly with embedded, you had to be good at counting bytes and clock cycles. Towards the late '80s I moved over to data processing and carried this with me. This cast the die. From then on the mainstay, all the way through to the turn of the century, was circumventing performance bottlenecks. By the late '90s I had reached the top of this game. I was designing proprietary database systems for a data warehouse, had 'rock star' status and, for the first time ever, had billions of bytes to count!

For the first 10 years it was all C, thereafter all C++. All this work was real time, but none of it involved the internet. So in 2000, I rented a server (Linux, of course), invited friends and colleagues to join in, and hired an intern to teach me HTML, CSS, JavaScript and LAMP. One aim was to improve my appalling scripting skills. I cracked JavaScript but, in spite of ongoing use, I'm far from fluent two decades on! I made limited progress with PHP but none whatsoever with Perl, and what little I've seen of Python tells me it's probably another Perl. My real objective, knowing from the outset that I'd never excel with scripts, was to find ways I could get involved with the internet as a real time C++ developer.

Real Time; Big Program; C++ Throughout.

I mention the above because it helps explain the software. I'm on the autistic spectrum, but much of what I say chimes with people who don't match that description. The ethos of the software is "Real time; Big program; C++ throughout". That says performance is paramount, everything in C++ and nothing but C++, but it also signals rejection of the 'small program culture', in which systems consist entirely of small applets (or worse, scripts). Small program is about writing deliberately small programs, but big program isn't about writing deliberately big programs. It's about aligning program boundaries to data boundaries, with the program size and complexity being whatever they need to be to accomplish this. Data boundaries are dictated by data distribution or resilience considerations or, more commonly, by a microservice that serves a shared data resource.

It's how things were done in my day, so all I am doing here is sticking with what I know. I'm entitled to be old fashioned because I'm old, but I'd be lying if I didn't admit that one reason I have not followed the pack is that I found it very difficult to do so. The evolution of methods and practices has moved many aspects of software development into the realm of 'tedium requiring attention'. Java was a case in point. I tried to learn it but lost patience with the endless list of things I had to install!

The necessary, the whole necessary, and nothing but the necessary.

There were also sound reasons for not following the pack. One is the programs I specialize in, which mostly test for adjacency across a multitude of memory-resident maps (a sketch follows below). It's how scheduling systems work, how trading systems work, and how much of AI works. RAM occupancy is often in the gigabytes, so 'small program' simply doesn't come into it! Another, far more important reason is the principle of Occam's razor: Non sunt multiplicanda entia praeter necessitatem; entities are not to be multiplied beyond necessity; or as I prefer, the necessary, the whole necessary, and nothing but the necessary. One golden rule I absolutely insist upon is never to run a program (particularly a server program) without the source code and a clear understanding of it! This is rarely mentioned these days, but it was instilled in me and, unless I have missed something, it still applies. Implicit in this rule is the application of Occam's razor. It isn't good enough to know that imported components are doing what they say on the tin. You have to know what else they are doing.
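To give a flavour of the adjacency testing I mean, here is a simplified sketch (my own illustration, not HadronZoo code; the Diary type and clashes function are hypothetical): a scheduling system might hold bookings as intervals in a memory-resident map keyed by start time, and ask whether a candidate interval clashes with either of its neighbours:

    #include <iterator>
    #include <map>

    // Bookings held as half-open intervals [start, end), keyed by start time.
    // The map is entirely memory resident; a clash test is two tree lookups.
    using Diary = std::map<long, long>;   // start -> end (e.g. in seconds)

    bool clashes(const Diary& diary, long start, long end)
    {
        // First booking starting at or after the candidate
        auto next = diary.lower_bound(start);

        // Overlaps the next booking if the candidate ends after it starts
        if (next != diary.end() && end > next->first)
            return true;

        // Overlaps the previous booking if that booking ends after the
        // candidate starts
        if (next != diary.begin())
        {
            auto prev = std::prev(next);
            if (prev->second > start)
                return true;
        }
        return false;   // free slot (use >= to forbid abutment as well)
    }

The whole diary lives in RAM and each test is a couple of tree lookups. Multiply that across a multitude of such maps, and gigabyte occupancy follows naturally.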

When this rule was explained to me, it was easy to follow. Ideally one would fully understand the source code, and be able to identify data islands in the data model and all excess functionality. However, one could eliminate the most serious problems simply by isolating the socket functionality. Following the rule today has become much harder. The bewildering install scripts have not helped, and with the code reuse culture and the subsequent rise of imported functionality, adherence has all but been abandoned. Sure, most of the time it will be OK, but is it within your control to fix it if it isn't?

I'm real time, big program, C++ throughout if you please. And because I'm not prepared to get myself into that sort of difficulty, I'm real time, big program, C++ throughout if you don't please. There will be no bewildering install scripts. Just a tar file: untar it, type make, and run the program at will. Of course I'll import functionality, but only where it is absolutely necessary, and only where I can monitor all communication between the third-party software and my own!

The Rules of the House

Ventures can do worse than run out of money. They can become a Frankenstein's monster, causing the proprietor to run out of time! Given this concern, the last thing I want to do is create dependency. It is said that the customer is always right but, with respect, the customer is not always right. Many want bells and whistles they don't need. Software vendors, particularly those who do want to create dependency, see this as extra money, so they smile sweetly and go along with every whim and fancy. I will push back on this, because I want my customers to be as pathologically independent as I am, and I want them to shave with Occam. I will sell you bells and whistles if you insist, but you are going to have to insist!

Thanks for listening. Enjoy HadronZoo!