Automatic Real Time Systems are Scary.

This is a long post. Bear with me.

We live in an increasingly automated world. And this makes me nervous.

Computers are stupid. They will only do what we tell them to. Which leads to the obvious conclusion that any given computer program is only as good as the person or people who wrote it.

I could focus on any number of systems in this blog post, but I’ll talk about the Google Self-Driving Car project. It’s one I’ve been watching fairly closely as I put more thought into how automated our world is becoming.

On the surface of it, the self-driving car is something of a success story. The latest monthly report shows that over the course of 1,268,108 miles in autonomous mode, no accidents were reported. Which is pretty awesome. And it’s only getting better all the time, as the geniuses at Google get on the case.

I get nervous at the thought of giving up control to a system that was programmed by people. I know that this is in large part down to my own arrogance: part of me is obviously of the mindset that I am better at driving a car than a piece of software written by a fellow human being.

Here’s where my non-arrogant issues come into the equation. Consider the human element in a real-time system. Things happen in real time, things a piece of software may not know how to deal with, simply because it’s bloody difficult to anticipate every edge case and teach the software to handle it. That requires the person or people writing the code to foresee the scenario in the first place. The likelihood of such events (a meteor landing in the road, say) is very small. But the probability is still there.

A machine’s reaction speed is faster than a human’s, which could render this whole issue moot. But a human still beats a machine when it comes to improvising a reaction to the genuinely unexpected. If I were a passenger in a car when that hypothetical meteor crashed into the road in front of me, I’d be hoping the person driving was in full control. Not the car’s computer.

My next issue is the removal of personal and human responsibility from a machine that has often proven capable of killing in worst-case scenarios. Those scenarios are usually down to human stupidity, and the obvious solution would be to make all cars self-driving. But that leads to what Google’s report calls the ‘Hands off problem’. You still have cases where a human’s ability to react to the unexpected will be necessary, and the report estimates that it takes between five and eight seconds for a human to regain control. In real time, this is an absolute age. Even a full second would be too long in many cases of an unexpected event.
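To put that hand-off window in perspective, here’s a rough back-of-the-envelope sketch. The 70 mph motorway speed is my own assumption for illustration, not a figure from Google’s report:

```python
# Sketch: how far a car travels during the 5-8 second hand-off window.
# The 70 mph speed is an assumed motorway speed, not a figure from the report.
MPH_TO_MS = 0.44704  # metres per second in one mile per hour


def distance_travelled(speed_mph: float, seconds: float) -> float:
    """Distance in metres covered at a constant speed over the given delay."""
    return speed_mph * MPH_TO_MS * seconds


for delay in (1, 5, 8):
    print(f"{delay} s at 70 mph: {distance_travelled(70, delay):.0f} m")
```

Even the optimistic five-second figure works out to over 150 metres travelled blind at motorway speed, which rather underlines the point that a full second is already too long.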

Next, let’s talk about the bug-to-code ratio. I only have some books and blog posts published a few years ago to go on here (after a brief Google search). Dan Mayer published an interesting blog post back in 2012 discussing bug-to-code ratios; it’s worth a read. In it, he quotes Steve McConnell’s book Code Complete, which states the following:

Industry Average: “about 15 – 50 errors per 1000 lines of delivered code.”

That’s quite a scary number. Now, an error could be something fairly minor, like a spelling mistake in the user interface. It could also be something far more serious. Here’s the thing: I suspect this number may only have grown, even in the face of better coding standards. As we work to improve an older system, for example, the system gets larger, more complex, and more interconnected, to satisfy an ever more demanding market that wants better features.
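To see what that industry average implies at scale, here’s a quick sketch. The one-million-line codebase size is purely my illustrative assumption, not a figure for any real project:

```python
# Sketch: delivered-error counts implied by McConnell's industry-average range.
# The codebase size below is an illustrative assumption, not a real figure.
ERRORS_PER_KLOC = (15, 50)  # industry average quoted in Code Complete


def implied_defects(lines_of_code: int) -> tuple[int, int]:
    """Return the (low, high) delivered-error estimate for a codebase."""
    kloc = lines_of_code / 1000
    return tuple(int(rate * kloc) for rate in ERRORS_PER_KLOC)


low, high = implied_defects(1_000_000)
print(f"A 1M-line codebase: between {low:,} and {high:,} delivered errors")
```

Even at the bottom of the range, that’s thousands of latent errors in any codebase of serious size.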

I’m sure that Google have very stringent code quality processes and guidelines, and as such their bug-to-code ratio is incredibly low. But I doubt they can guarantee 100% clean code, without a single error. If any part of their code base meets the industry average quoted above, I find that pretty frightening. And it is feasible, given that in any software project you will have different people and teams working on different modules. In addition, how far can any published test metrics be trusted when it’s public knowledge that such metrics can very easily be faked? All it takes is one person to either get it wrong or to abandon their scruples to meet a deadline. I’m sure that the people working at Google are all fine and upstanding. Will every person who gets their hands on this kind of software be the same?

Here’s the even bigger issue. People can be clever arseholes, to put it bluntly. Cars that rely too heavily on software have already been hacked. So even if we go with the assumption that everyone working on the software is an awesome and outstanding citizen, that’s not to say that everyone in the world is.

That aside, my conclusion is that automated systems can only work when we hand all personal responsibility over to the system. Where does this stop? As an adult, I take pride in being responsible for my own welfare. The idea of handing any part of that responsibility to a system, and therefore to the people who control that system, sends chills down my spine.

Don’t get me wrong – the technology is cool. It’s one of the reasons I’ve been following the news on it closely. But the ramifications of something like this being adopted worldwide are something that I feel probably hasn’t been thought through fully. Bug ratios, the inability to anticipate the random, and malicious intervention aside, just who do you think should be in control of your car? You? Google? Or whatever governing body takes over the software at some point in the distant future?

6 thoughts on “Automatic Real Time Systems are Scary.”

  1. I’ve been taking a keen interest in driving standards for a while. I’ve passed the Institute of Advanced Motorists test and been an Observer for them. That said, I don’t drive the 18–20,000 miles a year using the system anymore.

    Standards have definitely declined. People rush from place to place, often with too little regard for the road conditions. They drive more aggressively than when I first learned to drive at the age of 19. They use telephones, smoke, have more complicated in-car systems and in my opinion take the art of driving much less seriously and courteously than they need to.

    Roads are more complicated, there are more regulated zones, more complex road layouts and complicated overloading of roads (bus lanes, lanes for car sharing, hard shoulders repurposed for live lanes at busy times, contra-flows and so on) than there were 27 years ago (gulp).

    The elderly are living longer and very often are an absolute menace on the road.

    So do I believe that machines could drive better than some humans? I think in some cases absolutely, I really do. I think we might have to reduce the complexity of our road systems or run simplified systems in parallel with regular roads.

    What I would worry about in the case of the elderly and infirm is them interrupting the car’s control system and making bad decisions of their own.


    1. For the most part, I can agree. However, all these people have had to – at some point – demonstrate some ability in order to pass a driving test. While I’m sure the software in this case could easily be programmed to pass the test as well, I think we’re still far off from a system that can be 100% trusted with our driving. 🙂


  2. Actually there are many still living that only had to apply for a license and didn’t sit a test. These drivers will be in their eighties and nineties.

    I’m assuming that Google would be opening itself up to very large liabilities if the system wasn’t at least as safe as a human. I’ve seen a video of a Google car using live roads in the US, and whilst the driver was freaking out, the car appeared to do a good job.

    What bothers me more is the possibility of hackers breaching the car’s defences and deliberately operating the controls before the driver could take over again.


    1. True! I’ve encountered them often myself. They usually seem to wear hats, too! Anyhow, the hackability of such a system is a major concern, but I’m still not convinced by current software standards – even if it is Google doing the programming. I’ve yet to see one of these cars dealing with adverse weather conditions, or the true randomness that can pop up on many busy roads. At the moment, I would say it is extremely difficult to design a system that can deal with every possible edge case. So the human element is still necessary, and a 5–8 second ‘hands on’ period is far too long in a situation that requires quick reflexes. Granted, not all people have such reflexes, but they still stand a better chance of dealing with the unpredictable.

