This is a long post. Bear with me.
We live in an increasingly automated world. And this makes me nervous.
Computers are stupid. They will only do what we tell them to. Which leads to the obvious conclusion that any given computer program is only as good as the person or people who wrote it.
I could focus on any number of systems in this blog post, but I’ll talk about the Google Self-Driving Car project. It’s one I’ve been watching fairly closely as I put more thought into how automated our world is becoming.
On the surface of it, the self driving car is something of a success story. The last monthly report shows that over the course of 1,268,108 miles in autonomous mode, no accidents were reported. Which is pretty awesome. It’s only getting better all the time, as the geniuses at Google get on the case.
I get nervous at the thought of giving up control to a system that was programmed by people. I know that this is in large part down to my own arrogance – part of me is obviously of the mindset that I am better at driving a car than a piece of software written by a fellow human being.
Here’s where my non-arrogant issues with this come into the equation. Let’s consider the human element in a real-time system. Things happen in real-time – things that a piece of software may not know how to deal with, simply because it’s bloody difficult to anticipate edge-case scenarios and code for them ahead of time. The likelihood of such events – a meteor landing in the road, say – is very small. But the probability is still there.
A machine’s reaction speed is faster than a human’s in a given situation, which could render this whole issue moot. But a human still beats a machine when it comes to deciding how to react to the genuinely unexpected. If I were a passenger in a car when the hypothetical meteor crashed into the road in front of me, I’d be hoping that the person driving the car was in full control. Not the computer of the car.
My next issue is the removal of personal and human responsibility from a machine that has often proven capable of killing in worst-case scenarios. These scenarios are often down to human stupidity. The solution to this would be to make all cars self-driving. But this then leads to what Google call in their report the ‘Hands off problem’. You still have the issue whereby a human’s ability to react to the unexpected will be necessary in select cases. The Google report estimates that it takes between 5 and 8 seconds for a human to regain control. In real-time terms, this is an absolute age. Even a full second would be too long in many cases of an unexpected event.
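To put that handover window in perspective, here’s a quick sketch of how far a car travels while the human is still regaining control. The 5–8 second figure comes from the report quoted above; the 70 mph speed is my own assumed example figure, not anything from Google.

```python
# Rough illustration: distance covered while a human regains control.
# The 5-8 second handover window is from the Google report quoted in
# the post; the 70 mph motorway speed is an assumed example figure.

MPH_TO_M_PER_S = 0.44704  # exact conversion factor

def distance_covered(speed_mph: float, seconds: float) -> float:
    """Metres travelled at a constant speed over the handover window."""
    return speed_mph * MPH_TO_M_PER_S * seconds

for t in (1, 5, 8):
    print(f"At 70 mph, {t}s of handover = {distance_covered(70, t):.0f} m")
```

At 70 mph, even the single second the post mentions is roughly 31 metres of road – and the full 5 to 8 seconds is a couple of hundred metres travelled with nobody properly in charge.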
Next, let’s talk about the bug-to-code ratio. I only have some books and blog posts published a few years ago to go on here (after a brief Google search). Dan Mayer published an interesting blog post back in 2012 which discussed bug-to-code ratios. It’s worth a read. In it, he quotes a book called Code Complete by Steve McConnell, which states the following:
Industry Average: “about 15 – 50 errors per 1000 lines of delivered code.”
That’s quite a scary number. Now, an error could be something fairly minor – a spelling mistake in the user interface, for instance. It could also be something more serious. Here’s the thing – I suspect that this number may only have grown, even in the face of better coding standards. As we work to improve an older system, for example, the system gets larger, more complex, and more interconnected, all to satisfy an ever more demanding market that wants better features.
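The arithmetic behind why that figure is scary is simple enough to sketch. The 15–50 errors per 1,000 lines is McConnell’s industry average as quoted above; the 100,000-line codebase size is a made-up example of mine, not a figure for any real self-driving car system.

```python
# Back-of-the-envelope defect estimate using McConnell's industry
# average of 15-50 errors per 1,000 lines of delivered code.
# The 100,000-line codebase size is a hypothetical example.

def expected_defects(lines_of_code: int, errors_per_kloc: float) -> float:
    """Expected defect count for a given defect density."""
    return lines_of_code / 1000 * errors_per_kloc

loc = 100_000  # hypothetical codebase size
low = expected_defects(loc, 15)
high = expected_defects(loc, 50)
print(f"{loc:,} lines -> between {low:.0f} and {high:.0f} expected defects")
```

Even at the optimistic end of the range, a modest codebase carries defects in the thousands – which is the point: “delivered” does not mean “clean”.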
I’m sure that Google have very stringent code quality processes and guidelines, and as such their bug-to-code ratio is incredibly low. But I doubt that they can guarantee 100% clean code, without a single error. If any part of their code base meets the above quoted industry average, then I actually find that pretty frightening. And that is feasible, given that in any software development you will have different people and teams working on different modules. In addition, how much can any published test metrics be trusted when it’s public knowledge that such metrics can very easily be faked? All it takes is one person to either get it wrong or to abandon scruples to meet a deadline. I’m sure that the people working at Google are all fine and upstanding. But will every person who gets their hands on this kind of software be the same?
Here’s the even bigger issue. People can be clever arseholes, to put it bluntly. Cars relying too much on software have already been hacked. So even if we go with the assumption that everyone working on the software is an awesome and outstanding citizen, that’s not to say that everyone in the world is.
That aside, my conclusion is that automated systems can only work when we all hand personal responsibility over to the system. Where does this stop? As an adult, I take pride in being responsible for my own welfare. The idea of handing any part of that responsibility over to a system – and therefore, to the people who are in control of that system – sends chills down my spine.
Don’t get me wrong – the technology is cool. It’s one of the reasons I’ve been following the news on it closely. But the ramifications of something like this being adopted worldwide are something that I feel probably hasn’t been thought through fully. Bug ratios, the inability to anticipate the random, and malicious intervention aside, just who do you think should be in control of your car? You? Google? Or any governing body that takes over the software at some point in the distant future?