I like C.G.P. Grey. He's a professional educator and he's done a mess of very fine videos. He generally keeps to straightforward, direct subjects, and since he does his homework, I appreciate him highly. A little more than a week ago, however, he released a 15-minute video that . . . well, watch the video in its entirety before I continue. It's well done and interesting, and shouldn't have any problem holding your attention.
As the YouTube video presently has 17,741 comments, there's little point in my commenting there (though I did). If I'm going to address my problems with the video, there's little point in doing it anywhere except my own space.
Besides, I can stretch out here and write as much as I want.
Allow me to begin with a few stipulations. Everything that Grey says about technology is - to the best of my knowledge - true. 'Baxter' will replace jobs, people don't care where they get their coffee from, robots are getting cheaper and faster, there is no rule that states that better technology is guaranteed to make more jobs for humans, the statistics quoted are accurate, present-day 'unions' will not prevent the spread of technology, the stock market has been vastly rewritten by 'bots,' bots do research, bots write music, bots write books, computers are able to be 'creative' and technology is able to replace many 'talents.' And yes, the roboticization of culture is a problem.
I have a few quibbles. I'm only going to mention one. The 'original music' turned out by the program Emily Howell is boring. Insidiously boring. If you listen to some of it, you'll find it fits with Grey's assessment and others' - it is complex and original and momentarily interesting. Listen to it for more than five minutes, however, and you'll quickly find yourself pushing it out of the front of your mind; after an hour, you'll realize you stopped listening to it long ago. In this way, it is very like most of the human-made music you listen to, but it is 100% unlike the human music you actually like. Music that you really, really like is infectious and dopamine-rich. I weep for the poor dumb bastard who gets a dopamine rush from Emily Howell. Actually, I weep for the bastard's family, friends and co-workers who have to live anywhere near him.
Very well, let's look at the 'problem' that Grey proposes: robots are cheaper and just a tiny bit better technically than humans, so they will replace human jobs, leaving many humans jobless and with no way to make a living. There will be nothing for them to do, and therefore they will simply have to go. Like horses.
For those who haven't seen the video (though you should), Grey employs an analogy regarding the replacement of horses by the automobile, in which millions of horses were killed because they ceased to be a functional part of society. As humans cease to be a functional part, Grey implies, they too will find themselves at risk of being treated as horses. As the point is stressed several times, it's quite clear that the theme behind the video is to promote FEAR of technology. It tries to promote this fear very rationally and reasonably, suggesting that we need to deal with this issue before something really, really terrible happens. Because it will. Be afraid. Be very afraid.
There are a few relevant ways in which humans are very different from horses that are completely ignored in this analogy, however. The first is that the horses themselves had no part in the decision-making process that got rid of them. Horses did not invent the cars, they did not buy the cars, they did not find the cars superior, and they were not in any way empowered to resist or improve themselves in order to adapt to the sudden existence of cars.
Of course, some humans won't be empowered to stop the plans and progress of other humans, either. Some humans will make robots, other humans will buy robots, then implement robots, while many humans will not be considered or permitted an opinion on the matter. Of course, I say 'permitted' colloquially. 'Permitted' is a limited word that exists in a limited context, but for the time being let's just say that within the law, and according to the principles of respect for property and employment, workers will be fired and no one will care.
Unlike horses, however, these humans will understand what's happening. They will have a very clear understanding of the reason why they're now unemployed and why they and their families are starving. This is the second way in which humans are very different from horses. With horses, when we 'fire' them, we let them keep working while trying to sell them - and when that doesn't work, we take them into a nice building and kill them.
We don't do this with people. We inform them that they're no longer permitted to draw a paycheque and then we calmly expect them to wander away and figure it out for themselves.
That's where the whole horse/human thing breaks down - because one of the ways that huge numbers of unemployed, starving people figure it out is to destroy everything and anything that contributed to their being unemployed and starving. And NONE of these people will give a shit about how many robots they destroy. Or what else they destroy in the process.
Finally, there's a third missing point in Grey's horse analogy. Horses are not consumers. We are already living in a society where very poor people are awarded ridiculous amounts of credit, because it is the only way we have to prop up the spending of a population that has lower and lower wages - and therefore less money to spend on the products being made. When we replace all these humans in order to make more products more cheaply, what will those humans buy the products with? More credit?
It's interesting that Grey completely ignores this, despite drawing the connection between the prospective unemployment (45%) and the unemployment during the Great Depression (25%). He fails utterly to remember that the rich and powerful (the ones who would be replacing us with robots) had to be saved by a war - in which the government bought things from them, even things the government didn't need, in order to stabilize the economy (J.K. Galbraith, an economist during the Roosevelt administration, makes some fun points about that in this video).
And where did the government get that money? Taxes. Which large corporations - the kind that implement robots on a grand scale - don't pay. What are we going to do when the vast population has no tax money left to give a government, so that it can hand welfare to the rich to stimulate an economy based on consumers who have no money?
Well, they'll create more money from the air, of course. That always goes to good places.
Thus we have several factors here that must be considered, none of which are accounted for in Grey's video and none of which involve the replacement of humans with technology. What we have is a possible attempt by bean-counters with very short-term vision to replace humans with robots, only to be horribly surprised when these humans turn around and begin to destroy the country and every bean-counter in it.
Humans, see, are smarter than horses. We compete for survival much better than horses do. This is evident from the fact that horses are our slaves. If the robots become the next form of competition against the survival of humans, anywhere, then the solution to the 'problem' becomes self-evident. Humans survive; the very dumb, unaware robots do not. It isn't a question of whether humans win over robots, but of how much pain and suffering we plan to go through before the inevitable balance is reached.
We're already running pell-mell towards that balance - and by all accounts, given the way government is allowing business to run the system, we are already planning for a great deal of inevitable pain and suffering. Robots may be here right now, as Grey says in his matter-of-fact statements (no argument there), but the coming war isn't going to be human vs. robot. It is going to be between humans who want robots and humans who see robots only as a threat.
Grey has it wrong. The humans aren't like horses. The robots are. The robots will be the ones that quietly die when we decide that for them. Unlike Skynet or whatever the hell the robots in the Matrix called themselves, robots are not going to suddenly 'become aware' and have a say. We'll experiment with them for a while and then we'll decide just how much of them we're willing to tolerate.