As with pretty much any technology, AI is simultaneously disruptive, helpful, amazing, and problematic. We have been working with word processing technology since the 1970s (I remember learning how to use a Wang system in the early 1980s). Now, most of us start typing or speaking into our phones and are prompted with the most likely completions of the words or phrases we begin. Sometimes it's really helpful; sometimes laughable; sometimes just a pain.
Accounting and other programs transfer data so that an increasing amount of mundane, routine work formerly done by clerks (remember them?) is now automated. As AI capabilities grow, their impact is moving up the labor market ladder. “Expert systems” (really just huge databases running through very fast processors) are proving to be better diagnosticians than most doctors. “Coding” (i.e., writing software) was once done by a small group of Silicon Valley geeks. Then it was “off-shored” to lesser-paid geeks in India. Now, computers are enhancing the ability of the geeks to write code faster, using the same kind of auto-complete suggestions we know from our phone messaging. Soon, the databases will be filled with enough experience that most of that work will be done in a fully automated way.
Military drones are developing along similar lines. There is an emerging ethical debate about whether a human must be in the final decision loop about whether to take out some terrorist/undesirable, lest we too quickly fall into the sci-fi world of robots choosing to take out all humans on their road to evolutionary triumph. Most of this debate is just noise. Someone (human) had to program the software for the flying car/Quicken/terminator. There are certainly issues about the criteria to be used in deciding whether to launch a missile against some jeep convoy in western Afghanistan, but that’s not a new ethical question (ask any Special Forces sniper!). There are also issues about the quality of the human programming, so that the drone is set to do the “right” thing, but those aren’t ethical issues either. Still, it makes for a juicy/ominous media story, so it gets hyped up.
Whether AI will offer us genuinely new capabilities, and actually present us with fundamentally new questions, seems doubtful. In 1854, Dr. John Snow compiled information about the location of cholera victims in London. He used the then-emerging capabilities of statistical analysis to develop a theory of what might be causing the deadly outbreak. It was an early example of applied information processing, using cutting-edge techniques and data structures; and it changed the world in important and continuing ways.
Perhaps a more interesting question is whether computing power/AI will reach the level of matching economic markets as processors of information. Traditionally, I expressed my interest in buying a box of cereal by going to the market and deciding whether I wished to spend $4 a box for Kellogg’s or $3 for the Safeway house brand. The “market” (the generic market, not Safeway the grocer) sees my action, and Kellogg’s and Safeway adjust their output accordingly. Now, Facebook tells Safeway how long I linger over its ad for cereal, and Safeway can (pre-market) translate that into its plans for production down the road. Soon, someone will be able to tell that men of a certain age who look at buying a Tesla and go see Hamilton will drink more whiskey in the next month. It’s no wonder that Google, Amazon, etc., are investing billions in figuring out how best to make AI work.
When you overlay the feasibility of governmental access to and consolidation of all this info, it’s possible to imagine that the planned economy projected by early 20C Marxists and other Socialists will be realized. There was a big debate among global intellectuals at the time. The triumph of liberalism was premised on the belief that the market was the best mechanism for operating an economy; that an economy was too complex to be rationalized and managed by humans and politics. The Soviet process (5-year plans and all that) was designed to replace the market with intelligent, conscious decisions made by the State. Market substitution didn’t work back then, but what if that failure was only because Communist theory outpaced computational realities? The Soviet Union collapsed under its “internal contradictions,” and market capitalism went on its (our?) merry way. I’m sure there is a think-tank in Beijing figuring out how an economy could be operated (and controlled) by Chairman Xi, even if Marshal Stalin didn’t have petaflops of computing power at his disposal when the Socialist paradise was projected. Could an AI-driven State replace a mid-21C market mechanism?
It’s not just the economy. Steven Spielberg’s Minority Report (2002) depicted predictive capabilities used to foresee and prevent crimes; a field in which we are now functioning at an aggregate level, even if we can’t ID specific perps in advance. Other sci-fi authors have written of future wars in which two combatants’ computers face off to see who triumphs. And, of course, there’s the “holodeck” on Star Trek: TNG, in which immersive gaming and entertainment is supercharged.
The later 20C saw all sorts of claims/fears that automation and computers would replace people. It’s coming to pass, even if a few decades later than, and not quite in the way, some pundits forecast. It’s a good bet that most of our prognostications will be off a bit as well. Perhaps an AI-based prediction algorithm will do better....