In a premodern world, power (governmental positions and largesse, mercantile advantage) was typically distributed to those who were related to (or at least part of the same social set as) royal families, aristocracy, and nobility. Kings’ younger brothers were made admirals and generals willy-nilly. Land was granted to the favorites of queens (or mistresses). One of the most famous examples from late 19C Britain was the Cabinet appointment of Arthur Balfour by his uncle, Prime Minister Robert Cecil, Marquess of Salisbury. (The term “nepotism” is actually far older, coined for popes who showered offices on their nephews, but Balfour’s elevation did give rise to the catch-all explanation of mysterious good fortune: “Bob’s your uncle.”)
Increased professionalization of the military and the civilian bureaucracy in 18/19C Europe marked the shift to more professional management (the first US civil service reform, the Pendleton Act, came in 1883; and, of course, for centuries China’s famous imperial service had been based on a rigorous examination for wanna-be mandarins). As corporations increased in size and distance from their investors (late 19C–mid 20C), they, too, responded to competitive pressures by professionalizing their management. Standardized testing for academic admissions (the SATs began in 1926) expanded the scope of “merit”-based advancement in society.
From this perspective, it’s hard to see “meritocracy” as any sort of bad thing. It reads as a marker of social progress, a manifestation of modern, organized, rationally-based decision-making; both public and private sectors seemed poised to benefit from moving away from “who do you know?” as the standard for hiring and promotion in increasingly large and complex activities.
Still, there is no progress without downsides. Here, the idea that “merit” could be measured created several problems. The first was the conflation of the measurable with ability or value. People like simple answers, and formal, statistically-based, well-organized decision-making structures fill the bill, regardless of whatever edges of judgment and insight are carved off in the process. The second was another frequent human response: “gaming the system,” most famously manifest in the test-cramming courses of several East Asian countries. If objective criteria are established, they can be targeted for success. This fosters an environment in which the selection process becomes more important than the underlying values for which the selection is made. Individuality is suppressed in favor of conformity. Resources are applied to secure success at the earliest possible step in the selection process (e.g., competition for private kindergartens in Manhattan) and carried to pathetic extremes (e.g., the recent college admissions scandals, perpetrated by very privileged/rich families).
The most insidious problem, however, was that people started to believe that the “merit” system was definitive (instead of a rough bureaucratic approximation). In other words, “merit” for purposes of selection and advancement was conflated with moral worth from a social perspective. This has had all sorts of pernicious effects. For example, most people would think that someone who scored well on academic tests or met thoughtfully-designed hiring/promotion criteria was better than someone whose interests and values were not susceptible to the measurability/bureaucratic mentality. (Why is it, exactly, that being a lawyer is better than being a carpenter?) This is particularly true of those with “merit,” who tend to believe their own version of the “chosen people/elect of God” self-justifying worldview. Such an approach profoundly distorts society, government, business, employment, and personal life choices. [You can get a more detailed exploration of this issue from Michael Sandel, a pretty-insightful Harvard professor, here.]
Then there is the rather large question of whether “merit,” even if it were an accurate assessment of moral value or individual capability, is fairly determined in our society. The standards for “merit” have been set by a society with a long history of domination by white males from wealthy families with extraordinary opportunities. It can’t be surprising that their standards of “merit” reflect their own criteria of quality. But other people have other perspectives, and it is possible (likely?) that the embedded structures are highly discriminatory.
Moreover, it seems pretty clear that “merit” is more a result of nurture than nature. A stable, supportive family; access to attentive, challenging, and enriching educational and cultural experiences; physical and environmental health: all of these strongly foster “merit,” but they are not equally distributed in society. So it’s no wonder that prototypical elite college freshmen have most of these advantages. We have to ask whether their “merit” is really theirs. [Full disclosure: I had all of these advantages, and then some.]
Does that mean that those with “merit” aren’t relatively smart or capable? No. But it does undercut, in terms of both salaries and societal esteem, the idea that they “earned” it. It undercuts the idea that our society really believes in the equality of opportunity we so often proclaim. And it undercuts the idea that our standards for value and “merit” are really as objective as we might like them to seem.
I’ve consistently put “merit” in quotes throughout this essay as a reminder of how easy it is to forget that it is a social construct. If we really believe that the quality of the individual matters, then we are losing out on a lot of talent. If we insist that “merit” is moral value, we are losing the chance to reframe our society around a broader sense of value and ethics. We are not just harming those with less “merit”; we are harming ourselves.