Steve Harris

Condemned to Repeat It

Existentialism

10/29/2021

This is NOT about Sartre, or Kierkegaard, or any number of other Euro philosophers who wrestled with issues of the meaning of human existence (except incidentally and tangentially).

It is about what some people do when they feel their existence is at stake.

I have been struck by the juxtaposition of the stances of two large slices of the human race who perceive such a threat, both of which I have discussed from time to time in this series.

* There are those—in the US and elsewhere—who feel that their world is crumbling: that the components and structures by which they define themselves are going away. This group encompasses those who have traditionally held significant power in society, often based on race, gender, or economic status, as well as those who may lack any such significant power, but still feel their identity is tied up with the societal and epistemological status quo.
* There are those—in the US and elsewhere—whose world is crumbling: who recognize that the current levels and momentum of human disruption of the planetary environment will lead to the deaths of large swaths of humans, as well as other species, over the next decades or century.

The first group is not (uniquely) physically at risk; but their conception of who they are and how they fit into the world is being pressured by societal changes, including demographics, unfamiliar language and semantics, new social values, and competition for power/wealth/status. The acceleration of change—a hallmark of modernity for the past 250 years—is bewildering and unsettling.

The second group is (although not uniquely) physically at risk. Many in this group understand, at least in broad terms, the threat to the planet and the species. They accept the premises of science and generally have confidence in the processes and methods by which a dismal forecast might be proffered. Some in this group, for a wide variety of reasons, are unaware of the probable outcome of humanity’s current course; sometimes due to lack of education or current information, sometimes (which includes much of the first group) because of nescience (i.e., intentional ignorance or the closed-mindedness of denial).

Many in the first group are marshaling their considerable resources, including physical demonstrations and political engagement, to hold back the tide of social change. They seem to be ready to dispose of the nominally bedrock norms of behavior, including respect for law, fairness, due process, and the essential social glue of democratic societies. They are “all in.” Within their worldview, this makes sense. The Constitution (the social contract) is not a suicide pact. Lesser norms must be sacrificed to ensure survival. In a particular and tightly-framed way, I admire their clarity of thought and their willingness to take action to preserve themselves (even as I disagree with most of their premises and fears).

Of the second group, at least those who are aware and are in a political and financial position to take more than cursory or minimal actions in response, only a few seem to have the same “fire in the belly.” Most of them, even as they recognize the risk and proclaim their rationalism and their humanism, do little…or nothing.

Those in the first group misperceive (in my view) the nature and extent of the risk they face, calling it existential. But, having done so, they act drastically, and, arguably, “improperly,” “illegally” (with due citations to the ideas of Jefferson and the practice of Lincoln), and “wrongly.” But at least they act to defend themselves and the world as they perceive it.

Most in the (aware portion of the) second group perceive (accurately, in my view) the nature and extent of the risk they (we!) face, calling it existential. But, having done so, they act incrementally, in accordance with political niceties and legal standards, in the hope that it will all work out or, at least, that the damage will be limited to others or that they will be dead (by natural causes) before things get brutally bad.

I’m not sure exactly how to conclude this discussion of two groups; but I am struck by their intimate parallels. It would be a fine thing if both groups were to change their stances. History, however, doesn’t give us lots of encouraging examples in this regard. Indeed, of such impasses, revolutions are made.

Monopoly Game

10/22/2021

There’s much brewing in Washington these days about “Big Tech.” Are Amazon, Google, and Facebook (yes, the perennials: Microsoft and Apple are there, too) mistreating consumers or competitors? Should the government, either by new legislation or by enforcement actions taken by the DoJ, Federal Trade Commission, FCC, or another agency, rein them in?

Having worked for that old champion monopolist—AT&T—some years back, I am wary of the effectiveness of such actions; government is always going to play “catch-up” in trying to manage rapidly changing industries. But beyond the particulars of antitrust law, it’s important to understand the broad history of regulation and what it’s for.

Speaking broadly and roughly, the theory of antitrust, developed at the end of the 19C, is a counterpart to a much longer history of monopolies and government regulation of powerful businesses. Formal, legal monopolies were set up by the State as a means of coopting private businessmen to engage in an enterprise which the State could nominally control but wasn’t very good at managing. The English East India Company, for example, was set up in the 17C to trade between England and the Far East, and it was granted a monopoly on this trade, i.e., the EIC was the only legal importer of spices, etc., back into England. In return, the State took an (often substantial) cut of the action. In addition, certain “natural monopolies,” e.g., ferry boats or bridges, were authorized or regulated as necessary enterprises whose unbridled market power could gouge the ordinary citizenry. This last type eventually evolved into our modern regulation of utilities, such as transportation, power, and water companies and, until recently, communications companies (especially my former employer).

Modern antitrust grew in response to the creation of cartels and other forms of business combination (sometimes structured as a “trust”). While grounded in economic theories of distortion of markets (e.g., monopolization or price-fixing) and enacted to meet public outcries against abusive practices, to me the essence of antitrust has always been about power.

The State is conceived as the fundamental crystallization of society and its power structure. The standard definition of a State is an entity which plausibly claims a monopoly on the legitimate use of force in a society. But its power goes well beyond physical coercion (armies, police) to encompass the supreme authority in a society. Unlike normal policing, or concerns with domestic unrest/terrorism, with antitrust the State is recognizing that power works in ways beyond physical force, and it wants to ensure that it keeps all its competitors in the power game under its control.

In this way, antitrust works in conjunction with traditional regulation of railroads (late 19C under the Interstate Commerce Commission), banks, and utilities. The mother of all antitrust cases was against Standard Oil, the Rockefeller vehicle for dominating the oil and gas industry. In 1911, the Supreme Court found that Standard Oil had violated the Sherman Act (1890), and the courts ordered it split into thirty-four separate companies (some of which subsequently recombined).

The break-up of AT&T (1982) was a similar situation, complicated by the existence of regulatory structures and the beginnings of rapidly changing telecommunications technology. It, too, was broken up into seven major companies (one of which hired me in 1983), most of which eventually recombined into the new AT&T and Verizon. Most antitrust cases are not of this magnitude and significance. Price fixing among auction houses, while a glamorous story at the time, only affected the small group of people who are in the market for a Picasso painting.

The same cannot be said of today’s Big Tech companies. Their behavior affects billions every day and, more importantly, the uncertainty over the evolution of the advertising, retail, entertainment, and media industries in which they operate makes their potential market power scary. Their high profitability makes them juicy targets from a PR perspective. However, as with every aspect of governmental intervention into society, its political, bureaucratic, and procedural constraints make regulating private companies difficult. This is especially so for Big Tech, since their products and markets are evolving rapidly. It’s challenging to know clearly what kinds of products and practices are problematic, since by the time the legal and administrative processes take their course, the market will have moved on, with new services and new competitors. (Imagine spending a lot of time and effort to control AOL or Yahoo!)

So, what’s really going on? Some of the governmental effort is grandstanding: how much fun it is to watch Mark Zuckerberg being raked over the coals in a Congressional hearing. Some is signaling/deterrence (“Don’t take any more aggressive steps in your markets, Google, or we will really come down hard on you.”). And some is using antitrust mechanisms to beat up on companies for being too big and amassing too much effective power, even if there is no clear antitrust violation going on. As you might suspect, these three modes usually bleed into one another. Companies also use antitrust laws to go after competitors and suppliers when they can’t win in the market; it’s relatively cheap to do so, generates good PR, and you might get lucky. But it’s also pretty difficult to sympathize with either side in the various battles of the corporate giants, whether it’s MCI vs. AT&T, high-tech patent battles, or Apple vs. Microsoft.

None of this is to say that Big Tech is innocent of wrongdoing and exploitation of market power. While corporations are not charities, fat profits make for arrogance and blindness about how corporate actions affect suppliers, competitors, and customers. It’s easy to rationalize abusive behavior when your business model increases margins.

Still, at bottom, much of the regulatory/antitrust game is about big government making sure nobody else in society gets uppity and forgets who is really in charge.

Me First

10/15/2021

Two of the central premises of our modern world over the past 300 years have been economic growth/capitalism and democracy. Each can be seen as part of the historical dialectic which posits change from the prior (ancien) regime: stasis and monarchy/oligarchy, respectively. Each positioned itself as a positive change (aka “progress”), and with some plausibility. As you know from my earlier postings, it’s hard to argue against democracy, either theoretically or, in the long term, as to its practical benefits. Economic growth, too, yoked to technology and higher living standards, has carried the day (at least until recently, when Nature/climate has begun to submit its bill for damages deferred).

However laudable these twin touchstones might be from a generic, human/global perspective, their implementation has been tainted by a sense that noble aspirations have been appropriated by elites along the way. These groups, who were relatively advantaged at the time, have maintained or expanded their disproportionate share of the benefits of capitalism and democracy. One can characterize their posture as: “Yes, but me first.”

Their ability to preserve economic and political power, respectively, has been enhanced by the fact that the two are mutually reinforcing.

Their economic argument is relatively easy to see, captured in the well-known theory of “trickle-down” economics: “Let me (the merchant/lawyer, etc.) get rich by reducing the government’s burdens on me, and benefits will—in due course—arrive on the doorstep of the not-so-well-off.” Repeatedly demolished over the past century, both as to its theoretical underpinnings and the historical record (most lately by Paul Krugman), “trickle-down” retains a political vitality explainable only by a combination of disingenuousness and naivete hidden under a rubric of anti-governmental “liberalism.” As I have noted previously, there’s plenty of reason to be wary of gratuitous government involvement, which has repeatedly demonstrated that it is no panacea for all of the world’s ills. Some classic “liberals” have advanced their views based on such concerns and some solid philosophical points. However, it’s hard to believe that many (less philosophical) advocates of lower government burdens are not more mindful of their personal benefits than of the common weal. Noble policies with disproportionate effects have been a frequent result.

In fact, as shown by many economic historians (most recently Thomas Piketty), economic inequality has increased over time (generally as a result of governmental policies). Indeed, the principal means by which inequality has been reduced have been wars and other catastrophes, or the economic disruption of new factors of production over which neither the state nor society has had much control.

This is true both on a domestic basis and globally. The implicit rationale of imperialism was the economic aggrandizement of metropoles (and their elites), leaving only incidental benefits (if that) for the peripheries. Comparable “charitable” stances were taken by elites in most “advanced” countries over the past few centuries, spurred either by guilt or by fear of revolt.

The push for democracy has also featured more arguments by and benefits for elites (as against a monarch or a smaller elite) than attention to the power of the mass of people. Magna Carta (1215), often cited as an essential text of the Western (especially English) democratic tradition, was pretty much all about the Barons who negotiated with King John. Again, there was no thought at the time that distributing political power to ordinary folks was a good or likely plan. It was only later, as a wider group developed the economic foundation to challenge the embedded political power of their time (17-18C), that it became a useful precedent, this time used against an aristocracy now fully embedded in the political/economic/cultural power structure of that age.

Even then, in the heyday of revolution, the dispersion of political power was incremental. Indeed, it is striking to compare the broad political structures of ancient Athens with those of the US Constitution; neither had room for women, slaves, or foreigners (i.e., most folks) and the US system usually also excluded free white males who had little or no property. The entire premise of the Electoral College is that the election of the President cannot be entrusted to ordinary citizens; they should rely on their more knowledgeable (well-off) colleagues to make the choice.

Britain, France, and the US—the leading “liberal” states of the 19C—all sharply restricted voting power. In various forms and methods, the ancien regime—traditional elites—persisted in power. For all its good PR, mass democracy is pretty much a 20C phenomenon (and it is still very much a work-in-progress).

As with the economic angle, the arguments for slowing down democracy have been advanced by those with some power—intellectual elites, generals, the “rich”—to provide enough cover for their continued domination of their societies. However, in a sense, the history of democracy (at least over the last two hundred years) can be seen as aligning with the economic “trickle-down” model. Once we moved past the “divine right” theory of kingship, monarchies could only be a function of the structure of human power. Traditional arguments against democracy—the ignorance and volatility of the mob—which had been in place since Plato, were tempered by increasing literacy and the insertion of mediating representatives into the governmental structure.

The pace of these changes and the continuation of political inequality testify that, at least in this context, “trickle-down” takes generations. This is not surprising in terms of the pace of human social change generally, but it makes it even more difficult to see the economic version as anything other than propaganda designed to placate the bulk of society sufficiently to avoid more disruptive (not to say revolutionary) demands. One might say that the design of modern society, particularly the rise of the “middle class,” which could claim some degree of economic power and at least the form of political power, has evolved consistent with the needs/interests of elites to retain most of their own power.

Overall, it’s hard to discern the specific causes of the dispersal of economic and political power. How much was taken? How much was ceded? Are there any coherent actors or just rough rubrics and groupings? Social scientists propose neat explanatory models. Historians insist that the variety of human situations (different societies, different eras, different mentalités) is too vast and complex to fit into a single story.

The Pace of Science

10/8/2021

The journal Science reported in September that footprints in New Mexico are believed to be well over 20,000 years old, providing evidence that humans were in the Western Hemisphere several millennia earlier than previously thought; a significant revision of the conventional anthropological story of globalization. In reading the coverage of this development in the Times, however, I was struck more by the nature and process of the discovery of these footprints than by their implications for understanding human evolution. The article noted that the footprints had first been discovered in 2009. Beneath the article, the Times included some links to earlier stories on related topics, including one from March 2018 entitled “Earliest Known Human Footprints in North America Found on Canadian Island,” reporting on a discovery of footprints from 13,000 years ago.

So, even though the 23,000-year-old New Mexico evidence had been known since 2009, the 2018 Times reported the 13,000-year-old footprints as the oldest on the continent.

This tells us some important things about the nature of science and highlights some of our current difficulties with interpreting scientific findings. As I noted last year, “science” is not a “thing;” there is no single organization, nor an official spokesperson like the President of the United States. Science is a process of thinking about and trying to understand the natural world (including humans and their societies). Lots of people are part of millions of projects to figure things out. Science (i.e., the mass of those people, over time) aspires to “truth,” but, unlike most religions, does not claim to have done more than ‘the best job we can’ at getting close to it.

A twelve-year lag from discovery to publication isn’t all that unusual in science. After all, when dealing with events from thousands of years ago, what’s a decade? Part of the time is taken up with confirmation, verification, theorizing, and contextualization; and all that comes before ploughing through the formal publication process, with its own hurdles and delays. History isn’t all that different. Normally, it’s no big deal.

The pandemic has fundamentally changed how we see this process, however. Not only is each small step along the way immediately publicized, but our anxiety over the impact of COVID has led us, as a global society, to expect/demand instantaneous results and alignment. The ordinary, incremental, and conservative/prudent course taken by researchers, Big Pharma, and the FDA/CDC/NIH can come across as bureaucratic and uncaring. I mean: What’s with this Pfizer booster shot? The Administration (self-proclaimed champions of “science”) says: go! (even before the test results are in). Pfizer says: go! Then it’s a while until the FDA Advisory Committee says: go! Then the FDA has to “make it official.” Then the CDC has to tweak the answer to fit into an implementation plan. Then we all need to hear the benediction from Dr. Fauci. The “normal” scientific process values methods that produce a high degree of confidence in its results; it doesn’t aim at a binary yes/no, go/no-go decision; it would rather be right (i.e., highly confident) than fast.

But this looks weird under the media spotlight, amplified by global anxieties. What we have seen over the past 18 months is pretty miraculous on several levels, and we should all be grateful. The science that normally plods along, unseen by more than a few hundred people, is now “BREAKING NEWS.” The folks who do the work should perhaps be cut some slack at being slightly discombobulated by all the attention and the expectations of not only miracles, but immediate and definitive miracles to boot.

Nor should we be surprised when “Science” changes “its” mind. We learn. We learn methodically. We then adapt to new learnings (hopefully). We all remember when we were scrubbing down boxes of soy milk from the market and dousing cantaloupes in a mild detergent rinse. We learned that COVID is basically an airborne disease and adjusted. When the next curve ball comes at us, we shouldn’t be surprised. None of this means that “Science” was wrong, since it never claimed to be “right.” It only claimed to give us its best thinking at the time.

This lack of certainty inherent in the scientific process unfortunately creates an opening for those who, for various reasons, either want to attack “Science” or who think that, since it’s not perfect, anybody can do it and come up with their own explanations that are just as good. This is a breeding ground for anti-vaxxers, anti-masketeers, and (my recent favorite) a woman apparently suing a hospital in Bakersfield to require them to give ivermectin (an antiparasitic drug) to her ICU-bound, COVID-afflicted husband, despite a total absence of “scientific” evidence of effectiveness.

We are seeing the same set of reactions to the science of climate change, to even more dire results. There, as with COVID, the projections of scientists may prove to be incorrect. In the 18 and 19C, most intellectual elites thought that there was an invisible “aether” through which many physical phenomena operated, and most doctors thought that “bleeding” patients was a great therapeutic regimen. All those smart guys were wrong, or at least we no longer think their answers make sense. As I have said in other contexts, science, like democracy, is about humility and our collective ability/willingness to allow for the possibility of error. So, while there is no room for smugness, neither is there room for burying our collective head in the sand. Intelligent people operate on the basis of the best information/ideas/theories that they have available. In terms of climate, it’s only prudent to take significant action immediately.

In terms of the anthropological peopling of the Western Hemisphere, there’s no great rush to confirm that the New Mexico footprints are 23,000 years old. If the 2018 Times report omitted the 2009 discovery of those footprints, we can all live with that. The stakes are lower. Most of the time, if scientists take twelve years to publish their startling data and insights, that’s OK; it’s just science at work.

Names and Places

10/1/2021

I imagine several of you have been fortunate enough to have traveled to Beijing; but it’s not likely any of you have been to Peking. The difference is the change in the English romanization of the Chinese name, instituted under Mao Zedong (aka Mao Tse-tung) in the aftermath of the Chinese Communist victory in the civil war in 1949. It wasn’t until the 1970s that the government insisted on the usage, at which point the “Beijing” moniker became more-or-less standard.

The practice of not using the “home” language name for a country or city and using instead a different version from the language of the (foreign) speaker has struck me as culturally significant. I suspect it has something to do with Western imperialistic attitudes or even a broader arrogance towards foreigners.

The situation is complicated (as in our first example) by the use of non-Roman alphabetic or other character sets (e.g., Chinese, Japanese, Thai, Russian, Arabic, Greek) which then must be transcribed/transliterated into a Roman alphabetic form.

Other examples of countries insisting on the home-language spelling include India’s shift from Bombay to Mumbai (1995) and Calcutta to Kolkata (2001); Burma became Myanmar and Rangoon became Yangon in 1989. In both cases, these were delayed reactions to decolonization and the withdrawal of the influence of the British, who ran both countries until the mid-20C.

My point here is about language. The British had a protectorate in what they called Swaziland from 1906 to 1968; it took another fifty years for the Swazis to change their country’s name to Eswatini (2018). More substantive name changes (e.g., Gold Coast to Ghana in 1957, Upper Volta to Burkina Faso in 1984), which often speak to a post-imperial identity, raise a similar but broader point. By the same token, this is different than just a matter of translating from a foreign language. Milk (Eng.) = lait (Fr.) = leche (Sp.). If two French speakers were discussing Wisconsin’s principal agricultural product, they would speak of “lait.” A bilingual discussion could use either one or both terms, and if two English speakers were talking about French cows, they would speak of “milk” (except if they were ordering some concoction at Starbucks).

It seems to me that proper names are different, however. The practice is all over the map (so to speak). Why do English speakers use “Germany” (and French speakers “Allemagne”), not Deutschland? Is the capital of Italy best referred to as “Rome” or “Roma”? (I guess if we’re not in Roma, we don’t do as the Romans do!) The French take the Chunnel to Londres, not London.

Norge, Sverige, Suomi: these three Nordic countries have lots of etymological history with continental Europeans, so it’s no wonder that the latter had developed their own names for these countries despite the usage of the natives. I suspect that we could map cultural dominance over time by the common global usage of names of other places.

But I’m more concerned with how most folks don’t seem to respect others’ choices as to names. There’s no particular reason to use “lait” in Wisconsin (especially given the predominance of Germanic-language immigrants). In this case, English speakers derived “milk” from the Germanic “milch.” Regardless, there is an etymology for “milk,” just as there is for “blue” or “curry.” I guess I don’t understand why there is an etymology for “Roma” or “España.” Why should foreigners get to choose the name of a place in my country (or vice-versa)?

There’s an interesting parallel with the current controversies/confusion about respecting others’ choice of names and personal pronouns, which is, in my view, a matter of respecting an individual’s choices for how they want to be seen in society. The issue of names comes up pretty frequently in an academic environment (such as SF State) where we have students and employees from many heritages and students from a plethora of countries where (amazingly enough) they don’t speak English. It seems problematic to expect them to adopt “English” names just to “fit in.” Why should a Korean girl born “Shin-hye” be expected to be called “Susie” or some-such? “Rabindranath” does not equal “Robby,” any more than we would expect Andrea Bocelli to go by “Andy.” To be sure, names should be a matter of personal choice, and a foreign student should be free to choose “Susie” if she wishes. At the same time, I am wary of situations where such students choose an English name out of embarrassment at seeing an instructor struggle to pronounce their actual name in a language with which the instructor is unfamiliar. It’s not too hard to make an effort, even if we come from the dominant language/culture on the planet.

It’s but another small step to the more fraught issue of gender identity. I think the same principle applies—the owner chooses—but personal pronouns are deeply embedded in our language, and their recent proliferation across a multitude of gender identifications raises a bunch of challenges (remembering, new terms/new modes of self-identification, the pace of social change, and generosity of spirit in dealing with others) that I won’t dive into here. Our society is in flux, and our language is flexing with it.

This set of issues illustrates some of the themes I have talked about over the past year, such as globalization and the complexity and dynamism of modernity. From a historical perspective, you may wonder what it was like to adjust to the novel calendar adopted by the French Revolutionaries in 1793 or to be a female voting advocate in 1910. These were the commonplaces of historical change. You may be off to visit the biggest city in Italia or India, or talking to someone whose name you have to consciously pronounce, or referring to someone who looks like a “normal guy” who asks to be referred to as “they.” This is our version of social change. The phrase “Gay Paris” no longer has the currency it once had; but perhaps we can at least revive the “Paris” part to rhyme with hairy, rather than Harris.

Condemned to Repeat It --
Musings on history, society, and the world.

I don't actually agree with Santayana's famous quote, but this is my contribution, my own version of it: "Anyone who hears Santayana's quote is condemned to repeat it."
