Steve Harris

Territorial Imperative

8/27/2021

I was recently reading about the plight of certain Pacific Island nations whose lands (atolls, sand) are at risk of permanent immersion due to rising sea levels. We in the US might lose a few hundred square miles of land (mostly along the Atlantic and Gulf coasts) and, while expensive, we could handle it. If Tonga loses 300 square miles, they’re all on stilts and snorkels.

So, some in these countries (in addition to screaming loudly about global warming) are trying to imagine what their country would be like in such a scenario. One leader bought about eight square miles in another island country to move a bunch of his people. Others are looking to former colonial masters for a right of refuge. Such migrations would be wrenching to both individuals and cultures.

But, what happens to the country? If Tuvalu's islands sink, is there a country anymore? Under international law, a country's right to control a large chunk of ocean adjacent to its territory (the "exclusive economic zone") presumes that there is some land to be adjacent to … hmmm. What about the government? During WWII the Poles, Norwegians, Dutch, and French (among others) established "governments-in-exile" (usually in London) to hold the flag up until the Nazis could be defeated. That was fine for a five-year gig, but would anyone care about the government of Kiribati if half of Kiribati's people were living in New Zealand, the rest scattered, with no hope of returning for centuries?

In the sci-fi environmental disaster film "The Day After Tomorrow," the US (and much of the northern hemisphere) is buried under massive glaciation and the US government sets up (more-or-less) permanently in Mexico. Somewhere between these scenarios, it's easy to imagine millions of climate refugees moving across borders on a (more-or-less) permanent basis.

From my politico-legal-historical perspective, this raises two sets of questions. First, will this require some change to international law to set standards for the treatment of minorities/migrants who would be arriving in unheard-of numbers? Second, to whom would those individuals, (former) citizens of Bangladesh or Costa Rica or Ethiopia or Norway, look for their government? The flip side of that question: would other, continuing governments recognize a "permanent" government-in-exile with no territory?

The history of international law is replete with instances (mostly from the 19C and early 20C) of efforts to secure formal legal structures for minorities within the legal systems of their 'historical' countries. European powers leaned on China, Turkey, and several Latin American countries to secure these rights as part of their informal empires. In the aftermath of WWI, with a host of new boundaries creating large chunks of "alien minorities" on the "wrong side," extensive provisions were made to protect, e.g., Magyars in Rumania or Germans in Poland. So, the precedents exist, but they're not auspicious. The first were the product of imperial coercion; the latter fell apart as the League of Nations' system collapsed in the 1930s. But, human rights thinking has become more firmly established since then, so there will certainly be a push for some organized standards of treatment.

The question of the continuation of countries (and governments) raises a different set of problems. The modern system of states (since the 17C Peace of Westphalia) has been premised on territorial control. You can't be a country without some land; land which you control exclusively. Historically, this led to much more detailed thinking about, and enforcement of, boundaries which previously had been kind of squishy (in the SW Alps, one couldn't be quite sure whether one was in Italy or France). By the 20C, with much more thinking in terms of clearly defined and cartographically demarcated national jurisdictions, we have come to expect border controls (customs and immigration).

States have become increasingly jealous of their status and quite wary of the risks of relaxing the standards of being a state, lest the Catalans (or Scots or Tigrayans or Texans) try to split off. They’re not likely to be too accommodating of an idea that you can have a country without land and they’re quite sure that they’re not giving up any of theirs.

The Pacific Island countries that are most at risk range from small to tiny by any measure, so they're not likely to be able to swing very much weight in getting the big rich countries to let them continue. Even if Australia were of a mind to give 100 square miles of land to Tonga, it's not clear how "independent" the Tongans would then be. Moreover, since there are likely to be Aboriginal claims to any such chunk of land, Australia wouldn't want to forcibly oust current residents/claimants, lest it re-create the Palestinian problem in the Outback.

A set of re-glaciated Scandinavian countries would have a better chance, given their wealth, European links (and whiteness). They’re not likely to get any land since Europe is already pretty crowded. Still, one can imagine a “virtual Sweden” housed in an office complex somewhere in Berlin. With electronic connections and cultural outreach, perhaps they could preserve a sense of nationhood while sponsoring visits to the “homeland” now buried under a mile of ice? Would the Swedish diaspora continue without a “real” home and critical mass of community or would the last real remnant just be a set of “embassies” housed in each Ikea store around the world?

The internet/telecommunications revolution has empowered all sorts of communities, creating new institutions and ways people interact and feel connected. The nation-state model of global organization that has dominated the world for the past several hundred years is feeling pressure from many directions, as I have discussed previously. Perhaps the plight of Tongans or Maldivians will spur new modes of thinking along this vector, as climate change is certainly pushing us to rethink so many other aspects of how we live.


Too Early to Tell

8/20/2021

In 1971, Chinese Premier Chou En-Lai was asked for his opinion about the impact of the French Revolution on Western Civilization. He replied: “It’s too early to tell.” At least that’s the version I first heard. Given all that followed from the events of 1789, it seemed a sage and profound assessment from an urbane world statesman looking back over the previous 180 years.

It turns out that while Chou was quoted accurately (thus distinguishing the comment from the raft of apocryphal quotes too often bandied about), the originating inquiry may have been lost in translation. It seems that Chou thought the question was referring not to the Bastille etc., but to the riots and demonstrations that had paralyzed Paris in 1968; a recent event, noteworthy in its own right, but one that has resonated less deeply across the years.

The (mis-)quote is, nonetheless, too juicy to be confined to the facts of the 1971 discussion.

Indeed, the essence of the quote is at the heart of what historians do. We interpret and reinterpret (and reinterpret, and reinterpret…) the known facts about historical actors and developments. It’s called “revisionism,” and while it may connote a disregard for the “true” (i.e. original) interpretation, revisionism is really nothing more than a long-running debate about how to make sense of the past.

The French Revolution (1789 version) is itself the best exemplar of this. Debates about the relative weight of cultural, social, intellectual, economic, and political causes and effects constituted a notable percentage of French historical output for over 200 years. Ditto for assessing the causes of WWI, or the reasons for the US defeat in Vietnam (or Afghanistan), or (literally) thousands of other historical questions.

Given that History shows us that human events twist and turn in all sorts of unpredictable directions, the question of whether a particular event was beneficial or evil depends on when you are asking (among other things). Such a question about the French Revolution would have garnered different answers in 1789, 1791, 1793, 1799, 1815, 1820, 1848, 1871, and so on, at least until late in the 20C (the French love to argue!).

For a more recent example, the current death/illness toll from COVID seems appalling (and we're quite a way from getting things "under control"). That's the way it seems in the summer of 2021. Still, it's easy to imagine a scenario in which our collective COVID experience teaches us how to be better prepared for pandemics, so that the next one, instead of causing 100M deaths, only causes 10M. A History written fifty years from now might well say that COVID-19 wasn't such a bad thing after all.

I was thinking about how and when we make judgments as I was recently reading a discussion of the impact of Western Modernity on the world: the Renaissance, Scientific Revolution, Enlightenment program and all that. Lots of important discoveries and inventions: Rembrandt, flush toilets, optical perspective, electricity, representative democracy, emojis.

The mainstream, “conventional” thinking has been that this has been a good thing; enabling more people to live longer, healthier lives with greater freedom. To be sure, there has long been an undercurrent of dissent, from Pope Pius IX’s encyclical Qui pluribus to Nietzsche and a range of other spiritual and popular thinkers. Still, most of the PR for modernity has been good.

The recent spate of floods and wildfires has highlighted the environmental and global costs of our cultural religion of "progress," development, and modernity. It's not so clear how well Modernity will fare in the Histories written a hundred years from now (or whenever we recover/escape from the climate crisis).

At another level, the whole concept of "too early to tell" raises the questions of who is making the judgments and why they need to make judgments about the meaning or moral significance of historical events/developments in the first place. Of course, it's the winners who write the history, and who is a "winner" at any particular point will vary (ask the French about their fights with the Germans in 1870, 1914, and 1940). Assessments will differ by political outlook, economic status, gender, nationality, etc. I've written before about historical judgmentalism, so I won't repeat that commentary here.

If, as the 20C English historian E.H. Carr wrote, "history is a dialogue between the past and the present," then it's never "too early to tell." We are where we are, and we have to frame and understand the past from our own perspective. It's not a little hubristic to think that we are uniquely possessed of the true understanding of, e.g., the French Revolution.

The flip side is that it's always "too early to tell." Just as we look back on the assessments of those events made in the bourgeois Third French Republic at the turn of the 20C, or the differing mid-20C characterizations that reflected the influence of the French Revolution on the Russian Revolution of 1917, and criticize those historians' take on history; so, too, will 22C historians look back on our (post-fall-of-communism) early 21C interpretations and sigh. In other words, future generations will make their own judgments in light of their own concerns and intervening developments. (Those of a slightly cynical nature might perceive that all this is merely to justify a "Historians' Full Employment Act.")

In this way, History isn't all that different from Science. In 1903, the Nobel prize-winning physicist Albert Michelson said, "The more important fundamental laws and facts of physical science have all been discovered, and … the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote." He was saying, in essence, that for physics it wasn't "too early to tell." This was, of course, before we had relativity, quantum mechanics, black holes, etc. Oops.

The Complexity of Liberty

8/13/2021

Claims by anti-vaxxers that government public health mandates infringe their liberties have been a recurring feature of the 2021 pandemic story. Lock-downs, injections, masks, and social distancing have been the tools of choice of epidemiologists trying to limit the COVID carnage. I’m not going to get into the (non-) science of the debate here, nor delve into the pretty amazing moves by Govs. DeSantis and Abbott to prevent local officials from protecting local public health. Neither shall I linger over the delicious irony of such protest statements as: “Keep your government hands off my Medicare.”

Rather, my focus is on the evolving nature of liberty in the modern world.

Now, as Isaiah Berlin pointed out ("Two Concepts of Liberty," 1958), there are many meanings and shadings of the term "liberty" (and its conceptual sibling: freedom). Berlin focused on two essential aspects of the idea in his essay, distinguishing "negative" liberty, i.e., freedom from interference or constraint by others, from "positive" liberty, i.e., the capacity to act on and realize one's own purposes. These can easily be in conflict. My freedom to sleep peacefully sits juxtaposed with my neighbor's freedom to play dance music at 2 AM. Limiting her freedom will likely enhance mine. But they are not always in conflict. For example, limiting my freedom to ride my motorcycle without a helmet does not apparently enhance anyone else's freedom to ride/drive.

One can trace the development of these concepts (as Berlin does) through the usual canon of political thinkers over the past several hundred years. But it struck me, listening to the anti-vaxxers, that even without delving into that intellectual history, one can see how the nature of modern human society, and in particular our proclivity for science and analysis, has made both flavors of liberty more problematic over those same centuries.

Our (modern Western) society has become vastly more complex over the past 500 years. Global supply chains are a fine example of the relationship between, e.g., the price of tea in China and the availability of the latest iPhones in Chicago. Whether conceived in terms of transportation, communications, media, markets, or otherwise, more people (and the factors that affect their lives) are connected to my life (and the things, ideas, foods, weather that affect mine) than would have been the case 100, 300, or 600 years ago.

There is a second angle to this complexity which multiplies these (mostly phenomenological) effects. The advancement of science and analysis has radically increased the awareness of the number and nature of these causal chains. In the 16C, the weather in (what is now) Indonesia had visible effects on the price of pepper in Amsterdam. Our tools, however, for understanding (or even measuring) that weather were pretty limited then. Nor were we able to connect the presence of monsoons in the Indian Ocean with the size of peppercorns.

Now, we understand (and measure and observe in detail) the extent to which cutting down Indonesian forests affects local rainfall and local agricultural productivity. Now, we understand how rubber prices or consumer demand for palm oil drives the need to clear Indonesian forests, which affects, in turn, the local weather and agriculture. Prior to the development of germ theory in the 19C, there was no understood connection between coughing and communicable diseases. Now, we understand how viruses move from one person to another. Did the elimination of lead from house paint result in a drop in criminal activity when boys raised in 'cleaner' homes grew up unaffected by that poison?

Science has enabled us to see and understand these connections. We even have a conceptual model for such relationships, although we can't figure out the specifics. The "butterfly effect" tells us that the flapping of a single butterfly's wings in the Himalayas can have some impact on rainfall in the Andes some time later.

The pandemic and the climate crisis have both slammed us in the face with the complexities of the world. Even if we solve both, we won’t easily ‘unlearn’ the nature of those connections and their appearance in all sorts of circumstances.

This new realization is hard to digest. It is mind-numbing. And, when overwhelmed, it's unsurprising that we deny and reject, and then construct a world which is simpler and more manageable. This explains, in my view, much about anti-vaxxers and climate deniers, both on specific topics and on science in general. It hurts and they just want it to STOP!

This deeper, richer knowledge also inevitably makes liberty/freedom a smaller, more difficult claim. More difficult because complexity smashes simplistic categories (e.g., race, gender) and understanding these complexities requires brain power and attention. Smaller because, as we learn of the greater range of effects of our actions on others and theirs on us, the choices that preserve/maximize liberty are more constrained. Should I be free to raise peanuts in my backyard? Even if my downwind neighbor suffers anaphylactic shock as a result? Should my freedom to farm be constrained to enable his freedom to breathe?

Stated simply, science (i.e., increased knowledge of causes and effects) is at war with liberty. Awareness of interconnections and interdependency means we have to think more about how our actions fit into the world. And a modicum of conscience means that we will choose not to do some things we previously saw ourselves as free to do. Alternatively, society will choose to tell us not to do them as it balances our liberty with that of others.

In either case, complexity makes simple models of liberty useless. We are far from the mythic "state of nature" in which there was unlimited freedom. There is no point in blaming anyone for this, or in attacking government or science, or in blaming "socialists" for valuing community or philosophers for pointing all this out. We will just have to juggle the headache of complexity as best we can.


AI, vay!

8/6/2021

Artificial Intelligence (“AI”) seems on the cusp of changing everything. At one level, it’s not really clear how AI is different from the “computing” which has been a part of our lives for the last fifty years. At another level, however, the ability of computers to function based on multiple, deeper levels of programming feels different to us human users of their capabilities.

As with pretty much any technology, AI is simultaneously disruptive, helpful, amazing, and problematic. We have been working with word processing technology since the 1970s (I remember learning how to use a Wang system in the early 1980s). Now, we (most of us) start typing or speaking into our phones and we are prompted with the most likely completions of the words or phrases we begin. Sometimes really helpful; sometimes laughable, sometimes just a pain.

Accounting and other programs transfer data so that an increasing amount of mundane/routine work formerly done by clerks (remember them?) is now automated. As AI capabilities grow, their impact is moving up the labor market ladder. "Expert systems" (really just huge databases running through very fast processors) are proving to be better diagnosticians than most doctors. "Coding" (i.e., writing software) was once done by a small group of Silicon Valley geeks. Then it was "off-shored" to lesser-paid geeks in India. Now, computers are enhancing the ability of the geeks to write code faster using the same kind of auto-complete suggestions we know from our phone messaging. Soon, the databases will be filled with enough experience that most of that work will be done in a fully automated way.

Military drones are developing along similar lines. There is an emerging ethical debate about whether a human must be in the final decision loop about whether to take out some terrorist/undesirable, lest we too quickly fall into the sci-fi world of robots choosing to take out all humans on their road to evolutionary triumph. Most of this debate is just noise. Someone (human) had to program the software for the flying car/Quicken/terminator. There are certainly issues about the criteria to be used in deciding whether to launch a missile against some jeep convoy in western Afghanistan; but that's not a new ethical question (ask any Special Forces sniper!). There are some issues about the quality of the human programming, so that the drone is set to do the "right" thing, but that's not an ethical issue either. Still, it makes for a juicy/ominous media story, so it gets hyped up.

Whether AI will offer us new types of capabilities and actually present us with fundamentally new questions seems doubtful. In 1854, Dr. John Snow compiled information about the locations of cholera victims in London. He used the then-emerging capabilities of statistical analysis to develop a theory of what might be causing the deadly outbreak. It was an early example of applied information processing, using cutting-edge techniques and data structures; and it changed the world in important and continuing ways.

Perhaps a more interesting question is whether computing power/AI will reach the level of being able to match economic markets as processors of information. Traditionally, I expressed my interest in buying a box of cereal by going to the market and deciding whether I wished to spend $4 a box for Kellogg's or $3 for the Safeway house brand. The "market" (the generic market, not Safeway the grocer) sees my action, and Kellogg and Safeway adjust their output accordingly. Now, Facebook tells Safeway how long I linger over their ad for cereal, and Safeway can (pre-market) translate that into their plans for production down the road. Soon, someone will be able to tell that men of a certain age who look at buying a Tesla and go see Hamilton will drink more whiskey in the next month. It's no wonder that Google, Amazon, etc., are investing billions in figuring out how best to make AI work.

When you overlay the feasibility of governmental access to, and consolidation of, all this info, it's possible to imagine that the concept projected by early 20C Marxists and other Socialists will be realized. There was a big debate among global intellectuals at the time. The triumph of liberalism was premised on the belief that the market was the best mechanism for operating an economy; that the economy was too complex to be rationalized and managed by humans and politics. The Soviet process (5-year plans and all that) was designed to replace the market with intelligent, conscious decisions made by the State. Market substitution didn't work back then, but what if that failure was only because Communist theory outpaced computational realities? The Soviet Union collapsed under its "internal contradictions," and market capitalism went on its (our?) merry way. I'm sure there is a think-tank in Beijing figuring out how an economy could be operated (and controlled) by Chairman Xi, even if Marshal Stalin didn't have the petaflops of computing power at his disposal when the Socialist paradise was projected. Could an AI-driven State replace a mid-21C market mechanism?

It's not just the economy. Steven Spielberg's Minority Report (2002) used AI capabilities to foresee and prevent crimes, a field in which we are now functioning at an aggregate level, even if we can't ID specific perps in advance. Other sci-fi authors have written of future wars in which two combatants' computers face off to see who triumphs. And, of course, there's the "holodeck" on Star Trek: TNG, in which immersive gaming and entertainment is supercharged.

The later 20C saw all sorts of claims/fears that automation and computers would replace people. It's coming to pass, even if a few decades later than, and not quite in the way, some pundits forecast. It's a good bet that most of our prognostications will be off a bit as well. Perhaps an AI-based prediction algorithm will do better....


    Condemned to Repeat It --
    Musings on history, society, and the world.

    I don't actually agree with Santayana's famous quote, but this is my contribution to my version of it: "Anyone who hears Santayana's quote is condemned to repeat it."
