A Trip to the Movies

According to the lists of movies that I keep at the Internet Movie Database (IMDb), I have thus far seen 2,434 feature-film releases from 1920-2017. That number does not include such forgettable fare as the grade-B westerns, war movies, and Bowery Boys comedies that I saw on Saturdays, at two-for-a-nickel, during my pre-teen years.

I have assigned ratings (on IMDb’s 10-point scale) to 2,131 of the 2,434 films. (By the time I joined IMDb in 2001 and got around to assigning ratings, I didn’t remember 303 of the films well enough to rate them.) I have given 689 (32 percent) of the 2,131 films a rating of 8, 9, or 10. The proportion of high ratings does not indicate low standards on my part; rather, it indicates the care with which I have tried to choose films for viewing. (More about that below.)

I call the 689 highly rated films my favorites. I won’t list all of them here, but I will mention some of them — and their stars — as I assess the ups and downs (mostly downs) in the art of film-making.

I must first admit two biases that have shaped my selection of favorite movies. First, my list of films and favorites is dominated by American films starring American actors. But that dominance is merely numerical. For artistic merit and great acting, I turn to foreign films as often as possible.

A second bias is my general aversion to silent features and early talkies. Most of the directors and actors of the silent era relied on “stagy” acting to compensate for the lack of sound — a style that persisted into the early 1930s. There were exceptions, of course. Consider Charlie Chaplin, whose genius as a director and comic actor made a virtue of silence; my list of favorites from the 1920s and early 1930s includes three of Chaplin’s silent features: The Gold Rush (1925), The Circus (1928), and City Lights (1931). Perhaps a greater comic actor (and certainly a more physical one) than Chaplin was Buster Keaton, with six films on my list of favorites of the same era: Our Hospitality (1923), The Navigator (1924), Sherlock Jr. (1924), The General (1926), The Cameraman (1928), and Steamboat Bill, Jr. (1928). Harold Lloyd, in my view, ranks with Keaton for sheer laugh-out-loud physical humor. My seven Lloyd favorites from his pre-talkie oeuvre are Grandma’s Boy (1922), Dr. Jack (1922), Safety Last! (1923), Girl Shy (1924), Hot Water (1924), For Heaven’s Sake (1926), and Speedy (1928). My list of favorites includes only nine other films from the years 1920-1931, among them F.W. Murnau’s Nosferatu the Vampire (1922) and Fritz Lang’s Metropolis (1927) — the themes of which (supernatural and futuristic, respectively) enabled them to transcend the limitations of silence — and such early talkies as Whoopee! (1930) and Dracula (1931).

In summary, I can recall having seen only 51 feature films that were released in 1920-1931. Of the 51, I have rated 50, and 25 of them (50 percent) rank among my favorites. But given the relatively small number of films from 1920-1931 in my personal catalog, I will say no more here about that era. I will focus, instead, on movies released from 1932 to the present — which I consider the “modern” era of film-making.

My inventory of modern films comprises 2,383 titles, 2,081 of which I have rated, and 664 of those (32 percent) at 8, 9, or 10 on the IMDb scale. But those numbers mask vast differences in the quality of modern films, which were produced in three markedly different eras:

  • Golden Age (1932-1942) — 238 films seen, 208 rated, 117 favorites (56 percent)
  • Abysmal Years (1943-1965) — 370 films seen, 289 rated, 110 favorites (38 percent)
  • Vile Epoch (1966-present) — 1,775 films seen, 1,584 rated, 437 favorites (28 percent)

There is a so-called Golden Age of Hollywood, but it is defined by the structure of the industry, not the quality of output. What made my Golden Age golden, and why did films go from golden to abysmal to vile? Read on.

To understand what made the Golden Age golden, let’s consider what makes a great movie: a novel or engaging plot; dialogue that is fresh (and witty, if the film calls for it); strong performances (acting, singing, and/or dancing); a “mood” that draws the viewer in; excellent production values (locations, cinematography, sets, costumes, etc.); and historical or topical interest. (A great animated feature may be somewhat weaker on plot and dialogue if the animation and sound track are first-rate.) The Golden Age was golden largely because the advent of sound fostered creativity — plots could be advanced through dialogue, actors could deliver real dialogue, and singers and orchestras could deliver real music. It took a few years to realize the full potential of sound, but movies hit their stride just as the country was seeking respite from the cares of a lingering and deepening depression.

Studios vied with each other to entice movie-goers with new plots (or plots that seemed new when embellished with sound), fresh and often wickedly witty dialogue, and — perhaps most important of all — captivating performers. The generation of super-stars that came of age in the 1930s consisted mainly of handsome men and beautiful women, blessed with distinctive personalities, and equipped by their experience on the stage to deliver their lines vibrantly and with impeccable diction.

What were the great movies of the Golden Age, and who starred in them? Here’s a sample of the titles: 1932 — Grand Hotel; 1933 — Dinner at Eight, Flying Down to Rio, Morning Glory; 1934 — It Happened One Night, The Thin Man, Twentieth Century; 1935 — Mutiny on the Bounty, A Night at the Opera, David Copperfield; 1936 — Libeled Lady, Mr. Deeds Goes to Town, Show Boat; 1937 — The Awful Truth, Captains Courageous, Lost Horizon; 1938 — The Adventures of Robin Hood, Bringing Up Baby, Pygmalion; 1939 — Destry Rides Again, Gunga Din, The Hunchback of Notre Dame, The Wizard of Oz, The Women; 1940 — The Grapes of Wrath, His Girl Friday, The Philadelphia Story; 1941 — Ball of Fire, The Maltese Falcon, Suspicion; 1942 — Casablanca, The Man Who Came to Dinner, Woman of the Year.

And who starred in the greatest movies of the Golden Age? Here’s a goodly sample of the era’s superstars, a few of whom came on the scene toward the end: Jean Arthur, Fred Astaire, John Barrymore, Lionel Barrymore, Ingrid Bergman, Humphrey Bogart, James Cagney, Claudette Colbert, Ronald Colman, Gary Cooper, Joan Crawford, Bette Davis, Irene Dunne, Nelson Eddy, Errol Flynn, Joan Fontaine, Henry Fonda, Clark Gable, Cary Grant, Jean Harlow, Olivia de Havilland, Katharine Hepburn, William Holden, Leslie Howard, Allan Jones, Charles Laughton, Carole Lombard, Myrna Loy, Jeanette MacDonald, Joel McCrea, Merle Oberon, Laurence Olivier, William Powell, Ginger Rogers, Rosalind Russell, Norma Shearer, Barbara Stanwyck, James Stewart, and Spencer Tracy. There were other major stars, and many popular supporting players, but it seems that a rather small constellation of superstars commanded a disproportionate share of the leading roles in the best movies of the Golden Age.

Why did movies go into decline after 1942’s releases? World War II certainly provided an impetus for the end of the Golden Age. The war diverted resources from the production of major theatrical films; grade-A features gave way to low-budget fare. And some of the superstars of the Golden Age went off to war. (Two who remained civilians — Leslie Howard and Carole Lombard — were killed during the war.) With the resumption of full production in 1946, the surviving superstars who hadn’t retired were fading fast, though their presence still propelled many films of the Abysmal Years.

Stars come and go, however, as they have done since Shakespeare’s day. But the decline into the Abysmal Years and the Vile Epoch has deeper causes than the dimming of old stars:

  • The Golden Age had deployed all of the themes that could be used without explicit sex, graphic violence, and crude profanity — none of which became an option for American movie-makers until the mid-1960s.
  • Prejudice got significantly more play after World War II, but it’s a theme that can’t be used very often without becoming trite. And trite it has become, now that movies have become vehicles for decrying prejudice against every real or imagined “victim” group under the sun.
  • Other attempts at realism (including film noir) resulted mainly in a lot of turgid trash laden with unrealistic dialogue and shrill emoting — keynotes of the Abysmal Years.
  • Hollywood productions often sank to the level of TV, apparently in a misguided effort to compete with that medium. The use of garish Technicolor — a hallmark of the 1950s — highlighted the unnatural neatness and cleanliness of settings that should have been rustic if not squalid. Sound tracks became lavishly melodramatic and deafeningly intrusive.
  • The transition from abysmal to vile coincided with the cultural “liberation” of the mid-1960s, which saw the advent of the “f” word in mainstream films. Yes, the Vile Epoch brought more realistic plots and better acting (thanks mainly to the Brits). But none of that compensates for the anti-social rot that set in around 1966: drug-taking, drinking, and smoking are glamorous; profanity proliferates to the point of annoyance; sex is all about lust and little about love; violence is gratuitous and beyond the point of nausea; corporations and white, male Americans with money are evil; the U.S. government (when Republican-controlled) is in thrall to that evil; etc., etc., etc.

To be sure, there have been outbreaks of greatness since the Golden Age. During the Abysmal Years, for example, aging superstars appeared in such greats as Life With Father (Dunne and Powell, 1947), Key Largo (Bogart and Lionel Barrymore, 1948), Edward, My Son (Tracy, 1949), The African Queen (Bogart and Hepburn, 1951), High Noon (Cooper, 1952), Mister Roberts (Cagney, Fonda, Powell, 1955), The Old Man and the Sea (Tracy, 1958), Anatomy of a Murder (Stewart, 1959), North by Northwest (Grant, 1959), Inherit the Wind (Tracy, 1960), Long Day’s Journey into Night (Hepburn, 1962), Advise and Consent (Fonda and Laughton, 1962), The Best Man (Fonda, 1964), and Othello (Olivier, 1965). A new generation of stars appeared in such greats as The Lavender Hill Mob (Alec Guinness, 1951), Singin’ in the Rain (Gene Kelly, 1952), The Bridge on the River Kwai (Guinness, 1957), The Hustler (Paul Newman, 1961), Lawrence of Arabia (Peter O’Toole, 1962), and Doctor Zhivago (Julie Christie, 1965).

Similarly, the Vile Epoch — in spite of its seaminess — has yielded many excellent films and new stars. Some of the best films (and their stars) are A Man for All Seasons (Paul Scofield, 1966), Midnight Cowboy (Dustin Hoffman, 1969), MASH (Donald Sutherland, Elliott Gould, 1970), The Godfather (Marlon Brando, Al Pacino, 1972), Papillon (Hoffman, Steve McQueen, 1973), One Flew Over the Cuckoo’s Nest (Jack Nicholson, 1975), Star Wars and its sequels (Harrison Ford, 1977, 1980, 1983), The Great Santini (Robert Duvall, 1979), The Postman Always Rings Twice (Nicholson, Jessica Lange, 1981), The Year of Living Dangerously (Sigourney Weaver, Mel Gibson, 1982), Tender Mercies (Duvall, 1983), A Room with a View (Helena Bonham Carter, Daniel Day-Lewis, 1985), Mona Lisa (Bob Hoskins, 1986), Fatal Attraction (Glenn Close, 1987), 84 Charing Cross Road (Anne Bancroft, Anthony Hopkins, Judi Dench, 1987), Dangerous Liaisons (John Malkovich, Michelle Pfeiffer, 1988), Henry V (Kenneth Branagh, 1989), Reversal of Fortune (Close and Jeremy Irons, 1990), Dead Again (Branagh, Emma Thompson, 1991), The Crying Game (1992), Much Ado About Nothing (Branagh, Thompson, Keanu Reeves, Denzel Washington, 1993), Trois Couleurs: Bleu (Juliette Binoche, 1993), Richard III (Ian McKellen, Annette Bening, 1995), Beautiful Girls (Natalie Portman, 1996), Comedian Harmonists (1997), Tango (1998), Girl, Interrupted (Winona Ryder, 1999), High Fidelity (John Cusack, 2000), Iris (Dench, 2001), Chicago (Renée Zellweger, Catherine Zeta-Jones, Richard Gere, 2002), Master and Commander: The Far Side of the World (Russell Crowe, 2003), Finding Neverland (Johnny Depp, Kate Winslet, 2004), Capote (Philip Seymour Hoffman, 2005), The Chronicles of Narnia: The Lion, the Witch, and the Wardrobe (2005), The Painted Veil (Edward Norton, Naomi Watts, 2006), Breach (Chris Cooper, 2007), The Curious Case of Benjamin Button (Brad Pitt, 2008), The King’s Speech (Colin Firth, 2010), Saving Mr. Banks (Thompson, Tom Hanks, 2013), and Brooklyn (Saoirse Ronan, 2015).

But every excellent film produced during the Abysmal Years and Vile Epoch has been surrounded by outpourings of dreck, schlock, and bile. The generally tepid effusions of the Abysmal Years were succeeded by the excesses of the Vile Epoch: films that feature noise, violence, sex, and drugs for the sake of noise, violence, sex, and drugs; movies whose only “virtue” is their appeal to such undiscerning groups as teeny-boppers, wannabe hoodlums, resentful minorities, and reflexive leftists; movies filled with “bathroom” and other varieties of “humor” so low as to make the Keystone Cops seem paragons of sophisticated wit.

In sum, movies have become progressively worse since the end of the Golden Age — and I have the numbers to prove it.

First, I should establish that I am picky about the films I choose to watch:

[Graph omitted.]

Note: These averages are for films designated by IMDb as “English-language”: about 78,000 in all.

The next graph (also omitted here) illustrates three points:

  • I watched at least as many films of the 1930s as of the 1940s. So I didn’t rate films of the 1930s more highly than those of the 1940s merely because I was more selective in choosing films of the 1930s. Further, there is a steady downward trend in my ratings, which began long before the “bulge” in my viewing of movies released from the mid-1980s to about 2010. The downward trend continued despite the relative paucity of titles released after 2010. (It is plausible, however, that the late uptick is due to heightened selectivity in choosing recent releases.)
  • IMDb users, on the whole, have overrated the films of the early 1940s to mid-1980s and the mid-1990s to the present. The ratings for films released since the mid-1990s — when IMDb came on the scene — undoubtedly reflect the dominance of younger viewers who “grew up” with IMDb, who prefer novelty to quality, and who have little familiarity with earlier films. (I have rated almost 900 films that were released in 1996-2015, but almost 1,200 films from 1932-1995.)
  • My ratings, based on long experience and exacting standards, indicate that movies not only are not better than ever, but are generally getting worse as the years roll on.

Another indication that movies are generally getting worse is the increasing frequency of what I call unwatchable films. These are films that I watch just long enough to evaluate as trash, which earns them my rating of 1 (the lowest allowed by IMDb). The trend is obvious:

[Graph omitted.]

Will the Vile Epoch End? I’d bet against it, but I’ll keep watching nonetheless. There’s an occasional nugget of gold in the sea of mud.

__________

* This is my interpretation of IMDb’s 10-point scale:

1 = So bad that I quit watching after a few minutes.

2 = I watched the whole thing, but wish that I hadn’t.

3 = Barely bearable; perhaps one small, redeeming feature (e.g., a cast member).

4 = Just a shade better than a 3 — a “gut feel” grade.

5 = A so-so effort; on a par with typical made-for-TV fare.

6 = Good, but not worth recommending to anyone else; perhaps because of a weak cast, too-predictable plot, cop-out ending, etc.

7 = Enjoyable and without serious flaws, but once was enough.

8 = Superior on at least three of the following dimensions: mood, plot, dialogue, music (if applicable), dancing (if applicable), quality of performances, production values, and historical or topical interest; worth seeing twice but not a slam-dunk great film.

9 = Superior on several of the above dimensions and close to perfection; worth seeing at least twice.

10 = An exemplar of its type; can be enjoyed many times.

The “Probability” That Something Will Happen

In which the author addresses “probability” from various angles in a probably futile attempt to put an end to misuse of the concept.

A SINGLE EVENT DOESN’T HAVE A PROBABILITY

A believer in single-event probabilities takes the view that a single flip of a coin or roll of a die has a probability. I do not. A probability represents the frequency with which an outcome occurs over the very long run, and it is only an average that conceals random variations.

The outcome of a single coin flip can’t be reduced to a percentage or probability. It can only be described in terms of its discrete, mutually exclusive possibilities: heads (H) or tails (T). The outcome of a single roll of a die or pair of dice can only be described in terms of the number of points that may come up, 1 through 6 or 2 through 12.

Yes, the expected frequencies of H, T, and various point totals can be computed by simple mathematical operations. But those are only expected frequencies. They say nothing about the next coin flip or dice roll, nor do they more than approximate the actual frequencies that will occur over the next 100, 1,000, or 10,000 such events.

Of what value is it to know that the probability of H is 0.5 when H fails to occur in 11 consecutive flips of a fair coin? Of what value is it to know that the probability of rolling a 7 is 0.167 — meaning that 7 comes up once every 6 rolls, on average — when 7 may not appear for 56 consecutive rolls? These examples are drawn from simulations of 10,000 coin flips and 1,000 dice rolls. They are simulations that I ran once, not simulations that I cherry-picked from many runs. (The Excel file is at https://drive.google.com/open?id=1FABVTiB_qOe-WqMQkiGFj2f70gSu6a82. Coin flips are at the first tab, dice rolls are at the second tab.)
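
For readers who would rather not open the spreadsheet, here is a minimal sketch of the same kind of exercise, written in Python rather than Excel (the frequencies and streak lengths will, of course, vary from run to run):

    import random

    def longest_drought(outcomes, target):
        # Length of the longest run of consecutive outcomes that miss the target.
        longest = current = 0
        for outcome in outcomes:
            current = 0 if outcome == target else current + 1
            longest = max(longest, current)
        return longest

    flips = [random.choice("HT") for _ in range(10_000)]
    rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(1_000)]

    print("H frequency:", flips.count("H") / len(flips))          # near 0.5, rarely exactly 0.5
    print("longest run without H:", longest_drought(flips, "H"))
    print("7 frequency:", rolls.count(7) / len(rolls))            # near 0.167, rarely exactly
    print("longest run without a 7:", longest_drought(rolls, 7))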

Let’s take another example, one that is more interesting and has generated much controversy over the years. It’s the Monty Hall problem,

a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let’s Make a Deal and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975…. It became famous as a question from a reader’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990 … :

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Vos Savant’s response was that the contestant should switch to the other door…. Under the standard assumptions, contestants who switch have a 2/3 chance of winning the car, while contestants who stick to their initial choice have only a 1/3 chance.

Vos Savant’s answer is correct, but only if the contestant is allowed to play an unlimited number of games. A player who adopts a strategy of “switch” in every game will, in the long run, win about 2/3 of the time (explanation here). That is, the player has a better chance of winning if he chooses “switch” rather than “stay”.

Read the preceding paragraph carefully and you will spot the logical defect that underlies the belief in single-event probabilities: The long-run winning strategy (“switch”) is transformed into a “better chance” to win a particular game. What does that mean? How does an average frequency of 2/3 improve one’s chances of winning a particular game? It doesn’t. Game results are utterly random; that is, the average frequency of 2/3 has no bearing on the outcome of a single game.
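
A simulation makes the distinction concrete. Here is a minimal sketch (Python, assuming the standard rules, in which the host always opens a goat door). Over many games the “switch” strategy wins about 2/3 of the time; any single game is simply won or lost:

    import random

    def play_switch():
        # Returns 1 if the "switch" strategy wins the car in one game, else 0.
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # The contestant switches to the one remaining door.
        pick = next(d for d in range(3) if d != pick and d != opened)
        return 1 if pick == car else 0

    games = 100_000
    print(sum(play_switch() for _ in range(games)) / games)  # about 0.667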

I’ll try to drive the point home by returning to the coin-flip game, with money thrown into the mix. A $1 bet on H means a gain of $1 if H turns up, and a loss of $1 if T turns up. The expected value of the bet — if repeated over a very large number of trials — is zero. The bettor expects to win and lose the same number of times, and to walk away no richer or poorer than when he started. And over a very large number of games, the bettor will walk away approximately (but not necessarily exactly) neither richer nor poorer than when he started. How many games? In the simulation of 10,000 games mentioned earlier, H occurred 50.6 percent of the time. A very large number of games is probably at least 100,000.

Let us say, for the sake of argument, that a bettor has played 100,000 coin-flip games at $1 a game and come out exactly even. What does that mean for the play of the next game? Does it have an expected value of zero?

To see why the answer is “no”, let’s make it interesting and say that the bet on the next game — the next coin flip — is $10,000. The size of the bet should wonderfully concentrate the bettor’s mind. He should now see the situation for what it really is: There are two possible outcomes, and only one of them will be realized. An average of the two outcomes is meaningless. The single coin flip doesn’t have a “probability” of 0.5 H and 0.5 T and an “expected payoff” of zero. The coin will come up either H or T, and the bettor will either lose $10,000 or win $10,000.

To repeat: The outcome of a single coin flip doesn’t have an expected value for the bettor. It has two possible values, and the bettor must decide whether he is willing to lose $10,000 on the single flip of a coin.

By the same token (or coin), the outcome of a single roll of a pair of dice doesn’t have a 1-in-6 probability of coming up 7. It has 36 possible outcomes and 11 possible point totals, and the bettor must decide how much he is willing to lose if he puts his money on the wrong combination or outcome.
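
The 36 outcomes and 11 point totals are easy to enumerate. Here is a short check (a Python sketch again) of the expected frequencies mentioned above:

    from collections import Counter
    from itertools import product

    totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))
    print(sorted(totals.items()))    # the 11 point totals, 2 through 12
    print(totals[7], "of 36")        # 6 of the 36 outcomes yield a 7
    print(round(totals[7] / 36, 3))  # 0.167 -- a long-run average, not a fact about one roll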

In summary, it is a logical fallacy to ascribe a probability to a single event. A probability represents the observed or computed average value of a very large number of like events. A single event cannot possess that average value. A single event has a finite number of discrete and mutually exclusive possible outcomes. Those outcomes will not “average out” in that single event. Only one of them will obtain, just as Schrödinger’s cat turns out to be either alive or dead when the box is opened, not some average of the two.

To say or suggest that the outcomes will average out — which is what a probability implies — is tantamount to saying that Jack Sprat and his wife were neither skinny nor fat because their body-mass indices averaged to a normal value. It is tantamount to saying that one can’t drown by walking across a pond with an average depth of 1 foot, when that average conceals the existence of a 100-foot-deep hole.

It should go without saying that a specific event that might occur — rain tomorrow, for example — doesn’t have a probability.

WHAT ABOUT THE PROBABILITY OF PRECIPITATION?

Weather forecasters (meteorologists) are constantly saying things like “there’s an 80-percent probability of precipitation (PoP) in _____ tomorrow”. What do such statements mean? Not much:

It is not surprising that this issue is difficult for the general public, given that it is debated even within the scientific community. Some propose a “frequentist” interpretation: there will be at least a minimum amount of rain on 80% of days with weather conditions like they are today. Although preferred by many scientists, this explanation may be particularly difficult for the general public to grasp because it requires regarding tomorrow as a class of events, a group of potential tomorrows. From the perspective of the forecast user, however, tomorrow will happen only once. A perhaps less abstract interpretation is that PoP reflects the degree of confidence that the forecaster has that it will rain. In other words, an 80% chance of rain means that the forecaster strongly believes that there will be at least a minimum amount of rain tomorrow. The problem, from the perspective of the general public, is that when PoP is forecasted, none of these interpretations is specified.

There are clearly some interpretations that are not correct. The percentage expressed in PoP neither refers directly to the percent of area over which precipitation will fall nor does it refer directly to the percent of time precipitation will be observed on the forecast day. Although both interpretations are clearly wrong, there is evidence that the general public holds them to varying degrees. Such misunderstandings are critical because they may affect the decisions that people make. If people misinterpret the forecast as percent time or percent area, they may be more inclined to take precautionary action than are those who have the correct probabilistic interpretation, because they think that it will rain somewhere or some time tomorrow. The negative impact of such misunderstandings on decision making, both in terms of unnecessary precautions as well as erosion in user trust, could well eliminate any potential benefit of adding uncertainty information to the forecast. [Susan Joslyn, Limor Nadav-Greenberg, and Rebecca M. Nichols, “Probability of Precipitation: Assessment and Enhancement of End-User Understanding”, Bulletin of the American Meteorological Society, February 2009, citations omitted]

The frequentist interpretation is close to being correct, but it still involves a great deal of guesswork. Rainfall in a particular location is influenced by many variables (e.g., atmospheric pressure, direction and rate of change of atmospheric pressure, ambient temperature, local terrain, presence or absence of bodies of water, vegetation, moisture content of the atmosphere, height of clouds above the terrain, depth of cloud cover). It is nigh unto impossible to say that today’s (or tomorrow’s or next week’s) weather conditions are like (or will be like) those that in the past resulted in rainfall in a particular location 80 percent of the time.

That leaves the Bayesian interpretation, in which the forecaster combines some facts (e.g., the presence or absence of a low-pressure system in or toward the area, the presence or absence of a flow of water vapor in or toward the area) with what he has observed in the past to arrive at a guess about future weather. He then attaches a probability to his guess to indicate the strength of his confidence in it.

Thus:

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

And thus:

The Bayesian approach to learning is based on the subjective interpretation of probability. The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p.
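
By way of illustration (and not as a description of any forecaster’s actual procedure), here is a minimal Python sketch of the updating that the quoted passage describes. It assumes a uniform Beta(1, 1) prior on the unknown proportion p and a hypothetical record of 8 rainy days out of 10 similar past days:

    # Bayesian updating of an opinion about a proportion p (hypothetical numbers).
    a, b = 1.0, 1.0            # Beta(1, 1): a uniform prior over p
    rainy, dry = 8, 2          # assumed record of similar past days
    a, b = a + rainy, b + dry  # posterior is Beta(9, 3)

    posterior_mean = a / (a + b)
    print(posterior_mean)      # 0.75 -- a degree of belief, not a long-run frequency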

It is impossible to attach a probability — as properly defined in the first part of this article — to something that hasn’t happened, and may not happen. So when you read or hear a statement like “the probability of rain tomorrow is 80 percent”, you should mentally translate it into language like this:

X guesses that Y will (or will not) happen at time Z, and the “probability” that he attaches to his guess indicates his degree of confidence in it.

The guess may be well informed by systematic observation of relevant events, but it remains a guess, as most Americans have learned and relearned over the years when rain has failed to materialize or has spoiled an outdoor event that was supposed to be rain-free.

BUT AREN’T SOME THINGS MORE LIKELY TO HAPPEN THAN OTHERS?

Of course. But only one thing will happen at a given time and place.

If a person walks across a shooting range where live ammunition is being used, he is more likely to be killed than if he walks across the same patch of ground when no one is shooting. And a clever analyst could concoct a probability of a person’s being shot by writing an equation that includes such variables as his size, the speed with which he walks, the number of shooters, their rate of fire, and the distance across the shooting range.

What would the probability estimate mean? It would mean that if a very large number of persons walked across the shooting range under identical conditions, approximately S percent of them would be shot. But the clever analyst cannot specify which of the walkers would be among the S percent.

Here’s another way to look at it. One person wearing head-to-toe bullet-proof armor could walk across the range a large number of times and expect to be hit by a bullet on S percent of his crossings. But the hardy soul wouldn’t know on which of the crossings he would be hit.

Suppose the hardy soul became a foolhardy one and made a bet on his chances of crossing the range without being hit. Further, suppose that S is estimated to be 0.2; that is, 20 percent of a string of walkers would be hit, or a single (bullet-proof) walker would be hit on 20 percent of his crossings. Knowing the value of S, the foolhardy fellow offers to pay out $1 million if he crosses the range unscathed — one time — and to claim $4 million (for himself or his estate) if he is shot. That’s a fair bet, isn’t it?

No, it isn’t. This situation is exactly analogous to the $10,000 bet on a single coin flip, discussed above. But I will dissect this one in a different way, to the same end.

The bet should be understood for what it is: an either-or proposition. The foolhardy walker will either lose $1 million or win $4 million. The bettor (or bettors) who take the other side of the bet will either win $1 million or lose $4 million.

As anyone with elementary reading and reasoning skills should be able to tell, those possible outcomes are not the same as the outcome that would obtain (approximately) if the foolhardy fellow could walk across the shooting range 1,000 times. If he could, he would come very close to breaking even, as would those who bet against him.
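
To spell out the arithmetic behind “breaking even”, using the numbers assumed above: the walker’s average result per crossing is

0.2 × $4,000,000 − 0.8 × $1,000,000 = $0.

That zero describes a long string of crossings; it says nothing about the result of the one crossing actually at stake.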

To put it as simply as possible:

When an event has more than one possible outcome, a single trial cannot replicate the average outcome of a large number of trials (replications of the event).

It follows that the average outcome of a large number of trials — the probability of each possible outcome — cannot occur in a single trial.

It is therefore meaningless to ascribe a probability to any possible outcome of a single trial.

A Not-So-Stealthy Revolution

Barack Hussein Obama’s promise to fundamentally transform America was, it seems, only a promise to do more damage to the economy in the usual way of Democrats. But BHO and his henchpersons nevertheless had a big hand in the cultural transformation of the country and the subversion of the rule of law. (You can read about some of it at the links found here.)

BHO is merely the most prominent acolyte of the revolutionaries who came of age in the 1960s. What happened then? A lefty of my acquaintance, who also came of age in the 1960s, claims that the riotous, anti-establishment protests of the 1960s signified that “people began thinking for themselves”. That isn’t literally true, of course. People have always thought for themselves, but usually with more deliberation and consideration — and less emotion — than the mode of “thought” that became common in the 1960s: the orchestrated tantrum.

What my acquaintance meant is that people — young ones, especially — became more outspoken than their predecessors about their dislike of social norms and government policies. But they had no difficulty, then and later, with the imposition of norms and edicts agreeable to them. Their successors on the campuses of today — students, faculty, and administrators — carry on the tradition of dressing “correctly”, reacting with violent hostility to persons and ideas that they are expected to hate, and supporting draconian punishments for infractions of their norms and edicts.

So what really happened in the 1960s, and continues to this day? I will begin with the underlying causes.

The first cause is a failure of nerve by many members of the “greatest generation”, who taught their children (the “flower children” of the 1960s) that instant gratification was theirs to be had for a tantrum. And once the “greatest” caved in to the tantrum-throwers, it became harder for their successors in positions of influence to hold the line — thanks to precedent and the new norm of instant gratification. And, of course, many of the successors came from the ranks of tantrum-throwers, and went on to shape the minds of legions of “educators”, lawyers, politicians, bureaucrats, and (sigh!) businesspersons.

The second cause is captured in the phrase “spoiled children of capitalism”. Before the onset of the welfare state in the 1930s, there were two ways to survive: work hard or accept whatever charity came your way. And there was only one way for most persons to thrive: work hard. That all changed after World War II, when power-lusting politicians sold an all-too-willing-to-believe electorate a false and dangerous bill of goods, namely, that government is the source of prosperity. It is not, and never has been.

The causes coalesced in the 1960s. Specifically, I mark 1963 as the Year Zero. If, like me, you were an adult when John F. Kennedy was assassinated, you may think of his death as a watershed moment in American history. I say this not because I’m an admirer of Kennedy the man (I am not), but because American history seemed to turn a corner when Kennedy was murdered. To take the metaphor further, the corner marked the juncture of a sunny, tree-lined street (America from the end of World War II to November 22, 1963) and a dingy, littered street (America since November 22, 1963).

Changing the metaphor, I acknowledge that the first 18 years after V-J Day were by no means halcyon, but they were the spring that followed the long, harsh winter of the Great Depression and World War II. Yes, there was the Korean War, but that failure of political resolve was only a rehearsal for later debacles. McCarthyism, a political war waged (however clumsily) on America’s actual enemies, was benign compared with the war on civil society that began in the 1960s and continues to this day. The threat of nuclear annihilation, which those of you who were schoolchildren of the 1950s will remember well, had begun to subside with the advent of JFK’s military policy of flexible response, and seemed to evaporate with the resolution of the Cuban Missile Crisis. And for all of his personal faults, JFK was a paragon of grace, wit, and charm — a movie-star president — compared with his many successors, with the possible exception of Ronald Reagan, who had been a real movie star.

What follows is an impression of America since November 22, 1963, when spring became a long, hot summer, followed by a dismal autumn and another long, harsh winter — not of deprivation, and perhaps not of war, but of rancor and repression.

This petite histoire begins with the Vietnam War and its disastrous mishandling by LBJ, its betrayal by the media, and its spawning of the politics of noise. “Protests” in public spaces and on campuses are a main feature of the politics of noise. In the new age of instant and sympathetic media attention to “protests”, civil and university authorities often refuse to enforce order. The media portray obstructive and destructive disorder as “free speech”. Thus do “protestors” learn that they can, with impunity, inconvenience and cow the masses who simply want to get on with their lives and work.

Whether “protestors” learned from rioters, or vice versa, they learned the same lesson. Authorities, in the age of Dr. Spock, lack the guts to use force, as necessary, to restore civil order. (LBJ’s decision to escalate gradually in Vietnam — “signaling” to Hanoi — instead of waging all-out war was of a piece with the “understanding” treatment of demonstrators and rioters.) Rioters learned another lesson — if a riot follows the arrest, beating, or death of a black person, it’s a “protest” against something (usually white-racist oppression, regardless of the facts), not wanton mayhem. After a lull of 21 years, urban riots resumed in 1964, and continue to this day.

LBJ’s “Great Society” marked the resurgence of FDR’s New Deal — with a vengeance — and the beginning of a long decline of America’s economic vitality. The combination of the Great Society (and its later extensions, such as Medicare Part D and Obamacare) with the rampant growth of regulatory activity has cut the rate of economic growth from 5 percent to 2 percent. The entrepreneurial spirit has been crushed; dependency has been encouraged and rewarded; pension giveaways have bankrupted public treasuries across the land. America since 1963 has been visited by a perfect storm of economic destruction that seems to have been designed by America’s enemies.

The Civil Rights Act of 1964 unnecessarily crushed property rights, along with freedom of association. To what end? So that a violent, dependent, Democrat-voting underclass could arise from the Great Society? So that future generations of privilege-seekers could cry “discrimination” if anyone dares to denigrate their “lifestyles”? There was a time when immigrants and other persons who seemed “different” had the good sense to strive for success and acceptance as good neighbors, employees, and merchants. But the Civil Rights Act of 1964 and its various offspring — State and local as well as federal — are meant to short-circuit that striving and to force acceptance, whether or not a person has earned it. The vast, silent majority is caught between empowered privilege-seekers and powerful privilege-granters. The privilege-seekers and privilege-granters are abetted by dupes who have, as usual, succumbed to the people’s romance — the belief that government represents society.

Presidents, above all, like to think that they represent society. What they represent, of course, are their own biases and the interests to which they are beholden. Truman, Ike, and JFK were imperfect presidential specimens, but they are shining idols by contrast with most of their successors. The downhill slide from Vietnam and the Great Society to Obamacare and lawlessness on immigration has been punctuated by many shameful episodes; for example:

  • LBJ — the botched war in Vietnam, repudiation of property rights and freedom of association (the Civil Rights Act)
  • Nixon — price controls, Watergate
  • Carter — dispiriting leadership and fecklessness in the Iran hostage crisis
  • Reagan — bugout from Lebanon, rescue of Social Security
  • Bush I — failure to oust Saddam when it could have been done easily, the broken promise about taxes
  • Clinton — bugout from Somalia, push for an early version of Obamacare, budget-balancing at the cost of defense, and perjury
  • Bush II — No Child Left Behind Act, Medicare Part D, the initial mishandling of Iraq, and Wall Street bailouts
  • Obama — stimulus spending, Obamacare, reversal of Bush II’s eventual success in Iraq, naive backing for the “Arab spring,” acquiescence to Iran’s nuclear ambitions, unwillingness to acknowledge or do anything about the expansionist aims of Russia and China, neglect or repudiation of traditional allies (especially Israel), and refusal to take care that the immigration laws are executed faithfully.

Only Reagan’s defense buildup and its result — victory in the Cold War — stand out as a great accomplishment. But the victory was squandered: The “peace dividend” should have been peace through continued strength, not unpreparedness for the post-9/11 wars and the resurgence of Russia and China.

The war on defense has been accompanied by a war on science. The party that proclaims itself the party of science is anything but that. It is the party of superstitious, Luddite anti-science. Witness the embrace of extreme environmentalism, the arrogance of proclamations that AGW is “settled science”, unjustified fear of genetically modified foodstuffs, the implausible doctrine that race is nothing but a social construct, and on and on.

With respect to the nation’s moral well-being, the most destructive war of all has been the culture war, which assuredly began in the 1960s. Almost overnight, it seems, the nation was catapulted from the land of Ozzie and Harriet, Father Knows Best, and Leave It to Beaver to the land of the free-/filthy-speech movement, Altamont, Woodstock, Hair, and the unspeakably loud, vulgar, and violent offerings that are now plastered all over the airwaves, the internet, theater screens, and “entertainment” venues.

Adherents of the ascendant culture esteem protest for its own sake, and have stock explanations for all perceived wrongs (whether or not they are wrongs): racism, sexism, homophobia, Islamophobia, hate, white privilege, inequality (of any kind), Wall Street, climate change, Zionism, and so on.

Then there is the campaign to curtail freedom of speech. The purported beneficiaries of the campaign are the gender-confused and the easily offended (thus “microaggressions” and “trigger warnings”). The true beneficiaries are leftists. Free speech is all right if it’s acceptable to the left. Otherwise, it’s “hate speech”, and must be stamped out. This is McCarthyism on steroids. McCarthy, at least, was pursuing actual enemies of liberty; today’s leftists are the enemies of liberty.

There’s a lot more, unfortunately. The organs of the state have been enlisted in an unrelenting campaign against civilizing social norms. We now have not just easy divorce, subsidized illegitimacy, and legions of non-mothering mothers, but also abortion, concerted (and deluded) efforts to defeminize females and to neuter or feminize males, forced association (with accompanying destruction of property and employment rights), suppression of religion, absolution of pornography, and the encouragement of “alternative lifestyles” that feature disease, promiscuity, and familial instability. The state, of course, doesn’t act of its own volition. It acts at the behest of special interests — interests with a “cultural” agenda…. They are bent on the eradication of civil society — nothing less — in favor of a state-directed Rousseauvian dystopia from which morality and liberty will have vanished, except in Orwellian doublespeak.

If there are unifying themes in this petite histoire, they are the death of common sense and the rising tide of moral vacuity — thus the epigrams at the top of the post. The history of the United States since 1963 supports the proposition that the nation is indeed going to hell in a handbasket.

In fact, the speed at which it is going to hell seems to have accelerated since the Charleston church shooting in 2015: a not-so-stealthy revolution (e.g., this) piggy-backing on mass hysteria. Here’s the game plan:

  • Focus on racism — mainly against blacks, but also against Muslims and Latinos. (“Racism” covers a lot of ground these days.)
  • Throw in sexism and gender bias (i.e., bias against gender-confused persons).
  • Pin it all on conservatives.

If a left-wing Democrat (is there any other kind now?) returns to the White House and an aggressive left-wing majority controls Congress — both quite thinkable, given the fickleness of the electorate — freedom of speech, freedom of association, and property rights will become not-so-distant memories. “Affirmative action” will be enforced with unprecedented ferocity. The nation will become vulnerable to foreign enemies while billions of dollars are wasted on the hoax of catastrophic anthropogenic global warming and “social services” for the indolent. The economy, already buckling under the weight of statism, will teeter on the brink of collapse as the regulatory regime goes into high gear and entrepreneurship is all but extinguished by taxation and regulation.

All of that will be secured by courts dominated by left-wing judges — from here to eternity.

And most of the affluent white enablers and dupes of the revolution will come to rue their actions. But they won’t be free to say so.

Thus will liberty — and prosperity — die in America. Unless … the vast, squishy “center” takes heart from Trump’s efforts to restore prosperity (and a semblance of constitutional governance) and votes against a left-wing resurgence. The next big test of the center’s mood will occur on November 6, 2018.

The Folly of Pacifism

Winston Churchill said, “An appeaser is one who feeds the crocodile, hoping that it will eat him last.” I say that a person who promotes pacifism as state policy is one who offers himself and his fellow citizens as crocodile food.

Bryan Caplan, an irritating twit who professes economics at George Mason University, is an outspoken pacifist. He is also an outspoken advocate of open borders.

Caplan, like Linus of Peanuts, loves mankind; it’s people he can’t stand. In fact, his love of mankind isn’t love at all, but rather a kind of utilitarianism in which the “good of all” somehow outweighs the specific (though by no means limited) harms caused by lying down at an enemy’s feet or enabling illegal immigrants to feed at the public trough.

As Gregory Cochran puts it in the first installment of his review of Caplan’s The Case Against Education,

I don’t like Caplan. I think he doesn’t understand – can’t understand – human nature, and although that sometimes confers a different and interesting perspective, it’s not a royal road to truth. Nor would I want to share a foxhole with him: I don’t trust him.

That’s it, in a nutshell. Caplan’s pacifism reflects his untrustworthiness. He is a selective anti-tribalist:

I identify with my nuclear family, with my friends, and with a bunch of ideas.  I neither need nor want any broader identity.  I was born in America to a Democratic Catholic mother and a Republican Jewish father, but none of these facts define me.  When Americans, Democrats, Republicans, Catholics, and Jews commit misdeeds – as they regularly do – I feel no shame and offer no excuses.  Why?  Because I’m not with them.

Hollow words from a man who, in large part, owes his freedom and comfortable life to the armed forces and police of the country that he disdains. And — more fundamentally — to the mostly peaceful and productive citizens in whose midst he lives, and whose taxes support the armed forces and police.

Caplan is a man out of place. His attitude toward his country would be justified if he lived in the Soviet Union, Communist China, North Korea, Cuba, or any number of other nation-states past and present. His family, friends, and “bunch of ideas” will be of little help to him when, say, Kim Jong-un (or his successor) lobs an ICBM in the vicinity of Washington, D.C., which is uncomfortably close to Caplan’s residence and workplace.

In his many writings on pacifism, Caplan has pooh-poohed the idea that “if you want peace, prepare for war”:

This claim is obviously overstated.  Is North Korea really pursuing the smart path to peace by keeping almost 5% of its population on active military duty?  How about Hitler’s rearmament?  Was the Soviet Union preparing for peace by spending 15-20% of its GDP on the Red Army?

Note the weasel-word, “overstated”, which gives Caplan room to backtrack in the face of evidence that preparedness for war can foster peace by deterring an enemy. (The defense buildup in the 1980s is arguably such a case, in which the Soviet Union was not only deterred but also brought to its knees.) Weasel-wording is typical of Caplan’s method of argumentation. He is harder to pin down than Jell-O.

In any event, Caplan’s pronouncement only attests to the fact that there are aggressive people and regimes out there, and that a non-aggressor is naive to believe that those people and regimes will not attack him if he is not armed against them.

The wisdom of preparedness is nowhere better illustrated than in the world of the internet, where every innocent user is a target for the twisted and vicious purveyors of malware. Think of the millions of bystanders (myself included) whose sensitive personal information has been scooped up by breaches of massive databases. Internet predators differ from armed ones only in their choice of targets and weapons, not in their essential disregard for the lives and property of others.

Interestingly, although Caplan foolishly decries preparedness, he isn’t against retaliation (which seems a strange position for a pacifist):

[D]oesn’t pacifism contradict the libertarian principle that people have a right to use retaliatory force?  No. I’m all for revenge against individual criminals.  My claim is that in practice, it is nearly impossible to wage war justly, i.e., without trampling on the rights of the innocent.

Why is it “nearly impossible to wage war justly”? Caplan puts it this way:

1. The immediate costs of war are clearly awful.  Most wars lead to massive loss of life and wealth on at least one side.  If you use a standard value of life of $5M, every 200,000 deaths is equivalent to a trillion dollars of damage.

2. The long-run benefits of war are highly uncertain.  Some wars – most obviously the Napoleonic Wars and World War II – at least arguably deserve credit for decades of subsequent peace.  But many other wars – like the French Revolution and World War I – just sowed the seeds for new and greater horrors.  You could say, “Fine, let’s only fight wars with big long-run benefits.”  In practice, however, it’s very difficult to predict a war’s long-run consequences.  One of the great lessons of Tetlock’s Expert Political Judgment is that foreign policy experts are much more certain of their predictions than they have any right to be.

3. For a war to be morally justified, its long-run benefits have to be substantially larger than its short-run costs.  I call this “the principle of mild deontology.”  Almost everyone thinks it’s wrong to murder a random person and use his organs to save the lives of five other people.  For a war to be morally justified, then, its (innocent lives saved/innocent lives lost) ratio would have to exceed 5:1.  (I personally think that a much higher ratio is morally required, but I don’t need that assumption to make my case).

It would seem that Caplan is not entirely opposed to war — as long as the ratio of lives saved to lives lost is acceptably high. But Caplan gets to choose the number of persons who may die for the sake of those who may thus live. He wears his God-like omniscience with such modesty.

Caplan’s soul-accountancy implies a social-welfare function, wherein A’s death cancels B’s survival. I wonder if Caplan would feel the same way if A were Osama bin Laden (before 9/11) and B were Bryan Caplan or one of his family members or friends? He would feel the same way if he were a true pacifist. But he is evidently not one. His pacifism is selective, and his arguments for it are slippery.

What Caplan wants, I suspect, is the best of both worlds: freedom and prosperity for himself (and family members and friends) without the presence of police and armed forces, and the messy (but unavoidable) business of using them. Using them is an imperfect business; mistakes are sometimes made. It is the mistakes that Caplan (and his ilk) cringe at, because they indulge in the nirvana fallacy. In this instance, it is a belief that there is a more-perfect world to be had if only “we” would forgo violence. Which gets us back to Caplan’s unwitting admission that there are people out there who will do bad things even if they aren’t provoked.

National defense, like anything less than wide-open borders, violates another of Caplan’s pernicious principles. He seems to believe that the tendency of geographically proximate groups to band together in self-defense is a kind of psychological defect. He refers to it as “group-serving bias”.

That’s just a pejorative term which happens to encompass mutual self-defense. And who better to help you defend yourself than the people with whom you share space, be it a neighborhood, a city-state, a principality, or even a vast nation? As a member of one or the other, you may be targeted for harm by outsiders who wish to seize your land and control your wealth, or who simply dislike your way of life, even if it does them no harm.

Would it be “group-serving bias” if Caplan were to provide for the defense of his family members (and even some friends) by arming them if they happened to live in a high-crime neighborhood? If he didn’t provide for their defense, he would quickly learn the folly of pacifism, as family members and friends were robbed, maimed, and killed.

Pacifism is a sophomoric fantasy on a par with anarchism. It is sad to see Caplan’s intelligence wasted on the promulgation and defense of such a fantasy.

Intelligence As a Dirty Word

Once upon a time I read a post, “The Nature of Intelligence”, at a now-defunct blog named MBTI Truths. Here is the entire text of the post:

A commonly held misconception within the MBTI community is that iNtuitives are smarter than Sensors. They are thought to have higher intelligence, but this belief is misguided. In an assessment of famous people with high IQs, the vast majority of them are iNtuitive. However, IQ tests measure only two types of intelligences: linguistic and logical-mathematical. In addition to these, there are six other types of intelligence: spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalistic. Sensors would probably outscore iNtuitives in several of these areas. Perhaps MBTI users should come to see iNtuitives, who make up 25 percent of the population, as having a unique type of intelligence instead of superior intelligence.

The use of “intelligence” with respect to traits other than brain-power is misguided. “Intelligence” has a clear and unambiguous meaning in everyday language; for example:

The capacity to acquire, understand, and use knowledge.

That is the way in which I use “intelligence” in “Intelligence, Personality, Politics, and Happiness”, and it is the way in which the word is commonly understood. The application of “intelligence” to other kinds of ability — musical, interpersonal, etc. — is a fairly recent development that smacks of anti-elitism. It is a way of saying that highly intelligent individuals (where “intelligence” carries its traditional meaning) are not necessarily superior in all respects. No kidding!

As to the merits of the post at MBTI Truths, it is mere hand-waving to say that “Sensors would probably outscore iNtuitives in several of these” other types of ability. And what is naturalistic intelligence, anyway?

Returning to a key point of my post, “Intelligence, Personality, Politics, and Happiness”, the claim that iNtuitives are generally smarter than Sensors is nothing but a claim about the relative capacity of iNtuitives to acquire and apply knowledge. It is quite correct to say that iNtuitives are not necessarily better than Sensors at, say, sports, music, glad-handing, and so on. It is also quite correct to say that iNtuitives generally are more intelligent than Sensors, in the standard meaning of “intelligence”.

Other so-called types of intelligence are not types of intelligence at all. They are simply other types of ability, each of which is (perhaps) valuable in its own way. But calling them types of intelligence is a transparent effort to denigrate the importance of real intelligence, which is an important determinant of significant life outcomes: learning, job performance, income, health, and criminality (in the negative).

It is a sign of the times that an important human trait is played down in an effort to inflate the egos of persons who are not well endowed with respect to that trait. The attempt to redefine or minimize intelligence is of a piece with the use of genteelisms, which Wilson Follett defines as

soft-spoken expressions that are either unnecessary or too regularly used. The modern world is much given to making up euphemisms that turn into genteelisms. Thus newspapers and politicians shirk speaking of the poor and the crippled. These persons become, respectively, the underprivileged (or disadvantaged) and the handicapped [and now -challenged and -abled: ED]. (Modern American Usage (1966), p. 169)

Finally:

Genteelisms may be of … the old-fashioned sort that will not name common things outright, such as the absurd plural bosoms for breasts, and phrases that try to conceal accidental associations of ideas, such as back of for behind. The advertiser’s genteelisms are too numerous to count. They range from the false comparative (e.g., the better hotels) to the soapy phrase (e.g., gracious living), which is supposed to poeticize and perfume the proffer of bodily comforts. (Ibid., p. 170)

And so it is that such traits as athleticism, musical virtuosity, and garrulousness become kinds of intelligence. Why? Because it is somehow inegalitarian — and therefore unmentionable — that some persons are smarter than others. It would be doubly inegalitarian — but likely true — that smarter persons also have genetic tendencies to greater health and physical attractiveness.

Life just isn’t fair, so get over it.

Intelligence, Personality, Politics, and Happiness


Web pages that link to this post usually consist of a discussion thread whose participants’ views of the post vary from “I told you so” to “that doesn’t square with me/my experience” or “MBTI is all wet because…”. Those who take the former position tend to be persons of above-average intelligence whose MBTI types correlate well with high intelligence. Those who take the latter two positions tend to be persons who are defensive about their personality types, which do not correlate well with high intelligence. Such persons should take a deep breath and remember that high intelligence (of the abstract-reasoning-book-learning kind measured by IQ tests) is widely distributed throughout the population. As I say below, “I am not claiming that a small subset of MBTI types accounts for all high-IQ persons, nor am I claiming that a small subset of MBTI types is populated entirely by high-IQ persons.” All I am saying is that the bits of evidence which I have compiled suggest that high intelligence is more likely — but far from exclusively — to be found among persons with certain MBTI types.

The correlations between intelligence, political leanings, and happiness are admittedly more tenuous. But they are plausible.

Leftists who proclaim themselves to be more intelligent than persons of the right do so, in my observation, as a way of reassuring themselves of the superiority of their views. They have no legitimate basis for claiming that the ranks of highly intelligent persons are dominated by the left. Leftist “intellectuals” in academia, journalism, the “arts”, and other traditional haunts of leftism are prominent because they are vocal. But they comprise a small minority of the population and should not be mistaken for typical leftists, who seem mainly to populate the ranks of the civil service, labor unions, public-school “educators”, and the unemployed. (It is worth noting that public-school teachers, on the whole, are notoriously dumber than most other college graduates.)

Again, I am talking about general relationships, to which there are many exceptions. If you happen to be an exception, don’t take this post personally. You’re probably an exceptional person.

IQ AND PERSONALITY

Some years ago I found statistics about the personality traits of high-IQ persons (those who are in the top 2 percent of the population).* The statistics pertain to a widely used personality test called the Myers-Briggs Type Indicator (MBTI), which I have taken twice. In the MBTI there are four pairs of complementary personality traits, called preferences: Extraverted/Introverted, Sensing/iNtuitive, Thinking/Feeling, and Judging/Perceiving. Thus, there are 16 possible personality types in the MBTI: ESTJ, ENTJ, ESFJ, ESFP, and so on. (For an introduction to MBTI, summaries of types, criticisms of MBTI, and links to other sources, see this article at Wikipedia. A straightforward description of the theory of MBTI and the personality traits can be found here. Detailed descriptions of the 16 types are given here.)

In summary, here is what the statistics indicate about the correlation between personality traits and IQ:

  • Other personality traits being the same, an iNtuitive person (one who grasps patterns and seeks possibilities) is 25 times more likely to have a high IQ than a Sensing person (one who focuses on sensory details and the here-and-now).
  • Again, other traits being the same, an Introverted person is 2.6 times more likely to have a high IQ than one who is Extraverted; a Thinking (logic-oriented) person is 4.5 times more likely to have a high IQ than a Feeling (people-oriented) person; and a Judging person (one who seeks closure) is 1.6 times as likely to have a high IQ as a Perceiving person (one who likes to keep his options open).
  • Moreover, if you encounter an INTJ, there is a 22% probability that his IQ places him in the top 2 percent of the population. (Disclosure: I am an INTJ.) Next are INTP, at 14%; ENTJ, 8%; ENTP, 5%; and INFJ, 5%. (The next highest type is the INFP at 3%.) The five types (INTJ, INTP, ENTJ, ENTP, and INFJ) account for 78% of the high-IQ population but only 15% of the total population.**
  • Four of the five most-intelligent types are NTs, as one would expect, given the probabilities cited above. Those same probabilities lead to the dominance of INTJs and INTPs, which account for 49% of the Mensa membership but only 5% of the general population.**
  • Persons with the S preference bring up the rear, when it comes to taking IQ tests.**

A person who read an earlier version of this post claims that “one would expect to see the whole spectrum of intelligences within each personality type.” Well, one does see just that, but high intelligence is skewed toward the five types listed above. I am not claiming that a small subset of MBTI types accounts for all high-IQ persons, nor am I claiming that a small subset of MBTI types is populated entirely by high-IQ persons.

I acknowledge reservations about MBTI, such as those discussed in the Wikipedia article. An inherent shortcoming of psychological tests (as opposed to intelligence tests) is that they rely on subjective responses (e.g., my favorite color might be black today and blue tomorrow). But I do not accept this criticism:

[S]ome researchers expected that scores would show a bimodal distribution with peaks near the ends of the scales, but found that scores on the individual subscales were actually distributed in a centrally peaked manner similar to a normal distribution. A cut-off exists at the center of the subscale such that a score on one side is classified as one type, and a score on the other side as the opposite type. This fails to support the concept of type: the norm is for people to lie near the middle of the subscale.

Why was it expected that scores on a subscale (E/I, S/N, T/F, J/P) would show a bimodal distribution? How often does one encounter a person who is at the extreme end of any subscale? Not often, I wager, except in places where such extremes are likely to be clustered (e.g., Extraverts in politics, Introverts in monasteries). The cut-off at the center of each subscale is arbitrary; it simply affords a shorthand characterization of a person’s dominant traits. But anyone who takes an MBTI (or equivalent instrument) is given his scores on each of the subscales, so that he knows the strength (or weakness) of his tendencies.

Regarding other points of criticism: It is possible, of course, that a person who is familiar with MBTI tends to see in others the characteristics of their known MBTI types (i.e., confirmation bias). But has that tendency been confirmed by rigorous testing? Such testing would examine the contrary case, that is, the ability of a person to predict the type of a person whom he knows well (e.g., a co-worker or relative).

The supposed vagueness of the descriptions of the 16 types arises from the complexity of human personality; but there are differences among the descriptions, just as there are differences among individuals. According to a footnote to an earlier version of the Wikipedia article about MBTI, half of the persons who take the MBTI are able to guess their types before taking it. Does that invalidate MBTI or does it point to a more likely phenomenon, namely, that introspection is a personality-related trait, one that is more common among Introverts than Extraverts? A good MBTI instrument cuts through self-deception and self-flattery by asking the same set of questions in many different ways, and in ways that do not make any particular answer seem like the “right” one.

IQ AND POLITICS

It is hard to find clear, concise analyses of the relationship between IQ and political leanings. I offer the following in evidence that very high-IQ individuals lean strongly toward libertarian positions.

The Triple Nine Society (TNS) limits its membership to persons with IQs in the top 0.1% of the population. In an undated survey (probably conducted in 2000, given the questions about the perceived intelligence of certain presidential candidates), members of TNS gave their views on several topics (in addition to speculating about the candidates’ intelligence): subsidies, taxation, civil regulation, business regulation, health care, regulation of genetic engineering, data privacy, death penalty, and use of military force.

The results speak for themselves. Those members of TNS who took the survey clearly have strong (if not unanimous) libertarian leanings.

THE RIGHT IS SMARTER THAN THE LEFT

I count libertarians as part of the right because libertarians’ anti-statist views are aligned with the views of the traditional (small-government) conservatives who are usually Republicans. Having said that, the results reported in “IQ and Politics” lead me to suspect that the right is smarter than the left, left-wing propaganda to the contrary notwithstanding. There is additional evidence for my view.

A site called Personality Page offers some data about personality type and political affiliation. The sample is not representative of the population as a whole; the average age of respondents is 25, and introverted personalities are over-represented (as you might expect for a test that is apparently self-administered through a website). On the other hand, the results are probably unbiased with respect to intelligence because the data about personality type were not collected as part of a study that attempts to relate political views and intelligence, and there is nothing on the site to indicate a left-wing bias. (Psychologists, who tend toward leftism, have a knack for making conservatives look bad, as discussed here, here, and here. If there is a strong association between political views and intelligence, it is found among so-called intellectuals, where the herd mentality reigns supreme.)

The data provided by Personality Page are based on the responses of 1,222 individuals who took a 60-question personality test that determined their MBTI types (see “IQ and Personality”). The test takers were asked to state their political preferences, given these choices: Democrat, Republican, middle of the road, liberal, conservative, libertarian, not political, and other. Political self-labelling is an exercise in subjectivity. Nevertheless, individuals who call themselves Democrats or liberals (the left) are almost certainly distinct, politically, from individuals who call themselves Republicans, conservatives, or libertarians (the right).

Now, to the money question: Given the distribution of personality types on the left and right, which distribution is more likely to produce members of Mensa? The answer: Those who self-identify as persons of the right are 15 percent more likely to qualify for membership in Mensa than those who self-identify as persons of the left. This result is plausible because it is consistent with the pronounced anti-government tendencies of the very-high-IQ members of the Triple Nine Society (see “IQ and Politics”).

REPUBLICANS (AND LIBERTARIANS) ARE HAPPIER THAN DEMOCRATS

That statement follows from research by the Pew Research Center (“Are We Happy Yet?”, February 13, 2006) and Gallup (“Republicans Report Much Better Health Than Others”, November 30, 2007).

Pew reports:

Some 45% of all Republicans report being very happy, compared with just 30% of Democrats and 29% of independents. This finding has also been around a long time; Republicans have been happier than Democrats every year since the General Social Survey began taking its measurements in 1972….

Of course, there’s a more obvious explanation for the Republicans’ happiness edge. Republicans tend to have more money than Democrats, and — as we’ve already discovered — people who have more money tend to be happier.

But even this explanation only goes so far. If one controls for household income, Republicans still hold a significant edge: that is, poor Republicans are happier than poor Democrats; middle-income Republicans are happier than middle-income Democrats, and rich Republicans are happier than rich Democrats.

Gallup adds this:

Republicans are significantly more likely to report excellent mental health than are independents or Democrats among those making less than $50,000 a year, and among those making at least $50,000 a year. Republicans are also more likely than independents and Democrats to report excellent mental health within all four categories of educational attainment.

There is a lot more in both sources. Read them for yourself.

Why would Republicans be happier than Democrats? Here’s my thought: Republicans tend to be conservative or libertarian (at least with respect to minimizing government’s role in economic affairs). Consider Thomas Sowell’s A Conflict of Visions:

He posits two opposing visions: the unconstrained vision (I would call it the idealistic vision) and the constrained vision (which I would call the realistic vision). As Sowell explains, at the end of chapter 2:

The dichotomy between constrained and unconstrained visions is based on whether or not inherent limitations of man are among the key elements included in each vision…. These different ways of conceiving man and the world lead not merely to different conclusions but to sharply divergent, often diametrically opposed, conclusions on issues ranging from justice to war.

Idealists (“liberals”) are bound to be less happy than realists (conservatives and libertarians) because idealists’ expectations about human accomplishments (aided by government) are higher than those of realists, and so idealists are doomed to disappointment.

All of this is consistent with findings reported by law professor James Lindgren:

[C]ompared to anti-redistributionists, strong redistributionists have about two to three times higher odds of reporting that in the prior seven days they were angry, mad at someone, outraged, sad, lonely, and had trouble shaking the blues. Similarly, anti-redistributionists had about two to four times higher odds of reporting being happy or at ease. Not only do redistributionists report more anger, but they report that their anger lasts longer. When asked about the last time they were angry, strong redistributionists were more than twice as likely as strong opponents of leveling to admit that they responded to their anger by plotting revenge. Last, both redistributionists and anti-capitalists expressed lower overall happiness, less happy marriages, and lower satisfaction with their financial situations and with their jobs or housework. [Northwestern Law and Economics Research Paper 06-29, “What Drives Views on Government Redistribution and Anti-Capitalism: Envy or a Desire for Social Dominance?”, March 15, 2011]

THE BOTTOM LINE

If you are very intelligent — with an IQ that puts you in the top 2 percent of the population — you are most likely to be an INTJ, INTP, ENTJ, ENTP, or INFJ, in that order. Your politics will lean heavily toward libertarianism or small-government conservatism. You probably vote Republican most of the time because, even if you are not a card-carrying Republican, you are a staunch anti-Democrat. And you are a happy person because your expectations are not constantly defeated by reality.


Related reading (listed chronologically):

Jeff Allen, “Conservatives: The Smartest (and Happiest) People in the Room”, Barbed Wire, February 20, 2014

James Thompson, “Election Special: Are Republicans Smarter than Democrats?”, The Unz Review, November 3, 2016

Dennis Prager, “Liberals and Conservatives are Unhappy for Different Reasons”, Townhall, February 13, 2018

John J. Ray, “Leftists Are Born Unhappy”, Dissecting Leftism, February 14, 2018


Related posts:

Intelligence and Intuition

Intelligence As a Dirty Word


Footnotes:

* I apologize for not having documented the source of the statistics that I cite here. I dimly recall finding them on or via the website of American Mensa, but I am not certain of that. And I can no longer find the source by searching the web. I did transcribe the statistics to a spreadsheet, which I still have. So, the numbers are real, even if their source is now lost to me.

** Estimates of the distribution of MBTI types in the U.S. population are given in two tables on page 4 of “Estimated Frequencies of the Types in the United States Population”, published by the Center for Applications of Psychological Type. One table gives estimates of the distribution of the population by preference (E, I, N, S, etc.). The other table gives estimates of the distribution of the population among all 16 MBTI types. The statistics for members of Mensa were broken down by preferences, not by types; therefore I had to use the values for preferences to estimate the frequencies of the 16 types among members of Mensa. For consistency, I used the distribution of the preferences among the U.S. population to estimate the frequencies of the 16 types among the population, rather than use the frequencies provided for each type. For example, the fraction of the population that is INTJ comes to 0.029 (2.9 percent) when the values for I (0.507), N (0.267), T (0.402), and J (0.541) are multiplied. But the detailed table has INTJs as 2.1 percent of the population. In sum, there are discrepancies between the computed and given values of the 16 types in the population. The most striking discrepancy is for the INFJ type. When estimated from the frequencies of the four preferences, INFJs are 4.4 percent of the population; the table of values for all 16 types gives the percentage of INFJs as 1.5 percent.
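To make the method concrete, here is a minimal sketch in Python of the preference-multiplication estimate just described. (The frequencies are the population values cited above; the same function, fed the Mensa preference frequencies, would yield the Mensa-side estimates.)

    # Estimated frequencies of the MBTI preferences in the U.S.
    # population, from the CAPT tables cited above.
    us_pref_freq = {"I": 0.507, "N": 0.267, "T": 0.402, "J": 0.541}

    def type_freq(mbti_type, pref_freq):
        # Estimate a type's frequency as the product of its four
        # preference frequencies (treating the preferences as independent).
        product = 1.0
        for letter in mbti_type:
            product *= pref_freq[letter]
        return product

    print(round(type_freq("INTJ", us_pref_freq), 3))  # 0.029, vs. 0.021 in
                                                      # the detailed CAPT table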

Using the distribution given for the 16 types leads to somewhat different results:

  • There is a 31-percent probability that an INTJ’s IQ places him in the top 2 percent of the population. Next are INFJ, at 14 percent; ENTJ, 13 percent; and INTP, 10 percent. (The next highest type is the ENTP at 4 percent.) The four types (INTJ, INFJ, ENTJ, and INTP) account for 72 percent of the high-IQ population but only 9 percent of the total population. The top five types (including ENTPs) account for 78 percent of the high-IQ population but only 12 percent of the total population.
  • Four of the five most-intelligent types are NTs, as one would expect, given the probabilities cited earlier. But, in terms of the likelihood of having a high IQ, this method moves INFJs into second place, a percentage point ahead of ENTJs.
  • In any event, the same five types dominate, and all five types have a preference for iNtuitive thinking.
  • As before, persons with the S preference generally lag their peers when it comes to IQ tests.

Killing the Keynesian Multiplier

There are a few economic concepts that are widely cited (if not understood) by non-economists. Certainly, the “law” of supply and demand is one of them. The Keynesian (fiscal) multiplier is another; it is

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier.

MULTIPLIER MATH

To show why the math is phony, I’ll start with a derivation of the multiplier. The derivation begins with the accounting identity Y = C + I + G, which means that total output (Y) = consumption (C) + investment (I) + government spending (G). I could use a more complex identity that involves taxes, exports, and imports. But no matter; the bottom line remains the same, so I’ll keep it simple and use Y = C + I + G.

Keep in mind that the aggregates that I’m writing about here — Y, C, I, G, and (later) S — are supposed to represent real quantities of goods and services, not mere money. Keep in mind, also, that Y stands for gross domestic product (GDP); there is no real income unless there is output, that is, product.

Now for the derivation:

[Image: derivation of the investment/government-spending multiplier]
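The image is lost, but it evidently showed the standard textbook derivation, which can be reconstructed as follows (assuming the usual consumption function C = a + bY, where a is autonomous consumption and b is the marginal propensity to consume):

\[
\begin{aligned}
Y &= C + I + G \\
C &= a + bY \\
Y &= a + bY + I + G \\
Y(1 - b) &= a + I + G \\
\Delta Y &= \frac{\Delta I + \Delta G}{1 - b} = k(\Delta I + \Delta G), \qquad k = \frac{1}{1 - b}
\end{aligned}
\]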

So far, so good. Now, let’s say that b = 0.8. This means that income-earners, on average, will spend 80 percent of their additional income on consumption goods (C), while holding back (saving, S) 20 percent of their additional income. With b = 0.8, k = 1/(1 – 0.8) = 1/0.2 = 5. That is, every $1 of additional spending — let us say additional government spending (∆G) rather than investment spending (∆I) — will yield ∆Y = $5. In short, ∆Y = k(∆G), as a theoretical maximum. (Even if the multiplier were real, there are many things that would cause it to fall short of its theoretical maximum; see this, for example.)

How is it supposed to work? The initial stimulus (∆G) creates income (don’t ask how), a fraction of which (b) goes to C. That spending creates new income, a fraction of which goes to C. And so on. Thus the first round = ∆G, the second round = b(∆G), the third round = b²(∆G), and so on. The sum of the “rounds” asymptotically approaches k(∆G). (What happens to S, the portion of income that isn’t spent? That’s part of the complicated phony story that I’ll examine in a future post.)
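A few lines of Python make the convergence concrete (a sketch, using the values assumed above):

    # Accumulate the "rounds" of spending: dG, b*dG, b^2*dG, ...
    # The running total approaches k*dG = dG/(1 - b).
    b = 0.8    # marginal propensity to consume
    dG = 1.0   # initial increment of government spending, in $ trillions

    total, this_round = 0.0, dG
    for _ in range(100):
        total += this_round
        this_round *= b
    print(round(total, 4))         # ~5.0 after 100 rounds
    print(round(dG / (1 - b), 1))  # 5.0, the theoretical limit k*dG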

Note well, however, that the resulting ∆Y isn’t properly an increase in Y, which is an annual rate of output; rather, it’s the cumulative increase in total output over an indefinite number and duration of ever-smaller “rounds” of consumption spending.

The cumulative effect of a sustained increase in government spending might, after several years, yield a new Y — call it Y’ = Y + ∆Y. But it would do so only if ∆G persisted for several years. To put it another way, ∆Y persists only for as long as the effects of ∆G persist. The multiplier effect disappears after the “rounds” of spending that follow ∆G have played out.

The multiplier effect is therefore (at most) temporary; it vanishes after the withdrawal of the “stimulus” (∆G). The idea is that ∆Y should be temporary because a downturn will be followed by a recovery — weak or strong, later or sooner.

An aside is in order here: Proponents of big government like to trumpet the supposedly stimulating effects of G on the economy when they propose programs that would lead to permanent increases in G, holding other things constant. And other things (other government programs) do remain constant, at the least, because they have powerful patrons and constituents, and are harder to kill than Hydra. Even if the proponents of big government were aware of the economically debilitating effects of G and the things that accompany it (e.g., regulations), most of them would simply defend their favorite programs all the more fiercely.

WHY MULTIPLIER MATH IS PHONY MATH

Now for my exposé of the phony math. I begin with Steven Landsburg, who borrows from the late Murray Rothbard:

. . . We start with an accounting identity, which nobody can deny:

Y = C + I + G

. . . Since all output ends up somewhere, and since households, firms and government exhaust the possibilities, this equation must be true.

Next, we notice that people tend to spend, oh, say about 80 percent of their incomes. What they spend is equal to the value of what ends up in their households, which we’ve already called C. So we have

C = .8Y

Now we use a little algebra to combine our two equations and quickly derive a new equation:

Y = 5(I+G)

That 5 is the famous Keynesian multiplier. In this case, it tells you that if you increase government spending by one dollar, then economy-wide output (and hence economy-wide income) will increase by a whopping five dollars. What a deal!

. . . [I]t was Murray Rothbard who observed that the really neat thing about this argument is that you can do exactly the same thing with any accounting identity. Let’s start with this one:

Y = L + E

Here Y is economy-wide income, L is Landsburg’s income, and E is everyone else’s income. No disputing that one.

Next we observe that everyone else’s share of the income tends to be about 99.999999% of the total. In symbols, we have:

E = .99999999 Y

Combine these two equations, do your algebra, and voila:

Y = 100,000,000 L

That 100,000,000 there is the soon-to-be-famous “Landsburg multiplier”. Our equation proves that if you send Landsburg a dollar, you’ll generate $100,000,000 worth of income for everyone else.

The policy implications are unmistakable. It’s just Eco 101!! [“The Landsburg Multiplier: How to Make Everyone Rich”, The Big Questions blog, June 25, 2013]

Landsburg attributes the nonsensical result to the assumption that

equations describing behavior would remain valid after a policy change. Lucas made the simple but pointed observation that this assumption is almost never justified.

. . . None of this means that you can’t write down [a] sensible Keynesian model with a multiplier; it does mean that the Eco 101 version of the Keynesian cross is not an example of such. This in turn calls into question the wisdom of the occasional pundit [Paul Krugman] who repeatedly admonishes us to be guided in our policy choices by the lessons of Eco 101. (“Multiple Comments”, op. cit., June 26, 2013)

It’s worse than that, as Landsburg almost acknowledges when he observes (correctly) that Y = C + I + G is an accounting identity. That is to say, it isn’t a functional representation — a model — of the dynamics of the economy. Assigning a value to b (the marginal propensity to consume) — even if it’s an empirical value — doesn’t alter the fact that the derivation is nothing more than the manipulation of a non-functional relationship, that is, an accounting identity.

Consider, for example, the equation for converting temperature Celsius (C) to temperature Fahrenheit (F): F = 32 + 1.8C. It follows that an increase of 10 degrees C implies an increase of 18 degrees F. This could be expressed as ∆F/∆C = k*, where k* represents the “Celsius multiplier”. There is no mathematical difference between the derivation of the investment/government-spending multiplier (k) and the derivation of the Celsius multiplier (k*). And yet we know that the Celsius multiplier is nothing more than a tautology; it tells us nothing about how the temperature rises by 10 degrees C or 18 degrees F. It simply tells us that when the temperature rises by 10 degrees C, the equivalent rise in temperature F is 18 degrees. The rise of 10 degrees C doesn’t cause the rise of 18 degrees F.

Similarly, the Keynesian investment/government-spending multiplier simply tells us that if ∆Y = $5 trillion, and if b = 0.8, then it is a matter of mathematical necessity that ∆C = $4 trillion and ∆I + ∆G = $1 trillion. In other words, a rise in I + G of $1 trillion doesn’t cause a rise in Y of $5 trillion; rather, Y must rise by $5 trillion for C to rise by $4 trillion and I + G to rise by $1 trillion. If there’s a causal relationship between ∆G and ∆Y, the multiplier doesn’t portray it.

PHONY MATH DOESN’T EVEN ADD UP

Recall the story that’s supposed to explain how the multiplier works: The initial stimulus (∆G) creates income, a fraction of which (b) goes to C. That spending creates new income, a fraction of which goes to C. And so on. Thus the first round = ∆G, the second round = b(∆G), the third round = b²(∆G), and so on. The sum of the “rounds” asymptotically approaches k(∆G). So, if b = 0.8, k = 5, and ∆G = $1 trillion, the resulting cumulative ∆Y = $5 trillion (in the limit). And it’s all in addition to the output that would have been generated in the absence of ∆G, as long as many conditions are met. Chief among them is the condition that the additional output in each round is generated by resources that had been unemployed.

In addition to the fact that the math behind the multiplier is phony, as explained above, it also yields contradictory results. If one can derive an investment/government-spending multiplier, one can also derive a “consumption multiplier”:

[Image: derivation of the consumption multiplier]
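That image is lost as well; a reconstruction, assuming the same consumption function as before:

\[
\begin{aligned}
C &= a + bY \\
\Delta C &= b\,\Delta Y \\
\Delta Y &= \frac{1}{b}\,\Delta C = k_c\,\Delta C, \qquad k_c = \frac{1}{b}
\end{aligned}
\]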

Taking b = 0.8, as before, the resulting value of kc is 1.25. Suppose the initial round of spending is generated by C instead of G. (I won’t bother with a story to explain it; you can easily imagine one involving underemployed factories and unemployed persons.) If ∆C = $1 trillion, shouldn’t cumulative ∆Y = $5 trillion? After all, there’s no essential difference between spending $1 trillion on a government project and $1 trillion on factory output, as long as both bursts of spending result in the employment of underemployed and unemployed resources (among other things).

But with kc = 1.25, the initial $1 trillion burst of spending (in theory) results in additional output of only $1.25 trillion. Where’s the other $3.75 trillion? Nowhere. The $5 trillion is phony. What about the $1.25 trillion? It’s phony, too. The “consumption multiplier” of 1.25 is simply the inverse of b, where b = 0.8. In other words, Y must rise by $1.25 trillion if C is to rise by $1 trillion. More phony math.

CAN AN INCREASE IN G HELP IN THE SHORT RUN?

Can an exogenous increase in G really yield a short-term, temporary increase in GDP? Perhaps, but there’s many a slip between cup and lip. The following example goes beyond the bare theory of the Keynesian multiplier to address several practical and theoretical shortcomings (some of which are discussed here and here):

  1. Annualized real GDP (Y) drops from $16.5 trillion a year to $14 trillion a year because of the unemployment of resources. (How that happens is a different subject.)
  2. Government spending (G) is temporarily and quickly increased by an annual rate of $500 billion; that is, ∆G = $0.5 trillion. The idea is to restore Y to $16.5 trillion, given a multiplier of 5. (In standard multiplier math: ∆Y = (k)(∆G), where k = 1/(1 – MPC); k = 5, where MPC = 0.8.)
  3. The ∆G is financed in a way that doesn’t reduce private-sector spending. (This is almost impossible, given Ricardian equivalence — the tendency of private actors to take into account the long-term, crowding-out effects of government spending as they make their own spending decisions. The closest approximation to neutrality can be attained by financing additional G through money creation, rather than additional taxes or borrowing that crowds out the financing of private-sector consumption and investment spending.)
  4. To have the greatest leverage, ∆G must be directed so that it employs only those resources that are idle, which then acquire purchasing power that they didn’t have before. (This, too, is almost impossible, given the clumsiness of government.)
  5. A fraction of the new purchasing power flows, through consumption spending (C), to the employment of other idle resources. That fraction is called the marginal propensity to consume (MPC), which is the rate at which the owners of idle resources spend additional income on so-called consumption goods. (As many economists have pointed out, the effect could also occur as a result of investment spending. A dollar spent is a dollar spent, and investment spending has the advantage of directly enabling economic growth, unlike consumption spending.)
  6. A remainder goes to saving (S) and is therefore available for investment (I) in future production capacity. But S and I are ignored in the multiplier equation. One story goes like this: S doesn’t elicit I because savers hoard cash and investment is discouraged by the bleak economic outlook. Here is a more likely story: The multiplier would be infinite (and therefore embarrassingly inexplicable) if S generated an equivalent amount of I, because the marginal propensity to spend (MPS) would be equal to 1, and the multiplier equation would look like this: k = 1/(1 – MPS) = ∞, where MPS = 1.
  7. In any event, the initial increment of C (∆C) brings forth a new “round” of production, which yields another increment of C, and so on, ad infinitum. If MPC = 0.8, then assuming away “leakage” to taxes and imports, the multiplier = k = 1/(1 – MPC), or k = 5 in this example. (The multiplier rises with MPC and reaches infinity if MPC = 1. This suggests that a very high MPC is economically beneficial, even though a very high MPC implies a very low rate of saving and therefore a very low rate of growth-producing investment.)
  8. Given k = 5, ∆G = $0.5T would cause an eventual increase in real output of $2.5 trillion (assuming no “leakage” or offsetting reductions in private consumption and investment); that is, ∆Y = (k)(∆G) = $2.5 trillion. However, because G and Y usually refer to annual rates, this result is mathematically incoherent; ∆G = $0.5 trillion does not restore Y to $16.5 trillion.
  9. In any event, the increase in Y isn’t permanent; the multiplier effect disappears after the “rounds” resulting from ∆G have played out. If the theoretical multiplier is 5, and if transactional velocity is 4 (i.e., 4 “rounds” of spending in a year), more than half of the multiplier effect would be felt within a year from each injection of spending, and more than four-fifths within two years of each injection (see the sketch after this list). It seems unlikely, however, that the multiplier effect would be felt for much longer, because of changing conditions (e.g., an exogenous boost in private investment, private reemployment of resources, discouraged workers leaving the labor force, shifts in expectations about inflation and returns on investment).
  10. All of this ignores the fact that the likely cause of the drop in Y is not insufficient “aggregate demand”, but a “credit crunch” (Michael D. Bordo and Joseph G. Haubrich in “Credit Crises, Money, and Contractions: A Historical View”, Federal Reserve Bank of Cleveland, Working Paper 09-08, September 2009). “Aggregate demand” doesn’t exist, except as an after-the-fact measurement of the money value of the goods and services that make up Y. “Aggregate demand”, in other words, is merely the sum of millions of individual transactions, the rate and total money value of which decline for specific reasons, “credit crunch” being chief among them. Given that, an exogenous increase in G is likely to yield a real increase in Y only if the increase in G leads to an increase in the money supply (as it is bound to do when the Fed, in effect, prints money to finance it). But because of cash hoarding and a bleak investment outlook, the increase in the money supply is unlikely to generate much additional economic activity.
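Here is the sketch promised in item 9 (Python; it assumes that the initial injection counts as the first of each year’s four rounds):

    # How quickly the multiplier effect plays out, given b = 0.8 (so that
    # k = 5) and a transactional velocity of 4 rounds of spending per year.
    b = 0.8
    k = 1 / (1 - b)        # theoretical multiplier = 5
    rounds_per_year = 4

    cumulative = 0.0
    for year in range(1, 4):
        for i in range((year - 1) * rounds_per_year, year * rounds_per_year):
            cumulative += b ** i
        # Share of the total effect, k*dG, felt by the end of this year.
        print(year, round(cumulative / k, 2))
    # Output: year 1, 0.59; year 2, 0.83; year 3, 0.93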

So much for that.

THE THEORETICAL MAXIMUM

A somewhat more realistic version of multiplier math — as opposed to the version addressed earlier — yields a maximum value of k = 1:

[Image: more rigorous derivation of the Keynesian multiplier]
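Again, the image is lost. A reconstruction consistent with the explanation below, assuming P = Y – G (the step numbers match the reference to “step 3”):

\[
\begin{aligned}
&\text{(1)}\quad Y = C + I + G \\
&\text{(2)}\quad P = Y - G \\
&\text{(3)}\quad C = a + bP = a + b(Y - G) \\
&\text{(4)}\quad Y(1 - b) = a + I + (1 - b)G \\
&\text{(5)}\quad \Delta Y = \frac{\Delta I}{1 - b} + \Delta G, \qquad k = \frac{\Delta Y}{\Delta G} = 1
\end{aligned}
\]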

How did I do that? In step 3, I made C a function of P (private-sector GDP) instead of Y (usually taken as the independent variable). Why? C is more closely linked to P than to Y, as an analysis of GDP statistics will prove. (Go here, download the statistics for the post-World War II era from tables 1.1.5 and 3.1, and see for yourself.)

THE TRUE MULTIPLIER

In fact, a sustained increase in government spending will have a negative effect on real output — a multiplier of less than 1, in other words.

Robert J. Barro of Harvard University opens an article in The Wall Street Journal with the statement that “economists have not come up with explanations … for multipliers above one”. Barro continues:

A much more plausible starting point is a multiplier of zero. In this case, the GDP is given, and a rise in government purchases requires an equal fall in the total of other parts of GDP — consumption, investment and net exports….

What do the data show about multipliers? Because it is not easy to separate movements in government purchases from overall business fluctuations, the best evidence comes from large changes in military purchases that are driven by shifts in war and peace. A particularly good experiment is the massive expansion of U.S. defense expenditures during World War II. The usual Keynesian view is that the World War II fiscal expansion provided the stimulus that finally got us out of the Great Depression. Thus, I think that most macroeconomists would regard this case as a fair one for seeing whether a large multiplier ever exists.

I have estimated that World War II raised U.S. defense expenditures by $540 billion (1996 dollars) per year at the peak in 1943-44, amounting to 44% of real GDP. I also estimated that the war raised real GDP by $430 billion per year in 1943-44. Thus, the multiplier was 0.8 (430/540). The other way to put this is that the war lowered components of GDP aside from military purchases. The main declines were in private investment, nonmilitary parts of government purchases, and net exports — personal consumer expenditure changed little. Wartime production siphoned off resources from other economic uses — there was a dampener, rather than a multiplier….

There are reasons to believe that the war-based multiplier of 0.8 substantially overstates the multiplier that applies to peacetime government purchases. For one thing, people would expect the added wartime outlays to be partly temporary (so that consumer demand would not fall a lot). Second, the use of the military draft in wartime has a direct, coercive effect on total employment. Finally, the U.S. economy was already growing rapidly after 1933 (aside from the 1938 recession), and it is probably unfair to ascribe all of the rapid GDP growth from 1941 to 1945 to the added military outlays. [“Government Spending Is No Free Lunch”, The Wall Street Journal, January 22, 2009]

This is from a paper by Valerie A. Ramey:

… [I]t appears that a rise in government spending does not stimulate private spending; most estimates suggest that it significantly lowers private spending. These results imply that the government spending multiplier is below unity. Adjusting the implied multiplier for increases in tax rates has only a small effect. The results imply a multiplier on total GDP of around 0.5. [“Government Spending and Private Activity”, National Bureau of Economic Research, January 2012]

There is a key component of government spending that usually isn’t captured in estimates of the multiplier: transfer payments, which are mainly “social benefits” (e.g., Social Security, Medicare, and Medicaid). In fact, actual government spending in the U.S., including transfer payments, is about double the nominal amount that is represented in G, the standard measure of government spending (the actual cost of government operations, buildings, equipment, etc.). But transfer payments — like other government spending — are financed by directing resources from persons who are directly productive (active workers) and whose investments are directly productive (innovators, entrepreneurs, stockholders, etc.) to persons who (for the most part) are economically unproductive and counterproductive. It follows that real economic output must be adversely affected by transfer payments.

Other factors are also important to economic growth, namely, private investment in business assets, the rate at which regulations are issued, and inflation. The combined effects of these factors and aggregate government spending have been estimated. I borrow from that estimate, with a slight, immaterial change in nomenclature:

gr = 0.0275 – 0.347F + 0.0769A – 0.000327R – 0.135P

Where,

gr = real rate of GDP growth in a 10-year span (annualized)

F = fraction of GDP spent by governments at all levels during the preceding 10 years [including transfer payments]

A = the constant-dollar value of private nonresidential assets (business assets) as a fraction of GDP, averaged over the preceding 10 years

R = average number of Federal Register pages, in thousands, for the preceding 10-year period

P = growth in the CPI-U during the preceding 10 years (annualized).

The r-squared of the equation is 0.73 and the F-value is 2.00E-12. The p-values of the intercept and coefficients are 0.099, 1.75E-07, 1.96E-08, 8.24E-05, and 0.0096. The standard error of the estimate is 0.0051, that is, about half a percentage point.

Assume, for the sake of argument, that F rises while the other independent variables remain unchanged. A rise in F from 0.24 to 0.33 (the actual change from 1947 to 2007) would reduce the real rate of economic growth by 0.031, that is, by 3.1 percentage points. The real rate of growth from 1947 to 1957 was 4 percent. Other things being the same, the rate of growth would have dropped to 0.9 percent in the period 2008-2017. It actually dropped to 1.4 percent, which is within the standard error of the equation. And the discrepancy could be the result of changes in the other variables — a disproportionate increase in business assets (A), for example.
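In Python, the arithmetic looks like this (a sketch; only the F term of the equation moves, because the other variables are held constant):

    # Effect on the 10-year annualized growth rate of raising F from 0.24
    # to 0.33 (the actual change from 1947 to 2007), other things the same.
    coef_F = -0.347
    delta_gr = coef_F * (0.33 - 0.24)
    print(round(delta_gr, 4))         # -0.0312, about -3.1 percentage points
    print(round(0.04 + delta_gr, 3))  # 0.009: the 4-percent baseline falls
                                      # to about 0.9 percent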

Given that gr = -0.347F, other things being the same, then

Y1 = Y0(c – 0.347F)

Where,

Y1 = real GDP in the period after a change in F, other things being the same

Y0 = real GDP in the period during which F changes

c = a constant, representing the sum of 1 + 0.0275 + the terms obtained from fixed values of A, R, and P

The true F multiplier, kT, is therefore negative:

kT = ∆Y/∆F = -0.347Y0

For example, with Y0 = 1000, c = 1, and other things being the same:

Y1 = 1000[1 – (0.347)(0)] = 1000, when F = 0

Y1 = 1000[1 – (0.347)(1)] = 653, when F = 1

Keeping in mind that the equation is based on an analysis of successive 10-year periods, the true F multiplier should be thought of as representing the effect of a change in the average value of F in a 10-year period on the average value of Y in a subsequent 10-year period.

This is not to minimize the deleterious effect of F (and other government-related factors) on Y. If the 1947-1957 rate of growth (4 percent) had been sustained through 2017, Y would have risen from $1.9 trillion in 1957 to $20 trillion in 2017. But because F, R, and P rose markedly over the years, the real rate of growth dropped sharply and Y reached only $17.1 trillion in 2017. That’s a difference of almost $3 trillion in a single year.

Such losses, summed over several decades, represent millions of jobs that weren’t created, significantly lower standards of living, greater burdens on the workers who support retirees and subsidize their medical care, and the loss of liberty that inevitably results when citizens are subjugated to tax collectors and regulators.
