Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Property owners have long suffered under the Supreme Court’s erratic rulings. It got worse today. In Murr v. Wisconsin, the Court ruled against the owners, 5-3, with Justice Kennedy writing for the majority, Chief Justice Roberts writing a dissent joined by Justices Thomas and Alito, Thomas writing a separate dissent, and Justice Gorsuch taking no part. The problem isn’t simply with the majority’s holding and opinion; it’s with the dissent as well. Only Thomas points in the right direction.

This was a regulatory takings case arising under the Fifth Amendment’s Takings Clause, which prohibits government from taking private property for public use without just compensation. In separate conveyances in 1994 and 1995, the Murrs, four siblings, inherited two contiguous lots on the St. Croix River that their parents had purchased in 1960 and 1963. The parents had built an ancestral home on the first lot. They bought the second for investment purposes.

The trouble began in 2004 when the Murrs sought to sell the second lot, valued at $410,000, and use the proceeds to upgrade the ancestral home. But they were blocked by a 1975 local zoning ordinance that treated the two lots as one, even though they had long been deeded and taxed separately. Under the ordinance they had to sell the lots together or not at all. Out $410,000, the Murrs sued, claiming that the ordinance had deprived them of their right to sell their property.

Here it gets complicated. In a 1992 decision, Lucas v. South Carolina Coastal Council, a 5-4 Court held that David Lucas was entitled to compensation after an ordinance prohibiting him from building on his property effectively wiped out all of its value. The problem with this “wipeout” rule, of course, is that most regulations leave at least some value in the property. When Justice Stevens called the rule “arbitrary” since “the landowner whose property is diminished in value 95% recovers nothing,” Justice Scalia, writing for the Court, responded tersely, “Takings law is full of these ‘all or nothing’ situations.”

In so writing, Scalia was citing a 1978 decision, Penn Central v. New York, which gave us a balancing test that nobody understands, least of all Justice Brennan, who crafted it. There the Court held that its test must be applied to “the parcel as a whole,” not to some portion of it. Combined with Lucas, that makes all the difference in the world for the Murrs. If their lots are treated separately, as they have always been except for this ordinance, virtually all value in the second has been wiped out and the Murrs, under Lucas, are entitled to compensation for the taking. But with the two lots combined as one, value remains, so the state can escape paying the Murrs any compensation. Thus, the question before the Court was whether the state could do that simply by treating the two lots as one.

Thomas joined the dissent because, as he wrote, “it correctly applies this Court’s regulatory takings precedents, which no party has asked us to reconsider.” But he went on to say that “it would be desirable for us to take a fresh look at our regulatory takings jurisprudence, to see whether it can be grounded in the original public meaning of the Takings Clause of the Fifth Amendment or the Privileges or Immunities Clause of the Fourteenth Amendment.” Why take a fresh look? Because the Court “has never purported to ground [its] precedents in the Constitution as it was originally understood.”

Justice Kennedy begins his opinion for the Court with Justice Holmes’s famous 1922 remark, that if a regulation goes “too far” it constitutes a taking—and the opinion goes downhill from there, a mass of confusions. Roberts does a tolerable job of dissecting it, concluding that “today’s decision knocks the definition of ‘private property’ loose from its foundation on stable state law rules and throws it into the maelstrom of multiple factors” for determining when a taking occurs. Correct, but Roberts himself does little better. In fact, he writes that the Court’s holding “that the regulation does not constitute a taking that requires compensation … does not trouble him.” (emphasis added) It’s only the Court’s reasoning that’s troubling (and rightly so). Roberts would have vacated the judgment below and remanded for the court to identify the relevant property using ordinary principles of Wisconsin property law.

But there, precisely, is the problem. State law defined the property. There were two lots, deeded and taxed separately, and that continued to the present. But then state law redefined the property. It was the later local ordinance that combined the lots, effectively taking one of the most basic rights an owner has: the right to dispose of (sell) that distinct second lot, bought for investment purposes. That was when the taking occurred, even though it wasn’t realized until the Murrs tried to sell the lot. The rest of the analysis coming from Penn Central’s multi-factor balancing test—like whether the Murrs retained value in “the parcel as a whole”—is just so much distraction from the core issue. And even if that were the question, it takes us back to Lucas’s error. Roberts invokes the metaphor that treats property like a “bundle of sticks,” signifying all the rights that go with property. Lucas held, wrongly, that compensation is due only after the last stick is taken—the wipeout rule. No, a taking occurs with the first stick taken. The stick the Murrs lost was the right to sell that lot. It’s no more complicated than that—unless the decision turns on a long line of mistaken precedents. One can only hope that Justice Thomas will one day have an opportunity to write the opinion that sets this sorry record straight.

The Supreme Court today came down with opinions in two cases in which Cato filed a brief. First, in Murr v. Wisconsin, it unfortunately ruled against property owners in an important regulatory-takings case. Then, in Lee v. United States, it correctly found that a criminal defendant who had virtually no chance to win at trial—absent jury nullification, which was our focus—was still prejudiced by (and entitled to a new trial due to) his counsel’s wrong advice that he wouldn’t be deported if he pled guilty.

Murr: Whenever you see a court invoke a “multifactor balancing test,” you know it’s just making stuff up. Alas, that’s what happened in Murr v. Wisconsin, where a family was deprived of significant use of its property—not to mention economic benefits—because of an unfortunate operation of local law. The Supreme Court compounded that harm by essentially deferring to state determinations of property owners’ rights, and did so by applying that “multifactor” standard that allows it to reach whatever result it wants. This ruling shows that in the grander scheme, as Justice Thomas noted in his dissent, the Supreme Court needs to reevaluate its regulatory-takings jurisprudence altogether. (For more, see Cato’s amicus brief.)

Lee: The Court was correct to give even seemingly hopeless criminal defendants the right to adequate legal representation. Jae Lee only took a plea deal because his lawyer repeatedly assured him that he wouldn’t face deportation. The fact that going to trial, where he had no legal leg to stand on, would’ve almost certainly resulted in a longer prison sentence is immaterial. It’s clear that for Lee, who was brought to the United States from South Korea as a child, the risk of being forced to leave the only country he knows was much more important than a longer prison sentence. Lurking under this case was the controversial doctrine-that-must-not-be-named of jury nullification, which was essentially Lee’s only chance for acquittal. (For more, see Cato’s amicus brief.)

Stay tuned Monday for the Supreme Court’s final opinions of the term (especially Trinity Lutheran), as well as decisions on whether to take up the travel-ban case, Masterpiece Cakeshop (vendors for same-sex weddings), and Peruta (Second Amendment right to carry). And maybe, just maybe, Justice Anthony Kennedy will announce his retirement—though if I had to bet, I’d say he sticks around another year.

Yesterday, I posted “Five Questions I Will Use to Evaluate the Phantom Senate Health Care Bill.” The phantom bill took corporeal form today when Senate Republicans released the text of the “Better Care Reconciliation Act.”

So how does the Senate bill fare with regard to my five questions?

1. Would it repeal the parts of ObamaCare—specifically, community rating—that preclude secure access to health care by causing coverage to become worse for the sick and the Exchanges to collapse?

No. The Senate bill would preserve ObamaCare’s community-rating price controls. To be fair, it would modify them. ObamaCare forbids premiums for 64-year-olds to be more than three times premiums for 18-year-olds. The Senate bill would allow premiums for the older cohort to be up to five times those for the younger cohort. But these “age rating” restrictions are the least binding part of ObamaCare’s community-rating price controls. Those price controls would therefore continue to wreak havoc in the individual market. The Senate bill would also preserve nearly all of ObamaCare’s other insurance regulations. 
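The arithmetic of those “age rating” caps can be sketched in a few lines. This is a stylized illustration only—the premium figure is invented, not drawn from either bill:

```python
# Hypothetical illustration of "age rating" premium caps.
# ObamaCare caps the 64-year-old's premium at 3x the 18-year-old's premium;
# the Senate bill would loosen that cap to 5x. The $200 premium is made up.

def max_older_premium(young_premium: float, ratio_cap: float) -> float:
    """Highest premium an insurer may charge the older cohort under a given cap."""
    return young_premium * ratio_cap

young = 200.0  # hypothetical monthly premium for an 18-year-old
print(max_older_premium(young, 3.0))  # cap under ObamaCare: 600.0
print(max_older_premium(young, 5.0))  # cap under the Senate bill: 1000.0
```

The looser cap lets insurers charge older customers more, but it leaves the underlying community-rating structure in place.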

2. Would it make health care more affordable, or just throw subsidies at unaffordable care?

The Senate bill, like ObamaCare, would simply throw taxpayer dollars at unaffordable care, rather than make health care more affordable.

Making health care more affordable means driving down health care prices. Recent experiments have shown that cost-conscious consumers do indeed push providers to cut prices. (See the graph below.)

If you want to see that level of price reduction, you need something along the lines of “large” health savings accounts.

The Senate bill would make only minor adjustments to tax-free HSAs that would not deliver lower prices. 

3. Would it actually sunset the Medicaid expansion, or keep the expansion alive long enough for a future Democratic Congress to rescue it?

The bill would keep it alive so ObamaCare supporters can rescind the repeal.

To be fair, the Senate bill would forbid the 19 states that haven’t implemented ObamaCare’s Medicaid expansion from doing so. As explained below, however, the bill would expand a different entitlement–ObamaCare’s Exchange subsidies–to that population. 

The bill would also repeal the Medicaid expansion in 2024. Yet three new Congresses will be seated between passage of the bill and the date the expansion would end, and we may even have a new president by then. It is almost guaranteed that at least one of those Congresses (if not all three) will be more supportive of the Medicaid expansion than the current Congress, and such a Congress could rescind the repeal before it ever takes effect.

Senate Republicans rigged this Medicaid-expansion repeal never to take effect.

4. Tax cuts are almost irrelevant—how much of ObamaCare’s spending would it repeal?

This one is hard to answer without an official score from the Congressional Budget Office–or even with one. Senate Republicans played budget games that hide how much of ObamaCare’s spending they are keeping. 

Senate Republicans required the CBO to compare the cost of the bill to projections of Exchange enrollment and spending that everyone agrees are inflated. So the forthcoming CBO score will make it look like the Senate bill increases the uninsured more than it actually does. Put differently, the CBO score will count some people as losing coverage under the Senate bill even though they weren’t going to have coverage anyway. By the same token, this gimmick will make the Senate bill look like it cuts ObamaCare spending more than it does. It requires the CBO to score the Senate bill as eliminating ObamaCare outlays that were never going to happen. The sneaky part is that this budget gimmick then allows Senate Republicans to apply those phantom cuts either to new spending or deficit reduction.
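The baseline gimmick described above can be made concrete with a back-of-the-envelope sketch. All figures here are invented for illustration; none come from an actual CBO score:

```python
# Stylized illustration of scoring against an inflated baseline (all $ billions, made up).
official_baseline = 200   # inflated projection of ObamaCare Exchange spending
realistic_baseline = 140  # what spending was actually on track to be
bill_spending = 120       # what the bill would actually spend

scored_cuts = official_baseline - bill_spending   # what the CBO score shows: 80
real_cuts = realistic_baseline - bill_spending    # actual reduction in outlays: 20
phantom_cuts = scored_cuts - real_cuts            # "savings" that were never going to occur: 60

print(scored_cuts, real_cuts, phantom_cuts)  # 80 20 60
```

The phantom 60 is what, on this sketch, the bill’s authors can then redirect to new spending or claim as deficit reduction.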

One way the Senate bill applies those phantom cuts to new spending is by expanding ObamaCare to an additional 2.6 million Americans. Thirty-one states and D.C. have implemented ObamaCare’s Medicaid expansion. The Kaiser Family Foundation estimates that in the 19 states that have not expanded Medicaid, there are 2.6 million able-bodied adults who earn too much to qualify for Medicaid but less than 100 percent of the federal poverty level, and thus not enough to receive a “premium assistance tax credit” toward the purchase of an Exchange plan. (We call them tax credits, but they are mostly outlays.) In 2020, the Senate bill would open eligibility for the tax credits to everyone below 100 percent of the federal poverty level in states that do not implement the expansion. This would expand ObamaCare to another 2.6 million people. In effect, it is Medicaid expansion by another means–and it effectively snubs GOP officials in the 19 states that did the right thing (reduced federal deficits, etc.) by not expanding Medicaid.

The Senate bill would also fund ObamaCare’s “cost-sharing” subsidies, something the law’s Democratic authors never did. That, too, would expand ObamaCare beyond what a Democratic Congress created.

So even if the bill’s spending cuts were real (they’re not; see above), we still wouldn’t really know how much the Senate bill reduces actual federal outlays. All we know for sure is that Senate Republicans want to hide how much ObamaCare spending they are preserving, and that the CBO score will likely overstate the bill’s deficit reduction.

5. If it leaves major elements of ObamaCare in place, would it lead voters to blame the ongoing failure of those provisions on (supposed) free-market reforms?

Yes.

Supporters of the Senate bill are calling it a huge win for conservative governance. 

Finished reading the Senate HC bill. Put simply: If it passes, it’ll be the greatest policy achievement by a GOP Congress in my lifetime.

— Avik Roy (@Avik) June 22, 2017

Yet the bill does almost nothing to address the fundamental flaws and instability in ObamaCare’s architecture. Community rating and other provisions of the law will continue to increase premiums, degrade the quality of coverage, and destabilize insurance markets. ObamaCare supporters, including those who also support a single-payer system, will be quick to blame ObamaCare’s failures on the conservative, free-market ideology that supposedly animates the Senate bill. Such claims will be nonsense. But the narrative will be difficult to combat. The Senate bill could therefore set back the cause of free-market health care reform by decades–yet another feature it shares with ObamaCare. 

—–

The Senate bill is not even a step in the right direction. If this is the choice facing congressional Republicans, it would be better if they did nothing. Consumers would continue to struggle under ObamaCare’s regulations, but those costs would focus attention on their source. The lines of accountability would stay far clearer than if Republicans signed off on legislation that seems designed to rescue ObamaCare rather than repeal and replace it.

One of the liberties protected by the Constitution is the right to do business in other states, on the same terms as companies based in those states. That right is enshrined in the Privileges and Immunities Clause of Article IV, section 2, one of the handful of individual rights that the Framers saw fit to safeguard even before the Bill of Rights was enacted. In fact, ensuring the opportunity to do business out-of-state on equal terms with a state’s residents was one of the principal motivations for holding the Constitutional Convention in the first place. But the U.S. Court of Appeals for the Ninth Circuit has condoned California’s violation of that right.

California enacted a set of commercial-fishing license fees that require nonresidents to pay several times more than residents. The system is explicitly discriminatory, harshly regressive, and intentionally protectionist. The Supreme Court and the Fourth Circuit, in substantively identical circumstances, have ruled these kinds of provisions to be impermissible: States must charge license fees equally to residents and nonresidents alike, or else bear the burden of justifying their discrimination (which California has made little real effort to do). But an en banc majority of the Ninth Circuit quite literally imposed the opposite rule. Not only did it uphold California’s discrimination, but it supported its holding with guesstimates of tax payments and rough calculations of economic costs that the state itself had never supplied. The result is conflict between two federal circuits and an open door for new methods of discrimination that the Constitution has always forbidden.

Now, a group of fishermen, with amicus support from Cato, is asking the Supreme Court to hear their case and strike down California’s differential commercial fishing license fees. Under the Ninth Circuit’s reasoning, everything California spends on fishery regulation is considered a “subsidy” to that industry—a subsidy paid by resident taxpayers for which the state must be compensated. This framing ignores the fact that nonresident fishermen also pay California sales tax and California income tax on income derived from in-state activities (when their income is enough to qualify for taxation, which it often isn’t) and directly contradicts controlling Supreme Court precedent. This dangerous rationale could be applied to any number of the nearly one-third of U.S. occupations currently regulated by the states, and if left unchecked could help create just the sort of balkanized national economy that the Constitution was intended to prevent.

The fact of the matter is that California is attempting to protect local business interests at the expense of nonresidents and dress up its blatantly protectionist violation of the Privileges and Immunities Clause in reasonable-sounding language about fairness. The Supreme Court should grant certiorari and remind the Ninth Circuit that this sort of behavior is constitutionally unacceptable.

One of the original arguments for educating children in traditional public schools is that they are necessary for a stable democratic society. Indeed, the English parliamentarian J.A. Roebuck argued that mass government education would improve national stability through a reduction in crime.

Public education advocates, such as Stand for Children’s Jonah Edelman and the American Federation of Teachers’ Randi Weingarten, still insist that children must be forced to attend government schools in order to preserve democratic values.

Theory

In principle, if families make schooling selections based purely on self-interest, they may harm others in society. For instance, parents may send their children to schools that shape only academic skills. As a result, children could miss out on essential moral education and, through a higher propensity to commit crimes later in life, harm others in society.

However, since families value the character of their children, they are likely to make schooling decisions based on institutions’ abilities to shape socially desirable skills such as morality and citizenship. Further, since school choice programs increase competitive pressures, we should expect the quality of character education to increase in the market for schooling. An increase in the quality of character education decreases the likelihood of criminal activity and therefore improves social order.

Evidence

There are only three studies causally linking school choice programs to criminal activity. Two examine the impacts of charter schools, and one looks at the private school voucher program in Milwaukee. Each study finds that access to a school choice program substantially reduces the likelihood that a student will engage in criminal activity later in life.

Notably, Dobbie & Fryer (2015) find that winning a random lottery to attend a charter school in Harlem completely eliminates the likelihood of incarceration for males. In addition, they find that female charter school lottery winners are less than half as likely to report having a teen pregnancy.

Note: A box highlighted in green indicates that the study found statistically significant crime reduction.

According to the only causal studies we have on the subject, school choice programs improve social order through substantial crime reduction. If public education advocates want to continue to cling to the idea that traditional public schools are necessary for democracy, they ought to explain why the scientific evidence suggests the opposite.

Of course, these impacts play a significant role in shaping the lives of individual children. Perhaps more importantly, these findings indicate that voluntary schooling selections can create noteworthy benefits for third parties as well. If we truly wish to live in a safe and stable democratic society, we ought to allow parents to select the schooling institutions that best shape the citizenship skills of their own children.

On Monday, the Supreme Court ruled that a North Carolina law preventing sex offenders from accessing social media and other websites—without any attempt to tailor restrictions to potential contact with minors—violated the First Amendment. But restrictions on the freedom of speech aren’t the only unconstitutional deprivations sex offenders face.

In 1994, Minnesota passed what has become arguably the most aggressive and restrictive sex-offender civil-commitment statute in the country. The Minnesota Sex Offender Program (MSOP) provides for the indefinite civil commitment of “sexually dangerous” individuals, over and beyond whatever criminal sentence they may have already completed.

And while there is technically a system in place whereby committed individuals can petition for release or a loosening of their restrictions, in the more than 20 years that the MSOP has existed, only one person has ever been fully discharged (someone in the program for offenses committed as a minor, and he was only discharged after a court challenge). As Craig Bolte, one person committed in the MSOP, has testified, there is a distinct feeling that “the only way to get out is to die.”

The Supreme Court has held that states have the authority to commit individuals against their will outside the traditional criminal justice context, but only for the purpose of keeping genuinely dangerous people off the streets while undergoing rehabilitative treatment. Punishment and deterrence are legitimate goals exclusively of the criminal justice system, so any deprivation of liberty for either of those two purposes must follow only from that system, with all the procedural protections our Constitution requires.

What sets Minnesota’s program apart from other schemes that have been upheld is that it doesn’t provide for any sort of periodic assessment to determine who does or doesn’t meet the requirements for discharge. By the state’s own admission, hundreds of civilly committed individuals have never received an assessment of their risk to the public, and hundreds more have received assessments only sporadically.

The MSOP is aware that at least some of the people in its custody satisfy statutory-discharge criteria, yet has taken no steps to determine who they are, let alone begin discharge proceedings. For these reasons, Kevin Karsjens and other similarly committed individuals have brought a federal class action challenging the MSOP as an irrational violation of their right to freedom from bodily restriction. They prevailed in the trial court, but the U.S. Court of Appeals for the Eighth Circuit reversed, stating that the plaintiffs have no liberty interest in freedom from physical restraint—not that their liberty interest must be balanced against the state’s interest in protecting the public from violence, but that for sex offenders, that liberty interest simply does not exist.

The plaintiffs now seek Supreme Court review. Cato, joined by the Reason Foundation, has filed an amicus brief in support of the committed individuals. The lack of periodic risk assessment and the punitive nature of the state’s policies represent an unconstitutional attempt to exact effectively criminal penalties on individuals who have not been provided the full procedural protections of criminal law.

The high court should intervene and repair the damage done by the unfettered confinement of sex offenders and restore the appropriate level of constitutional scrutiny to serious deprivations of liberty.

The Supreme Court will decide whether to take up Karsjens v. Piper when it returns from its summer recess.

Leftists don’t have many reasons to be cheerful.

Global economic developments keep demonstrating (over and over again) that big government and high taxes are not a recipe for prosperity. That can’t be very encouraging for them.

They also can’t be very happy about the Obama presidency. Yes, he was one of them, and he was able to impose a lot of his agenda in his first two years. But that experiment with bigger government produced very dismal results. And it also was a political disaster for the left since Republicans won landslide elections in 2010 and 2014 (you could also argue that Trump’s election in 2016 was a repudiation of Obama and the left, though I think it was more a rejection of the status quo).

But there is one piece of good news for my statist friends. The tax cuts in Kansas have been partially repealed. The New York Times is overjoyed by this development.

The Republican Legislature and much of Kansas has finally turned on Gov. Sam Brownback in his disastrous five-year experiment to prove the Republicans’ “trickle down” fantasy can work in real life — that huge tax cuts magically result in economic growth and more, not less, revenue. …state lawmakers who once abetted the Brownback budgeting folly passed a two-year, $1.2 billion tax increase this week to begin repairing the damage. …It will take years for Kansas to recover.

And you won’t be surprised to learn that Paul Krugman also is pleased.

Here’s some of what he wrote in his NYT column.

…there was an idea, a theory, behind the Kansas tax cuts: the claim that cutting taxes on the wealthy would produce explosive economic growth. It was a foolish theory, belied by decades of experience: remember the economic collapse that was supposed to follow the Clinton tax hikes, or the boom that was supposed to follow the Bush tax cuts? …eventually the theory’s failure was too much even for Republican legislators.

Another New York Times columnist did a victory dance as well.

The most momentous political news of the past week…was the Kansas Legislature’s decision to defy the governor and raise income taxes… Kansas, under Gov. Sam Brownback, has come as close as we’ve ever gotten in the United States to conducting a perfect experiment in supply-side economics. The conservative governor, working with a conservative State Legislature, in the home state of the conservative Koch brothers, took office in 2011 vowing sharp cuts in taxes and state spending, except for education — and promising that those policies would unleash boundless growth. The taxes were cut, and by a lot.

Brownback’s supply-side experiment was a flop, the author argues.

The cuts came. But the growth never did. As the rest of the country was growing at rates of just above 2 percent, Kansas grew at considerably slower rates, finally hitting just 0.2 percent in 2016. Revenues crashed. Spending was slashed, even on education… The experiment has been a disaster. …the Republican Kansas Legislature faced reality. Earlier this year it passed tax increases, which the governor vetoed. Last Tuesday, the legislators overrode the veto. Not only is it a tax increase — it’s even a progressive tax increase! …More than half of the Republicans in both houses voted for the increases.

If you read the articles, columns, and editorials in the New York Times, you’ll notice there isn’t much detail on what actually happened in the Sunflower State: lots of rhetoric, few specifics.

So let’s go to the Tax Foundation, which has a thorough review including this very helpful chart showing tax rates before the cuts, during the cuts, and what will now happen in future years (the article also notes that the new legislation repeals the exemption for small-business income).

We know that folks on the left are happy about tax cuts being reversed in Kansas. So what are conservatives and libertarians saying?

The Wall Street Journal opined on what really happened in the state.

…national progressives are giddy. Their spin is that because the vote reverses Mr. Brownback’s tax cuts in a Republican state that Donald Trump carried by more than 20 points, Republicans everywhere should stop cutting taxes. The reality is more prosaic—and politically cynical. …At bottom the Kansas tax vote was as much about unions getting even with the Governor over his education reforms, which included making it easier to fire bad teachers.

And the editorial also explains why there wasn’t much of an economic bounce when Brownback’s tax cuts were implemented, but suggests there was a bit of good news.

Mr. Brownback was unlucky in his timing, given the hits to the agricultural and energy industries that count for much of the state economy. But unemployment is still low at 3.7%, and the state has had considerable small-business formation every year since the tax cuts were enacted. The tax competition across the Kansas-Missouri border around Kansas City is one reason Missouri cut its top individual tax rate in 2014.

I concur. When I examined the data a few years ago, I also found some positive signs.

In any event, the WSJ is not overly optimistic about what this means for the state.

The upshot is that supposedly conservative Kansas will now have a higher top marginal individual income-tax rate (5.7%) than Massachusetts (5.1%). And the unions will be back for another increase as spending rises to meet the new greater revenues. This is the eternal lesson of tax increases, as Illinois and Connecticut prove.

And Reason published an article by Ben Haller with similar conclusions.

What went wrong? First, the legislature failed to eliminate politically popular exemptions and deductions, making the initial revenue drop more severe than the governor planned. The legislature and the governor could have reduced government spending to offset the decrease in revenue, but they also failed on that front. Government spending per capita remained relatively stable in the years following the recession to the present, despite the constant fiscal crises. In fact, state expenditure reports from the National Association of State Budget Officers show that total state expenditures in Kansas increased every year except 2013, where expenditures decreased a modest 3 percent from 2012. It should then not come as a surprise that the state faced large budget gaps year after year. …tax cuts do not necessarily pay for themselves. Fiscal conservatives, libertarians, …may have the right idea when it comes to lowering rates to spur economic growth, but lower taxes by themselves are not a cure-all for a state’s woes. Excessive regulation, budget insolvency, corruption, older demographics, and a whole host of other issues can slow down economic growth even in the presence of a low-tax environment.

Since Haller mentioned spending, here’s another Tax Foundation chart showing inflation-adjusted state spending in Kansas. Keep in mind that Brownback was elected in 2010. The left argued that he “slashed” spending, but that assertion obviously is empty demagoguery.

Now time for my two cents.

Looking at what happened, there are three lessons from Kansas.

  1. A long-run win for tax cutters. If this is a defeat, I hope there are similar losses all over the country. If you peruse the first chart in this column, you’ll see that tax rates in 2017 and 2018 will still be significantly lower than they were when Brownback took office. In other words, the net result of his tenure will be a permanent reduction in the tax burden, just like with the Bush tax cuts. Not as much as Brownback wanted, to be sure, but leftists are grading on a very strange curve if they think they’ve won any sort of long-run victory.
  2. Be realistic and prudent. It’s a good idea to under-promise and over-deliver. That’s true for substance and rhetoric.
    1. Don’t claim that tax cuts pay for themselves. That only happens in rare circumstances, usually involving taxpayers who have considerable control over the timing, level, and composition of their income. In the vast majority of cases, tax cuts reduce revenue, though generally not as much as projected once “supply-side” responses are added to the equation.
    2. Big tax cuts require some spending restraint. Since tax cuts generally will lead to less revenue, they probably won’t be durable unless there’s eventually some spending restraint (which is one of the reasons why the Bush tax cuts were partially repealed and why I’m not overly optimistic about the Trump tax plan).
    3. Tax policy matters, but so does everything else. Lower tax rates are wonderful, but there are many factors that determine a jurisdiction’s long-run prosperity. As just mentioned, spending restraint is important. But state lawmakers also should pay attention to many other issues, such as licensing, regulation, and pension reform.
  3. Many Republicans are pro-tax big spenders. Most fiscal fights are really battles over the trend line of spending. Advocates of lower tax rates generally are fighting to reduce the growth of government, preferably so it expands slower than the private sector. Advocates of tax hikes, by contrast, want to enable a larger burden of government spending. What happened in Kansas shows that it’s hard to starve the beast if you’re not willing to put government on a diet.

By the way, all three points are why the GOP is having trouble in Washington.

The moral of the story? As I noted when writing about Belgium, it’s hard to have good tax policy if you don’t have good spending policy.

Recent terrorist attacks in Europe have increased death tolls and boosted fears on both sides of the Atlantic. Last year, I used common risk analysis methods to measure the annual chance of being murdered in an attack committed on U.S. soil by foreign-born terrorists. This blog post is a back-of-the-envelope estimate of the annual chance of being murdered in a terrorist attack in Belgium, France, Germany, Sweden, and the United Kingdom. The annual chance of being murdered in a terrorist attack in the United States from 2001 to 2017 is about 1 in 1.6 million per year. Over the same period, the chances are much lower in European countries.

Methods and Sources

Belgium, France, and the United Kingdom are included because they have suffered some of the largest terrorist attacks in Europe in recent years. Sweden and Germany are included because they have each allowed in large numbers of refugees and asylum seekers who could theoretically be terrorism risks.

The main source of data is the Global Terrorism Database at the University of Maryland for the years 1975 to 2015, with the exception of 1993; I used the RAND Database of Worldwide Terrorism to fill in 1993. I have not compiled the identities of the attackers, any other information about them, or the number of convictions for planning attacks in Europe. The perpetrators are excluded from the fatality counts where possible. Those databases do not yet include 2016 and 2017, so I relied on Bloomberg and Wikipedia for a rough estimate of the number of fatalities in terrorist attacks in each country in those two years through June 20, 2017. The United Nations Population Division provided the population estimates for each country per year.

Terrorism Fatality Risk for Each Country

This section displays the number of terrorist fatalities and the annual chance of a resident of each country being murdered. The results in this section answer three important questions: What is the annual chance of having been killed in a terrorist attack from 1975 through 2017 in each European country? Has the annual chance of being killed in a terrorist attack gone up since the 9/11 attacks? How does the risk in Europe compare to the risk in the United States?

European Terrorism from 1975 through June 20th, 2017

Residents of the United Kingdom have suffered the most from terrorism, facing the highest annual chance of dying at 1 in 964,531 per year (Table 1). Almost 78 percent of the European fatalities reported in Table 1 were residents of the United Kingdom, and about 95 percent of those British fatalities occurred before 2001.

Table 1: Fatalities and Annual Chance of Dying in a Terrorist Attack, 1975–June 20th, 2017

Country           Fatalities   Annual Chance of Dying
United Kingdom    2,632        1 in 964,531
Belgium           64           1 in 6,936,545
France            506          1 in 4,984,301
Sweden            20           1 in 19,001,835
Germany           148          1 in 23,234,378
United States     3,568        1 in 3,241,363

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia, author’s calculations.

The deadliest terrorist attack across these five European countries was the 1988 bombing of Pan Am 103 over Lockerbie, Scotland, which killed 270 people; an additional 110 residents of these five countries were murdered in terrorist attacks that year. The next deadliest year was 1976, with 354 victims. The third deadliest year was 1975, when there were 252 murders in terrorist attacks (Figure 1). The number of fatalities in European terrorist attacks rose to 172 in 2015 and fell to 133 in 2016. Every death in a terrorist attack is a tragedy, but Europeans should take comfort in the fact that their chances of dying in such an attack are minuscule.

Figure 1: Terrorism Fatalities in Belgium, France, Germany, Sweden, and the United Kingdom, 1975–2017

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia.

Terrorism Risk in Europe versus the United States

The annual chance of being murdered in any terrorist attack in the United States from 2001 to 2017 is about 1 in 1.6 million per year (Table 2). The annual chances were much lower in every European country during the same period. Table 2 also includes the United States without the fatalities from the 9/11 attacks, as those attacks were such extremely deadly outliers that they are unlikely to be repeated; excluding them allows a potentially better cross-country comparison of annual fatality chances. Strikingly, the annual chance of an American being murdered in a terrorist attack is almost identical across the two periods when 9/11 is excluded, evidence that those attacks were outliers that punctuated an otherwise steady trend.

Prior to 2001, the annual chance of dying in a terrorist attack in every country in Europe was higher than in the United States, with the sole exception of Sweden. When 9/11 occurred, the relative risk to residents in these countries flipped and the United States became more dangerous.

Table 2: Annual Chance of Dying in a Terrorist Attack by Period

Country                      1975–2000          2001–2017
United States                1 in 19,767,153    1 in 1,602,021
France                       1 in 6,059,061     1 in 4,006,878
Belgium                      1 in 9,611,873     1 in 4,373,511
United Kingdom               1 in 590,389       1 in 8,796,562
Sweden                       1 in 22,145,655    1 in 15,858,016
United States (exc. 9/11)    1 in 19,767,153    1 in 19,772,468
Germany                      1 in 17,338,091    1 in 47,429,484

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia, author’s calculations.  Through June 20th, 2017.

Terrorism Risk Since 9/11

Many think that Islamic terrorism since 2001 is deadlier than past terrorism. This is certainly true in the United States, where at least 3,246 people were killed on U.S. soil in all terror attacks from 2001 through 2017, compared to only 322 from 1975 through 2000. Those differences are reflected in the greater, but still small, annual chance of an American dying from terrorism in the later period (Table 2). The chances of being murdered in a terrorist attack are also higher in France, Belgium, and Sweden, but they are still tiny. Residents of the United Kingdom and Germany were less likely to die, per year, in a terrorist attack from 2001 through 2017.

The largest decline in risk was in the United Kingdom, where the annual chance of being killed by terrorists went from 1 in 590,389 per year prior to 2001 to 1 in 8,796,562 per year from 2001 through June 20th, 2017. For 2016 and 2017 (so far), the chance of a British resident dying in a terrorist attack is about 1 in 3.5 million per year. The chance of a British resident being murdered in a non-terrorist homicide in 2013 was about 133 times as great as his or her chance of being murdered in a terrorist attack that year.

Conclusion

The chance of an American being murdered in a terrorist attack is greater than for a European resident of any of these five countries from 2001 through June 20th, 2017. Future terrorist attacks are unlikely to be as deadly as 9/11 even though there is a fat-tailed risk. When the unprecedented deadliness of 9/11 is excluded, the annual risk of being killed in a terrorist attack is reversed and residents of every European country except for Germany have a greater chance of being murdered than an American on U.S. soil.

The number of deaths from terrorism is so tiny that the addition or subtraction of a few murders can drastically change the annual chance of being murdered, which is evidence of how manageable the threat from terrorism actually is. If terrorism were as common or as deadly as people erroneously believe, another attack or two would not make a big difference in the annual chances.

A total of 3,370 residents of Belgium, France, Germany, Sweden, and the United Kingdom were murdered by terrorists from 1975 to June 20th, 2017. About 231 million people lived in those five countries in 2015. If they were combined into a single country, the annual chance of dying would be about 1 in 2.8 million per year over that period. The annual chance of being killed in a terrorist attack was a mere 1 in 8.3 million per year if those five European countries were judged as one state from 2001 through June 20th, 2017. That is a lower risk than the 1 in 1.6 million per year chance of an American being murdered in a terrorist attack on U.S. soil from 2001 through 2017. Even in Europe, terrorism is a relatively small and manageable threat.
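The combined-Europe figure above can be cross-checked with a short sketch of the arithmetic behind these "1 in N" odds. The ~42.5-year period length and the single 2015 population figure are rounding assumptions, so the result is only a rough check, not an exact reproduction of the author's year-by-year calculation.

```python
# Sketch of the annual-risk arithmetic: the annual chance of dying is
# "1 in N", where N = person-years of exposure / fatalities.
# Inputs are the combined-Europe figures from the text; the period
# length (1975 through June 20, 2017, ~42.5 years) is approximate.

def annual_odds(fatalities, avg_population, years):
    """Return N such that the annual chance of dying is '1 in N'."""
    return avg_population * years / fatalities

odds = annual_odds(fatalities=3370, avg_population=231e6, years=42.5)
print(f"about 1 in {odds:,.0f} per year")
```

With these rounded inputs the calculation lands near 1 in 2.9 million per year; using year-by-year populations rather than the single 2015 figure accounts for the small difference from the roughly 1 in 2.8 million reported above.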

There has been debate this week about how many libertarians there are. The answer is: it depends on how you measure it and how you define libertarian. The overwhelming body of literature, however, using a variety of methods and definitions, suggests that libertarians comprise about 10-20% of the population, though estimates range from 7% to 22%.

Furthermore, if one imposes the same level of ideological consistency on liberals, conservatives, and communitarians/populists that many do on libertarians, these groups too comprise similar shares of the population.

In this post I provide a brief overview of different methods academics have used to identify libertarians and what they found. Most methods start from the premise that libertarians are economically conservative and socially liberal. Despite this, different studies find fairly different results. What accounts for the difference?

1) People use different definitions of libertarian

2) They use different questions in their analysis to identify libertarians

3) They use very different statistical methods.

Let’s start with a few questions: How do you define a libertarian? Is there one concrete libertarian position on every policy issue?

What is the “libertarian position” on abortion? Is there one? What is the “libertarian position” on Social Security? Must a libertarian support abolishing the program, or might a libertarian support private accounts, or means testing, or sending it to the states instead? A researcher will find fewer libertarians in the electorate if they demand that libertarians support abolishing Social Security rather than means testing or privatizing it. 

Further, why are libertarians expected to conform to an ideological litmus test but conservatives and liberals are not? For instance, what is the “conservative position” on Social Security? Is there one? When researchers use rigid ideological definitions of liberals and conservatives, they too make up similar shares of the population as libertarians. Thus, as political scientist Jason Weeden has noted, researchers have to make fairly arbitrary decisions about where the cut-off points should be for the “libertarian,” “liberal,” or “conservative” position. This pre-judgement strongly determines how many libertarians researchers will find.

Next, did researchers simply ask people if they identify as libertarian, or did they ask them public policy questions (a better method)? If the latter, how many issue questions did they ask? Then, what questions did they ask?

For instance, what questions are used to determine whether someone is “liberal on social issues”? Did the researcher ask survey takers about legalizing marijuana, or about affirmative action for women in the workplace? Libertarians will answer these questions very differently, and that will affect the number of libertarians researchers find.

While there is no perfect method, the fact that academics using a variety of questions, definitions, and statistical techniques still find that the number falls somewhere between 7% and 22% gives us some confidence that the number of libertarians is considerably larger than zero.

Next, I give a brief overview of the scholarly research on the estimated shares of libertarians, conservatives, liberals, and communitarians in the American electorate. I organize the findings by the methods used, starting with the most empirically rigorous:

Ask people to answer a series of questions on a variety of policy topics and input their responses into a statistical algorithm

In these studies, researchers ask survey respondents a variety of issue questions on economic and social/cultural issues. Then they input people’s answers into a statistical clustering technique and allow an algorithm to find the number of libertarians. This is arguably the strongest method for identifying libertarians.

  1. Political scientists Stanley Feldman and Christopher Johnson use a sophisticated statistical method to find ideological groups in the electorate (latent class analysis). They find six ideological groups based on answers to a variety of questions on economic and social issues. Feldman and Johnson’s results indicate that about:
  • 15% are likely libertarians (conservative on economics and liberal on social issues)
  • 23% are likely liberals
  • 17% are likely conservatives
  • 8% are communitarians/populists (liberal on economics and very conservative on social issues)
  • 13% are economic centrists but social liberals
  • 24% are economic centrists but lean socially conservative.   
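Latent class analysis is a full statistical model and beyond a blog sketch, but the core idea of this first method (letting an algorithm, rather than the researcher, draw the group boundaries) can be illustrated with a toy k-means clustering over hypothetical (economic, social) opinion scores. Everything here is illustrative, not the procedure or data of any study cited above.

```python
# Toy stand-in for the clustering step: group respondents by their
# (economic, social) opinion scores and let the algorithm find the
# blocs. The data points below are invented for illustration.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each respondent to the nearest cluster center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                                + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# two obvious opinion blocs on an (economic, social) scale
data = [(1.0, -1.0), (0.9, -0.8), (1.1, -1.1),    # econ-right, socially liberal
        (-1.0, 1.0), (-0.9, 1.1), (-1.1, 0.9)]    # econ-left, socially conservative
centers, clusters = kmeans(data, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

The algorithm recovers the two blocs without being told where the boundary lies, which is what makes this family of methods less dependent on a researcher's arbitrary cutoffs.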

Ask people to answer a series of questions on a variety of policy topics and plot their average responses on a 2-dimensional plot

In these studies, researchers 1) average responses to multiple questions on economics and then 2) average responses to multiple questions on social/cultural/identity/lifestyle issues. They then take the two averaged scores to plot respondents on a 2-dimensional graph (Economic Issues by Social Issues).

  1. Political scientist Jason Weeden averages people’s responses to questions on economics (income redistribution and government assistance to the poor) and on social issues (abortion, marijuana legalization, the morality of premarital sex) found in the General Social Survey. He finds:
  • 11% of Americans are libertarian
  • 11% are conservative
  • 14% are liberal
  • 9% are communitarian/populist
  • Remaining people are roughly evenly distributed between these groups

  1. Political scientists William Clagget, Par Jason Engle, and Byron Shafer use answers to a variety of questions on economics and “culture” issues from the American National Election Studies from 1992-2008 to determine that:
  • 10% of the population is libertarian
  • 11% is populist
  • 30% is conservative
  • 30% is liberal
  • (Their methods are unclear and their “culture” index may include questions about spending on crime and support for affirmative action for women.)
  1. Political scientists William Maddox and Stuart Lilie average responses to three questions on government economic intervention and three questions about personal freedom from the American National Election Studies and find that:
  • 18% of the population is libertarian
  • 24% is liberal
  • 17% is conservative
  • 26% is communitarian 
  1. The Public Religion Research Institute added (rather than averaged) responses to 9 questions on social and economic issues and coded cumulative scores of 9-25 as libertarian. Doing this, they find that:
  • 7% of Americans are libertarian
  • 15% lean libertarian
  • 17% lean communalist
  • 7% are communalist
  • 54% have mixed attitudes
  1. For a previous Cato blog post, I conducted a similar analysis and created three separate estimates. Each used averaged responses to economic questions, plotted alongside average answers to either 1) social issues questions, 2) race/identity questions, or 3) criminal justice and racial equality questions.
  • Using economic and social issues I find:
    • 19% Libertarian
    • 20% Communitarian
    • 31% Conservative
    • 30% Liberal
  • Using economic and race issues, I find:
    • 19% Libertarian
    • 15% Communitarian
    • 33% Conservative
    • 33% Liberal
  • Using economic and criminal justice issue positions I find:
    • 24% Libertarian
    • 15% Communitarian
    • 28% Conservative
    • 33% Liberal
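The two-dimensional averaging approach described above can be sketched in a few lines. The -1 to +1 scale, the question batteries, and the zero cutoff are all hypothetical choices for illustration, not those of any study cited here; as noted earlier, moving these cutoffs is exactly the arbitrary decision that drives the different headline numbers.

```python
# Illustrative sketch only: classify a respondent into an ideological
# quadrant by averaging answers on two issue batteries. The scale
# (+1 = conservative, -1 = liberal) and the zero cutoff are hypothetical.

def classify(econ_answers, social_answers, cutoff=0.0):
    """Average each battery, then place the respondent in a quadrant."""
    econ = sum(econ_answers) / len(econ_answers)
    social = sum(social_answers) / len(social_answers)
    if econ > cutoff and social < cutoff:
        return "libertarian"    # economically conservative, socially liberal
    if econ > cutoff and social > cutoff:
        return "conservative"
    if econ < cutoff and social < cutoff:
        return "liberal"
    if econ < cutoff and social > cutoff:
        return "communitarian"  # economically liberal, socially conservative
    return "centrist"           # sits exactly on a cutoff line

print(classify([0.8, 0.5], [-0.6, -0.4]))  # libertarian
```

Shifting the cutoff, or changing which questions enter each battery, reclassifies respondents near the boundaries, which is why the studies above report such different shares from the same basic method.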

Ask people to answer a question about economic policy and a question about social policy

While not as rigorous as asking people multiple questions, this is another quick way to observe the diversity of ideological opinion in surveys.

  1. Nate Silver of FiveThirtyEight, using two questions from the General Social Survey (support for same-sex marriage, and whether government ought to reduce income inequality with high taxes on the rich and income assistance to the poor), finds:
  • 22% are libertarian
  • 25% conservative
  • 34% liberal
  • 20% communitarian 

  1. David Kirby and David Boaz use answers to 3 survey questions and find that 15% of the population are libertarians (agree that less government is better, and that free markets can better solve economic problems, and that we should be tolerant of different lifestyles)

Ask people if they identify as libertarian and know what the word means

The Pew Research Center found that 11% of Americans agree that the word “libertarian describes me well” and know libertarians “emphasize individual freedom by limiting the role of government.”

Ask people if they identify as socially liberal and fiscally conservative, an oft-used definition of libertarianism

A 2011 Reason-Rupe poll found that 8% of Americans said they were “conservative” on economic issues and “liberal” on social issues. But the same method found that 9% identified as “liberal” on both social and economic issues, 2% as liberal on economic issues and conservative on social issues, and 31% as conservative on both. The remainder were somewhere in the middle. These results are consistent with polls from Rasmussen and Gallup, which find a public preference for the word “conservative” over “liberal.” This means many people who endorse liberal policies are inclined to self-identify as moderate or conservative.

Conclusions

In sum, the overwhelming body of empirical evidence suggests that libertarians’ share of the electorate is likely somewhere between 10% and 20%, and that the conservative and liberal shares aren’t much greater. Libertarians exist, and in significant numbers, but you have to know what you’re looking for.

Rumor has it that tomorrow is the day Senate Republican leaders will unveil the health care bill they have been busily assembling behind closed doors. So few details have emerged that President Trump could perhaps learn something from Senate Majority Leader Mitch McConnell about how to prevent leaks. Even GOP senators are complaining that they haven’t been allowed to see the bill.

Here are five questions I will be asking about the Senate health care bill if and when it sees the light of day.

  1. Would it repeal the parts of ObamaCare—specifically, community rating—that preclude secure access to health care for the sick by causing coverage to become worse for the sick and the Exchanges to collapse?
  2. Would it make health care more affordable, or just throw subsidies at unaffordable care?
  3. Would it actually sunset the Medicaid expansion, or keep the expansion alive long enough for a future Democratic Congress to rescue it?
  4. Tax cuts are almost irrelevant—how much of ObamaCare’s spending would it repeal?
  5. If it leaves major elements of ObamaCare in place, would it lead voters to blame the ongoing failure of those provisions on (supposed) free-market reforms?

Depending on how Senate Republicans—or at least, the select few who get to write major legislation—answer those questions, the bill could be a step in the right direction. Or it could be ObamaCare-lite.

The Trump administration’s recent proposal on infrastructure stressed federalism. It said that the “federal government now acts as a complicated, costly middleman between the collection of revenue and the expenditure of those funds by states and localities. Put simply, the administration will be exploring whether this arrangement still makes sense, or whether transferring additional [infrastructure] responsibilities to the states is appropriate.”

Indeed, the federal-middleman arrangement does not make sense. With regard to highways, federal funds go not just to the 47,000-mile interstate highway system (IHS), but also to the vast 3.9 million-mile “federal-aid highway system.” But there are few advantages to federal funding over state funding for most of the nation’s highways, which are owned by the states and mainly serve state-local needs.

As such, there have been many proposals to devolve at least the non-IHS activities to the states. In such “turnback” proposals, the federal government would cut its highway spending and its gas tax, and allow states to fill the void.

The turnback idea has been around awhile. A major 1987 study by the Advisory Commission on Intergovernmental Relations (ACIR) proposed devolving highway funding except for IHS funding to the states. The ACIR was led by a bipartisan mix of federal, state, and local elected officials, and was known for its top-notch staff experts.

Thirty years later, the ACIR report contains sound advice for today’s policymakers. Here are some excerpts:

The Commission concludes that a devolution of non-Interstate highway responsibilities and revenue sources to the states is a worthwhile goal and an appropriate step toward restoring a better balance of authority and accountability in the federal system (page 2).

It is the sense of the Commission that the Congress should move toward the goal of repealing all highway and bridge programs that are financed from the federal Highway Trust Fund, except for: (1) the Interstate highway system, (2) the portion of the bridge program that serves the Interstate system, (3) the emergency relief highway program, and (4) the federal lands highway program. The Commission urges that the Congress simultaneously relinquish an adequate share of the federal excise tax on gasoline—about 7 cents of the federal tax on motor fuel plus an additional 1 cent for a grant based on lane mileage—to finance the above programs (page 2). [Note: the federal gas tax at the time was just 9.1 cents per gallon].

With state and local governments freed from federal requirements, some of which are unsuitable and expensive, turnbacks offer the possibility of more flexible, more efficient, and more responsive financing of those roads that are of predominantly state or local concern. Investment in highways could be matched more closely to travel demand and to the benefits received by the communities served by those roads (page 3).

Highway turnbacks potentially can add both certainty and flexibility—as well as efficiency and accountability—to the financing of the nation’s transportation infrastructure as well as to the design and operation of both new and modernized roads (page 4).

In time, federal requirements and sanctions have accumulated, which have limited state and local governments’ flexibility in road construction and operation, have restricted these governments’ ability to address specific transportation needs, and have probably increased the cost and time needed for road improvements … The design standards required for receiving federal road grants may often be higher than those actually employed for roads built with state or local funds alone. The result can be that some federally subsidized highways are “gold-plated,” that is, built more lavishly than would be the case if state and local governments made the tradeoffs involved in highway plans and financed their choices by taxes levied on their own constituents (page 11).

[Federal highway regulations] may intrude the most broadly upon the choices of state-local governments and citizens. Examples include the rule that federally aided projects be preceded by an environmental analysis and the Davis-Bacon requirement to pay union wage rates, or the equivalent. The Federal Highway Administration has estimated that the Davis-Bacon requirement added between $293 and $586 million to road costs in FY 1986 (page 12).

The federal restriction on state and local road choices occurs not solely because federal standards are high, but because they tend to be inflexible, inappropriate to circumstances that vary from place to place, and more responsive to national interest groups than to the users of specific highways (page 13).

There is “fiscal equivalence” when the same political community—the same jurisdiction—finances a governmental program, is responsible for its operation, and receives the benefits of that program … The tie between taxing and spending promotes efficiency and careful choices, whether spending levels are high or low. Because various areas’ highway needs and preferences are so different, a nationally uniform program cannot tailor taxing and spending to each other, as state and local programs can (page 22).

With the Interstate system used for long-distance travel, most of the benefits of other federally aided roads are contained within state boundaries. These non-Interstate, federally aided roads should be considered for turnback. Absent federal funding, there is reason to believe that state-local responsibility for the devolved highways would not impair nationwide mobility or interstate commerce. Devolution would move toward “fiscal equivalence.” The same jurisdiction that finances a set of roads will benefit from them. Thus highway spending and highway services would be more closely linked than is presently the case. Efficiency would be enhanced as would political, fiscal, and program accountability (page 48).

The diverse goals and constituencies served by the federal highway program has led to a complex operation and has engendered controversy over the program’s procedures and allocation formulas … Devolution … would sharpen goals and priorities (page 48).

The ACIR report (“Devolving Selected Federal-Aid Highway Programs and Revenue Bases: A Critical Appraisal”) is here.

The federal-government-managed National Flood Insurance Program (NFIP) is $25 billion in debt, stokes moral hazard, and entails a regressive wealth transfer favoring coastal areas. The NFIP is set to expire at the end of September, offering policymakers an important chance to rethink the program. The House Financial Services Committee is considering the Flood Insurance Market Parity and Modernization Act on Wednesday; the current version of the bill takes important steps toward moving the U.S. to a private flood insurance market. Private insurance would improve on the NFIP by ending transfers from general taxpayers to the wealthy and the coasts and by limiting moral hazard.

Private insurance functions as a market-driven regulator of risk. Private insurers set premiums to accurately reflect risk, forcing economic agents to internalize the risk they choose to assume. For instance, auto insurance premiums depend both on a driver’s record and on other factors that correlate with risk, such as age or region of the country.

The enactment of the NFIP in 1968 reflected a belief that a centrally planned insurance program could better fulfill the regulatory function of insurance than the private market. Government-managed insurance could, it was held at the time, “limit future flood damages without hampering future economic development” and “prompt an adjustment in land use to reduce individual and public losses from floods,” reported a Housing and Urban Development study integral to the program’s design.

However, the NFIP’s fifty-year record shows why the reasoning behind the program’s creation was misguided. The NFIP is beset by many design flaws, especially in how premiums are priced. About 20% of all NFIP policies are explicitly subsidized, receiving a 60-65% discount off the NFIP’s typical rate. These subsidies are not targeted at poor homeowners; they instead depend on the age of a property, and they turn out to be wildly regressive.

Even the 80% of the NFIP’s so-called “full risk” properties are not priced accurately. For instance, despite their name the full risk rates do not include a loading charge to cover losses in especially bad years, so even these insurance policies are money-losers in the long run.

Moreover, the NFIP’s rates are not set on a property-by-property basis. Instead, they reflect average historical losses within a property’s risk-based categories. As a result, while the subsidies and lack of loading charge mean that the NFIP generally undercharges risk, in some instances premiums are actually overpriced.

Debt is not the only consequence of the NFIP’s misguided premiums. The systemic underpricing of insurance causes moral hazard, by masking the cost of flood risk and encouraging overdevelopment in flood-prone areas. Because the average home in the NFIP is much more valuable than an average American home, the program is regressive on the whole. And since a disproportionate number of properties in the NFIP are on the southeastern coast, wealth is transferred from the rest of the country to homeowners near the coast in those states.

Congress could, theoretically, fix some of these design problems, but past attempts to reform the NFIP to more closely resemble a private insurance company failed miserably, and exemplify why in practice government rarely succeeds in competently managing what should be private business. For instance, in 2012 Congress passed the Biggert-Waters Flood Insurance Reform Act, which required the NFIP to end subsidies and to begin including a catastrophe loading surcharge. However, due to interest group pressure Congress reversed itself just two years later, halting some reforms and getting rid of others outright. The quick backtrack was a classic example of government failing to act in the public interest due to concentrated benefits and diffused costs.

However, one positive aspect of the 2012 reforms has persisted. The Biggert-Waters law ended the NFIP’s de facto monopoly by allowing property owners to meet mandatory purchase requirements with private market insurance. Private insurers have since returned to the market, successfully competing with the NFIP.

Recent innovations in catastrophe modeling and catastrophe-risk hedging mean that private market flood insurance is more viable than ever. Insurance industry experts suggest that private insurers could cover most properties now in the NFIP, and they note that U.S. flood risk is the largest growth area for worldwide private reinsurers.

A forthcoming Cato Policy Analysis discusses technological innovations in the private flood insurance industry and the social benefits of moving to private flood insurance and terminating the NFIP. If that is politically impossible, it suggests that any reauthorization of the NFIP should at least include measures that level the playing field between the NFIP and private alternatives.

Measures to encourage private competition include allowing a more flexible array of private coverage terms to meet mandatory purchase requirements, mandating that FEMA release property-level flood data to private insurers, and allowing firms that contract with the NFIP to also issue their own insurance plans. The Flood Insurance Market Parity and Modernization Act contains many of these measures, and would represent an excellent step towards ending a system that subsidizes wealthy coastal homeowners to take imprudent risks.

Special thanks to Ari Blask, who co-authored the forthcoming report and provided copious assistance on this blog post as well. 

It comes as no surprise that the Supreme Court has agreed to hear the case of Gill v. Whitford, in which a district court struck down the Wisconsin legislature’s partisan gerrymander. Conservative justices want to hear the case as a way to correct an error, while liberals see it as their last best chance to tee up a landmark constitutional case on redistricting while Anthony Kennedy is still on the Court. Within hours, however, the grant of review was followed by a kicker – an order staying the court order below, over dissents from the four liberals – that calls into question whether the momentum is really with those hoping to change Kennedy’s mind.

Last time around, in 2004’s Vieth v. Jubelirer, the Court foreshadowed this day. Four Justices led by Scalia declared that for all the evils of political gamesmanship in drawing district lines – a practice already familiar before the American Revolution – there was and is no appropriately “justiciable” way for the Court to correct things; it would be pulled into a morass of subjective and manipulable standards that could not be applied in a practical and consistent way and would cost it dearly in political legitimacy. Justice Anthony Kennedy, in a separate concurrence, agreed in dismissing the Pennsylvania case at hand, and said the Court was “correct to refrain from directing this substantial intrusion into the Nation’s political life” that would “commit federal and state courts to unprecedented intervention in the American political process.” But he left the door open to some future method of judicial relief “if some limited and precise rationale were found to correct an established violation of the Constitution.”

That set up a target for litigators and scholars to shoot for: can a formula be found that is “limited and precise” enough, and based on an “established” enough constitutional rationale, to convince Justice Kennedy? After all, the Court’s 1962 Baker v. Carr one-person-one-vote decision on districting had been an unprecedented intervention in the American political process, but also one that could be implemented by a simple formula yielding consistent outcomes and little need for ongoing supervision (take the number of people in a state and divide by the number of districts).  

Plaintiffs in the Wisconsin case are hoping that a newly devised index they call the “efficiency gap” can serve as an adequately objective measure of whether partisan gerrymandering has taken place, at least when accompanied by evidence of partisan intent. Even if courts accept this, it is another big jump to the confidence that they can provide consistent and predictable remedies unaffected by judges’ own political prejudices.
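The efficiency gap, as its academic proponents define it, is computable from district-level returns: each party’s “wasted” votes (every vote for a losing candidate, plus a winner’s votes beyond the bare majority needed) are summed statewide, and the gap is the difference divided by total votes cast. A minimal two-party sketch of that arithmetic (the function name and the toy numbers are mine, not the plaintiffs’):

```python
def efficiency_gap(districts):
    """Two-party efficiency gap from a list of (votes_a, votes_b) per district.

    Wasted votes: all votes for the losing candidate, plus the winner's
    votes beyond the bare-majority threshold. A positive result means the
    map wastes more of party A's votes (i.e., the map favors party B).
    """
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in districts:
        district_total = votes_a + votes_b
        needed = district_total / 2  # bare-majority threshold
        if votes_a > votes_b:
            wasted_a += votes_a - needed  # winner's surplus votes
            wasted_b += votes_b           # loser's votes, all wasted
        else:
            wasted_a += votes_a
            wasted_b += votes_b - needed
        total += district_total
    return (wasted_a - wasted_b) / total

# A "packed and cracked" map: party A is packed into one 90-10 district,
# then cracked across three districts it narrowly loses 45-55.
print(efficiency_gap([(90, 10), (45, 55), (45, 55), (45, 55)]))  # → 0.375
```

The example shows why the measure appeals to litigators: a map in which party A wins 45% of the statewide vote but only one of four seats produces a large positive gap from simple arithmetic, with no judgment calls about district shapes.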

The decision to stay or not stay a lower court order often provides a peek at which side the Justices expect to prevail. And the five-member majority voting to stay the Wisconsin order – a majority that unsurprisingly included Gorsuch but, more significantly, Kennedy – suggests that at this point it is the conservative side’s case to lose.

Whatever the Court’s disposition of the Wisconsin case, gerrymandering remains a distinctive political evil, an aid to incumbency that promotes the interests of a permanent political class, and a worthy target for efforts at reform. I’ve written more on that here and here.

 

The federal government runs more than 2,300 subsidy programs, and they are all susceptible to fraud and other types of improper payments. The EITC program, for example, throws about $18 billion down the drain each year in such payments.

Perhaps the program that generates the most outrageous rip-offs is the $150 billion Social Security Disability Insurance (SSDI) program. From the Washington Post today:

Eric Conn, the fugitive attorney who pleaded guilty to orchestrating a scheme to defraud the federal government of $600 million, remains at large since he cut off his court-ordered GPS monitoring bracelet on June 2…Conn in March entered guilty pleas to defrauding the Social Security Administration via bribes he paid to a doctor and a judge to process and approve his clients’ disability claims. 

From 2006 to 2016, Conn processed 1,700 client applications for Social Security benefits with a potential of $550 million in lifetime benefits. Since the revelation of the allegations, the Social Security Administration has contacted many of Conn’s former clients with claims they owe as much as $100,000 for disability payments going back 10 years unless they can prove they have been disabled the entire time…

Conn’s fraud scheme was fueled by television advertisements that included a 3-D television ad from 2010 and one from 2009 in which Conn hired YouTube star “Obama Girl” and Bluegrass music legend Ralph Stanley to sing a version of “Man of Constant Sorrows” with new lyrics that refer to Conn as a “superhero without a cape” and to brag that Conn had “learned Spanish off of a tape.” In a rap video, Conn billed himself as Hispanic-friendly: “Even if you’re Latino, no need to worry cuz this gringo speaks the lingo.”

One greedy lawyer, a corrupt doctor and judge, some catchy jingles, and our government gets ransacked for $600 million. That’s not very comforting to taxpayers, is it?

In his study of SSDI for DownsizingGovernment.org, Tad DeHaven said, “SSDI is a classic example of a well-intentioned effort to provide modest support to truly needy people that has exploded into a massive entitlement that is driving up the federal deficit.”

DeHaven proposed these SSDI reforms: 

  • Cut the program’s average benefit levels.
  • Impose stricter eligibility standards to discourage claims from people who should be working.
  • Create a longer delay for the initial receipt of benefits to discourage frivolous applications.
  • Reduce the large number of appeals for people initially denied benefits.
  • Ensure greater quality control and consistency of decisions by officials and judges.
  • Create a “taxpayer advocate” in the administrative law process to challenge dubious claims made by applicants and their lawyers.
  • Apply continuous disability reviews of people receiving benefits in a more vigorous manner.

His study is here.

…is data, as the late UC-Berkeley political scientist Ray Wolfinger once said.

David Boaz used Wolfinger’s quote when emailing me this short note from the Economic Policy Journal’s website about the apparent harmful effects on employment of Washington state’s recent minimum wage increase. A snippet:

As we were seated, I couldn’t help but notice that there were no busboys in sight—waitresses and the manager were busy clearing and cleaning tables. There were no young people in sight either, only employees in their late-20s and up.

I waited for the manager to man the checkout register and couldn’t pass up a brief economic discussion. I commented that I’m from out of state (Idaho, where the minimum wage is the federally mandated $7.25/hr) and couldn’t help but notice the impact that Washington’s minimum wage ($11/hr) was having on his restaurant.

Well-intended proponents of higher minimum wages will likely dismiss this note using the far-more-common but very wrong misquotation that “the plural of anecdote isn’t data.” More sophisticated proponents will go further and cite David Card and Alan Krueger’s 1994 American Economic Review paper on the apparently benign employment effects of a minimum wage increase on fast-food restaurants in the Philadelphia metropolitan area in the early 1990s.

Thing is, there has been an awful lot more empirical research on the effects of minimum wage increases than this one paper by Card and Krueger. The overwhelming balance of that research has found harmful employment effects, falling mainly on an especially disadvantaged population: young black males. In a review of this academic literature, economists David Neumark and William Wascher find:

Nearly two-thirds [of the 102 analyses they reviewed] give a relatively consistent (although by no means always statistically significant) indication of negative employment effects of minimum wages while only eight give a relatively consistent indication of positive employment effects. … [Further, of the 33 analyses we] view as providing the most credible evidence, 28 (85 percent) of these point to negative employment effects. Moreover, when researchers focus on the least-skilled groups most likely to be adversely affected by minimum wages, the evidence for disemployment effects seems especially strong. … We view the literature—when read broadly and critically—as largely solidifying the conventional view that minimum wages reduce employment among low-skilled workers.

The plural of anecdote, indeed.

For more on minimum wage research, see this Cato Policy Analysis by former U.S. deputy assistant labor secretary Mark Wilson. Or this brilliant little Cato Handbook on Policy chapter.

When people hear “democracy,” they tend to get warm, fuzzy feelings. As the Century Foundation’s Richard Kahlenberg writes in an article that, among other things, portrays private school choice as a threat to democracy, “public education…was also meant to instill a love of liberal democracy: a respect for the separation of powers, for a free press and free religious exercise, and for the rights of political minorities.” The fundamental, ironic problem is that both democracy and democratically controlled public schooling are inherently at odds with the individual rights, and even separation of powers, that Kahlenberg says democracy and public schools are supposed to protect.

Let’s be clear what “democracy” means: the people collectively, rather than a single ruler or small group of rulers, make decisions for the group. We typically think of this as being done by voting, with the majority getting its way.

Certainly, it is preferable for all people to have a say in decisions that will be imposed on them than to have a dictator impose things unilaterally. But there is nothing about letting all people have a vote on imposition that protects freedom. Indeed, in a pure democracy, as long as the majority decides something, no individual rights are protected at all. The will of the majority is all that matters.

We’ve seen basic rights and equality under the law perpetually and unavoidably violated by democratically controlled public schooling. It cannot be otherwise: At its core, a single system of government schools—be it a district, state, or federal system—can never serve all, diverse people equally. It must make decisions about whose values, histories, and culture will and will not be taught, as well as what students can wear, what they can say, and what they can do, in order to function.

Public schooling since the days of Horace Mann has found it impossible to uphold religious freedom and equality. Mann himself was constantly assailed by people who felt that by trying to make public schools essentially lowest-common-denominator Protestant institutions, he was throwing out religion or making the schools de facto Unitarian (his denomination). Mann, in response, promised that the Protestant Bible would always be used in public schools. Indeed, Protestantism was often thought essential to being a good American, including supportive of democracy, which meant that if the public schools were to serve their civic purpose they could not treat religious minorities equally, especially Roman Catholics, who were suspected of taking their political orders from the Pope in Rome.

Today, after more than a century of even deadly conflict over religion, the public schools are no longer de facto Protestant, but instead may legally have no connection that could appear to be advancing religion, right down, often, to speeches by individual students at events such as graduation ceremonies or athletic contests. This inherently renders religious people second-class citizens—any values are fair game in public schools except for theirs—while also curbing basic expression rights.

Of course, the inherent inequality of public schooling is not restricted to religion. In a public school a teacher, committee, school board, or other government actor must decide what aspects of history will be taught or literature read. This requires that government elevate some people’s speech and perspectives, while deeming others’ essentially unworthy. As a result, we have perpetual battles that tear at the social fabric over which books—The Bluest Eye, The Adventures of Huckleberry Finn, The Absolutely True Diary of a Part-Time Indian—should or should not be read in class or over whose history should be taught, and the losers are rendered unequal under the law.

Public schooling has also constantly intruded on separation of powers. Power is first supposed to be separated among levels of government, with local control often considered ideal for democratic control of schools. But local control has been shrunk as states and the federal government have stepped in to stop discrimination, or because districts have been deemed “in need of improvement.” State authority has been circumscribed for similar reasons. And the separation of federal powers—legislative, executive, and judicial—was shredded under President Obama when he offered states waivers out of the No Child Left Behind Act’s most onerous provisions, but only if they agreed to conditions unilaterally determined by his administration.

Alas, such compression and destruction of subsidiarity is almost guaranteed with democratically controlled schooling. Why? Because if people in a political minority—or even a majority unable to accumulate sufficient political power—cannot get the democratic government closest to them to provide the education they want, they can meaningfully help themselves only with huge difficulty—by moving their homes. They have basically no option but to appeal to a “higher” level of government. And when no level of democratic governance seems to respond, they feel compelled to allow a single person—a mayor, governor, or president—to take over.

The good news is that American government is not supposed to be grounded in democracy. It is grounded in liberty—the freedom of individuals to govern their own lives, and to combine however they freely choose. “Life, liberty, and the pursuit of happiness” are laid out as “unalienable rights” in the Declaration of Independence, and they, not “democracy,” are what government is created to protect.

Ironically, the educational system that is consistent with the liberty on which the country is based is the one that Kahlenberg and other government schooling advocates argue is fundamentally at odds with American values: school choice. And their major worries, at least at first blush, are not unreasonable: people have a tendency to associate with people like themselves, potentially “balkanizing” the country, and private schools do not have to teach values like religious tolerance. But first blush is not reality.

First, of course, the protection of individual rights that Kahlenberg wishes to defend is sacrificed the moment people are compelled to fund a government-run school. One school cannot teach both that we were created by God and that we were not. It cannot put even a tiny fraction of all literature on class syllabi. It cannot have a dress code and allow total freedom of expression. The only possible way for government to treat all diverse people equally in education is to enable them to choose what they will teach and learn.

But private schools, especially if they stand for specific beliefs, will fail to promote tolerance and teach civic values, right? Wrong. Research suggests that chosen schools, especially private ones—which are free to say “we stand for this” and “we do not stand for that; choose us if you agree”—are more effective than public schools at inculcating civic knowledge and behaviors such as voting, volunteering in one’s community, and tolerance of those with whom one disagrees. Why? Quite possibly because everyone in the school—educators and families alike—voluntarily agrees to the set of beliefs and standards the school promotes, allowing more rigor and clarity in teaching history, civics, or personal behavior. Public schools, in contrast, must work with diverse populations, and to avoid wrenching conflict and the distinctly un-American imposition of one group’s views on another, they often choose lowest-common-denominator content that may offend few people but also conveys little of clarity or use. Students in private schools might also cherish individual liberties a bit more than those in public schools because they see theirs curbed by the public schools.

Then there’s this: While the evidence is strong that in myriad ways people tend to prefer to associate with others like themselves—and that government can do little to change that—people also want to have commonalities with larger society. It simply makes their lives easier: Speaking the common language makes daily life smoother. Adopting the common culture makes one feel more at home. All these things make succeeding economically easier. So people will seek out commonality on their own. This means that public schooling, or any other government effort to impose commonality, may well be unnecessary, while definitely being inherently conflict-fostering and rights-trampling.

“Democracy” is a confusion-enshrouded, contradictory weapon that has been successfully employed against freedom in education for too long. It is time to reassert liberty as the fundamental American value and cease letting it be trampled by, and for, public schooling.

While it’s apt to get lost in news coverage of this morning’s bigger rulings, a moment should be set aside to applaud today’s solid 8-1 Supreme Court decision in Bristol-Myers Squibb, together with the related 8-0 outcome from May 30 in the case of BNSF v. Tyrrell. Both cases arose from state courts’ attempts to grab jurisdiction over out-of-state corporations for purposes of hearing lawsuits arising from out-of-state conduct affecting out-of-state complainants. And in both instances—with only Justice Sonia Sotomayor still balking—the Justices made clear that some states’ wish to act as nationwide regulators does not allow them to stretch the constitutional limits on their jurisdiction that far. 

For background on the cases, see our April post. We wondered then whether the consensus of Justices displayed in the benchmark 2014 Daimler case would endure rather than be splintered, and the answer was yes, it did and has. Justice Sotomayor, sticking to a once popular position, is still convinced that if states want to do a certain amount of long-arm collaring of cases involving interstate businesses that arose elsewhere and might fit conveniently into their docket, well, that’s fair enough for government work. That led her to file a lone separate partial concurrence in BNSF, as against a majority opinion written by Justice Ruth Bader Ginsburg (who has authored much of the Court’s modern jurisprudence in this area), and an outright dissent in today’s decision in Squibb, authored by Justice Samuel Alito. To no one’s surprise, new Justice Neil Gorsuch joined the majority in both cases.

Many commenters will inevitably group these cases with last month’s 8-0 decision in the patent venue case of TC Heartland v. Kraft Foods, which I described as “a landmark win for defendants in patent litigation—and, on a practical level, for fairer ground rules in procedure.” To be sure, the underlying legal materials were completely different; TC Heartland involved the interpretation of wording in a federal statute. What united the three cases with Daimler is that the contemporary Court is keenly aware of the danger that the tactical use of forum-shopping will eclipse the merits in many categories of high-stakes litigation, turning potentially losing cases into winners through the chance to file them in a more friendly court.

That insight might prove significant at a time when forum-shopping has come to play a prominent role in high-profile ideological litigation—with conservatives running to file suit in the Fifth Circuit, liberals in the Ninth.

In a unanimous judgment that splintered on its reasoning, the Supreme Court correctly held that the “disparagement clause” of the Lanham Act (the federal trademark law) violated the Constitution. The ruling boils down to the simple point that bureaucrats shouldn’t be deciding what’s “disparaging.”

Trademarks, even ones that may offend many people—of which plenty are registered by the Patent and Trademark Office (PTO)—are private speech, which the First Amendment prevents the government from censoring. As Justice Samuel Alito put it in a part of the opinion that all the justices joined (except Neil Gorsuch, who didn’t participate in the case), “If the federal registration of a trademark makes the mark government speech, the Federal Government is babbling prodigiously and incoherently.”

At this point, the Court split. Justice Alito, joined by Chief Justice Roberts and Justices Thomas and Breyer, explained why trademarks don’t constitute a subsidy or other type of government program (within which the government can regulate speech), and that the “disparagement clause” doesn’t even survive the more deferential scrutiny that courts give “commercial” speech. The remaining four justices, led by Justice Anthony Kennedy, would’ve ended the discussion after finding that the PTO here is engaging in viewpoint discrimination among private speech. The end of his opinion is worth quoting in full:

A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. The First Amendment does not entrust that power to the government’s benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society.

Fundamentally, this somewhat unusual case brought by an Asian-American electronic-rock band shows that government can’t make you choose among your rights. The Lanham Act’s disparagement clause placed an unconstitutional condition on those who consider the use of an edgy or taboo phrase to be part of their brand: either change your name or be denied the right to use it effectively. Whether you’re a musician, a politician, or a sports team—the Washington Redskins’ moniker will now be safe—it’s civil society (consumers, voters, fans) who should decide whether you’re being too offensive for polite company.

For more, see my previous writings here and here—and of course reading Cato’s “funny brief” is all the sweeter after this ruling.

Attorney General Jeff Sessions writes in Sunday’s Washington Post:

Drug trafficking is an inherently violent business. If you want to collect a drug debt, you can’t, and don’t, file a lawsuit in court. You collect it by the barrel of a gun. 

Sessions correctly understands a major source of crime in the drug distribution business: people with a complaint can’t go to court. But he jumps to the conclusion that “Drug trafficking is an inherently violent business.” This is a classic non sequitur. It’s hard to imagine that he actually doesn’t understand the problem. He is, after all, a law school graduate. How can he not understand the connection between drugs and crime? Prohibitionists talk of “drug-related crime” and suggest that drugs cause people to lose control and commit violence. Sessions gets closer to the truth in the opening of his op-ed. He goes wrong with the word “inherently.” Selling marijuana, cocaine, and heroin is not “inherently” more violent than selling alcohol, tobacco, or potatoes. 

Most “drug-related crime” is actually prohibition-related crime. The drug laws raise the price of drugs and cause addicts to have to commit crimes to pay for a habit that would be easily affordable if it were legal. And more dramatically, as Sessions notes, rival drug dealers murder each other–and innocent bystanders–in order to protect and expand their markets. 

We saw the same phenomenon during the prohibition of alcohol in the 1920s. Alcohol trafficking is not an inherently violent business. But when you remove legal manufacturers, distributors, and bars from the picture, and people still want alcohol, then the business becomes criminal. As the figure at right (drawn from a Cato study of alcohol prohibition and based on U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 [Washington: Government Printing Office, 1975], part 1, p. 414) shows, homicide rates climbed during Prohibition, 1920-33, and fell every year after the repeal of prohibition. 

Tobacco has not (yet) been prohibited in the United States. But as a Cato study of the New York cigarette market showed in 2003, high taxes can have similar effects:

Over the decades, a series of studies by federal, state, and city officials has found that high taxes have created a thriving illegal market for cigarettes in the city. That market has diverted billions of dollars from legitimate businesses and governments to criminals.

Perhaps worse than the diversion of money has been the crime associated with the city’s illegal cigarette market. Smalltime crooks and organized crime have engaged in murder, kidnapping, and armed robbery to earn and protect their illicit profits. Such crime has exposed average citizens, such as truck drivers and retail store clerks, to violence.

Again, to use Sessions’s language, cigarette trafficking is not an inherently violent business. But drive it underground, and you will get criminality and violence. 

Sessions’s premise is wrong. Drug trafficking (meaning, in this case, the trafficking of certain drugs made illegal under our controlled substances laws) is not an inherently violent business. The distribution of illegal substances tends to produce violence. Because Sessions’s premise is wrong, his conclusion–a stepped-up drug war, with more arrests, longer sentences, and more people in jail–is wrong. A better course is outlined in the Cato Handbook for Policymakers.

 

The negotiations on the UK exiting the EU start today. Here’s the BBC:

Brexit Secretary David Davis will call for “a deal like no other in history” as he heads into talks with the EU.

Subjects for the negotiations, which officially start in Brussels later, include the status of expats, the UK’s “divorce bill” and the Northern Ireland border.

Mr Davis said there was a “long road ahead” but predicted a “deep and special partnership”.

The UK is set to leave the EU by the end of March 2019.

Day one of the negotiations will start at about 11:00 BST at European Commission buildings in Brussels.

Mr Davis and the EU’s chief negotiator Michel Barnier, a former French foreign minister and EU commissioner, will give a joint press conference at the end of the day. 

I’ve sensed some growing concerns about how well these talks might go, and the recent UK general election only made things worse. It’s not clear to me that the politicians who are in charge here can make this a success. Time will tell.

If you are looking for something positive related to Brexit, however, once the UK does leave the EU the personnel situation on the technical side of things is looking good. On Friday, the UK Department for International Trade announced that it had hired Crawford Falconer as its Chief Trade Negotiation Adviser. From the announcement:

Together with his team Crawford will:

  • develop and negotiate free trade agreements and market access deals with non-EU countries
  • negotiate plurilateral trade deals on specific sectors or products
  • make the department a ‘centre of excellence’ for negotiation and British trade
  • support the UK’s membership of the World Trade Organization (WTO).

Falconer is not a household name, but he is someone that I am very familiar with. I had just been reading his latest co-authored work. He was one of the judges (technically, a “panelist”) on a WTO dispute panel that ruled earlier this month on whether the U.S. has complied with a previous ruling related to the subsidies it provides to Boeing. He has also acted as a judge in 14 other GATT/WTO decisions.

Now, you may say, international adjudication is all well and good, but how about trade negotiations? Does he have any experience there? In fact, he does. He is a dual UK/New Zealand citizen and has been negotiating for New Zealand for many years. He was New Zealand’s Ambassador to the WTO in Geneva from 2005-2008 (and during that time, in his personal capacity, he chaired the Doha Round negotiations on agriculture and cotton). His LinkedIn page has more on his professional background.

There is still a long way to go before we get to the point of the UK negotiating free trade deals of its own. But once we do get there, its trade policy team is in pretty good hands.
