Cato Op-Eds

Individual Liberty, Free Markets, and Peace
Subscribe to Cato Op-Eds feed

Cato released my study today on “Tax Reform and Interstate Migration.”

The 2017 federal tax law increased the tax pain of living in a high-tax state for millions of people. Will the law induce those folks to flee to lower-tax states?

To find clues, the study looks at recent IRS data and reviews academic studies on interstate migration.

For each state, the study calculated the ratio of domestic in-migration to out-migration for 2016. States losing residents on net have ratios of less than 1.0; states gaining residents have ratios of more than 1.0. New York’s ratio is 0.65, meaning that for every 100 people who left, only 65 moved in. Florida’s ratio is 1.45, meaning that 145 moved in for every 100 who left.
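As a quick illustration of the calculation, here is a minimal sketch in Python; the migration counts below are placeholders chosen to match the two ratios quoted above, not the study’s underlying IRS figures.

```python
# Minimal sketch: in/out migration ratios from IRS-style move counts.
# The counts are illustrative placeholders, not the study's actual data.
flows = {
    "New York": {"moved_in": 65_000, "moved_out": 100_000},
    "Florida":  {"moved_in": 145_000, "moved_out": 100_000},
}

for state, f in flows.items():
    ratio = f["moved_in"] / f["moved_out"]  # >1.0 = net in-migration, <1.0 = net out-migration
    print(f"{state}: ratio = {ratio:.2f}")
```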

Figure 1 maps the ratios. People are generally moving out of the Northeast and Midwest to the South and West, but they are also leaving California, on net.

People move between states for many reasons, including climate, housing costs, and job opportunities. But when you look at the detailed patterns of movement, it is clear that taxes also play a role.

I divided the country into the 25 highest-tax and 25 lowest-tax states by a measure of household taxes. In 2016, almost 600,000 people moved, on net, from the former to the latter.

People are moving into low-tax New Hampshire and out of Massachusetts. Into low-tax South Dakota and out of its neighbors. Into low-tax Tennessee and out of Kentucky. And into low-tax Florida from New York, Connecticut, New Jersey, and just about every other high-tax state.

On the West Coast, California is a high-tax state, while Oregon and Washington fall just inside the lower-tax half of the states.

Of the 25 highest-tax states, 24 of them had net out-migration in 2016.

Of the 25 lowest-tax states, 17 had net in-migration.

https://object.cato.org/sites/cato.org/files/pubs/pdf/tbb-84-revised.pdf

A new report from the American Public Transportation Association (APTA) comes out firmly in support of the belief that correlation proves causation. The report observes that traffic fatality rates are lower in urban areas with high rates of transit ridership, and claims that this proves “that modest increases in public transit mode share can provide disproportionally larger traffic safety benefits.”


Here is one of the charts that APTA claims proves that modest increases in transit ridership will reduce traffic fatalities. Note that, in urban areas with fewer than 25 annual transit trips per capita – which is the vast majority of them – the relationship between transit and traffic fatalities is virtually nil. The chart is taken from APTA’s document.

In fact, APTA’s data show no such thing. New York has the nation’s highest per capita transit ridership and a low traffic fatality rate. But there are urban areas with very low ridership rates that had even lower fatality rates in 2012, while there are other urban areas with fairly high ridership rates that also had high fatality rates. APTA claims the correlation between transit and traffic fatalities is a high 0.71 (where 1.0 is a perfect correlation), but that’s only when you include New York and a few other large urban areas: among urban areas of 2 million people or less, APTA admits the correlation is a low 0.28.
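The outlier effect is easy to demonstrate. The sketch below uses made-up numbers, not APTA’s data: a handful of small urban areas with no real relationship between ridership and fatalities, plus one New York-like observation that single-handedly produces a strong aggregate correlation.

```python
import numpy as np

# Illustrative data only (not APTA's figures): annual transit trips per capita
# and traffic fatalities per 100,000 residents for ten hypothetical urban areas.
trips      = np.array([5, 8, 10, 12, 15, 18, 20, 22, 25, 230])  # last entry: NY-like outlier
fatalities = np.array([9, 5, 12,  6, 10,  7, 11,  6,  9,   3])

r_all        = np.corrcoef(trips, fatalities)[0, 1]
r_no_outlier = np.corrcoef(trips[:-1], fatalities[:-1])[0, 1]

print(f"correlation, full sample:      {r_all:.2f}")        # strong, driven by the outlier
print(f"correlation, outlier excluded: {r_no_outlier:.2f}")  # essentially zero
```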

The United States has two kinds of urban areas: New York and everything else. Including New York in any analysis of urban areas will always bias any statistical correlations in ways that have no application to other urban areas.

In most urban areas outside of New York, transit ridership is so low that it has no real impact on urban travel. Among major urban areas other than New York, APTA’s data show 2012 ridership ranging from 55 trips per person per year in Los Angeles to 105 in Washington DC to 133 in San Francisco-Oakland. From the 2012 National Transit Database, transit passenger miles per capita ranged from 287 in Los Angeles to 544 in Washington to 817 in San Francisco.

Since these urban areas typically see around 14,000 passenger miles of per capita travel on highways and streets per year, the 530-mile difference in transit usage between Los Angeles and San Francisco is pretty much irrelevant. Thus, even if there is a weak correlation between transit ridership and traffic fatalities, transit isn’t the cause of that correlation.

San Francisco and Washington actually saw slightly more per capita driving than Los Angeles in 2012, yet APTA says they had significantly lower fatality rates (3.7 fatalities per 100,000 residents in San Francisco and 3.6 in Washington vs. 6.4 in Los Angeles). Clearly, some other factor must be influencing both transit ridership and traffic fatalities.

With transit ridership declining almost everywhere, this is just a desperate attempt by APTA to make transit appear more relevant than it really is. In reality, contrary to APTA’s unsupported conclusion, modest increases in transit ridership will have zero measurable effect on traffic fatality rates.

Content moderation remains in the news following President Trump’s accusation that Google manipulated its searches to harm conservatives. Yesterday Congress held two hearings on content moderation, one mostly about foreign influence and the other mostly about political bias. The Justice Department also announced Attorney General Sessions will meet soon with state attorneys general “to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms.” 

None of this is welcome news. The First Amendment sharply limits government power over speech; it does not limit private governance of speech. The Cato Institute is free to select speakers and topics for our “platform.” The tech companies have that right also, even if they are politically biased. Government officials should also support a culture of free speech, and officials bullying private companies contravenes that culture. Needless to say, having the Justice Department investigate those companies looks a lot like a threat to the companies’ freedom.

So much for law and theory. Here I want to offer some Madisonian thoughts on these issues. No one can doubt James Madison’s liberalism. But he wanted limited government in fact as well as in theory, and he thought hard about practical politics as the means of realizing liberal ideals. We should too.

Let’s begin with the question of bias. The evidence for bias against conservatives is anecdotal and episodic. The tech companies deny any political bias, and their incentives raise doubts about partisan censorship. Why take the chance you might drive away millions of customers and invite the wrath of Congress and the executive branch on your business? Are the leaders of these companies really such political fanatics that they would run such risks? 

Yet these questions miss an important point. The problem of content moderation bias is not really a question of truth or falsity. It is rather a difficult political problem with roots in both passion and reason. 

Now, as in the past, politicians have powerful reasons to foster fear and anger among voters. People who are afraid and angry are more likely to vote for a party or a person who promises to remedy an injustice or protect the innocent. And fear and anger are always about someone threatening vital values. For a Republican president, a perfect “someone” might be tech companies who seem to be filled with Progressives and in control of the most important public forums in the nation. 

But the content moderation puzzle is not just about the passions. The fears of the right (and to a lesser degree, the left) are reasonable. To see this, consider the following alternative world. Imagine the staff of the Heritage Foundation has gained control over much of the online news people see and what they might say to others about politics. Imagine also that after a while Progressives start to complain that the Heritage folks are removing their content or manipulating news feeds. The leaders of Heritage deny the charges. Would you believe them?

Logically it is true that this “appearance of bias” is not the same as bias, and bias may be a vice but cannot be a crime for private managers. But politically that may not matter much, and politics may yet determine the fate of free speech in the online era. 

Companies like Google have to somehow foster legitimacy for their moderation of content, moderation that cannot be avoided if they are to maximize shareholder value. They have to convince most people that they have a right to govern their platforms even when their decisions seem wrong. 

Perhaps recognizing that some have reasonable as well as unreasonable doubts about their legitimacy would be a positive step forward. And people who harbor those reasonable doubts should keep in mind the malign incentives of politicians who benefit from fostering fear and anger against big companies. 

If the tech companies fail to gain legitimacy, we all will have a problem worse than bias. Politicians might act, theory and law notwithstanding. The First Amendment might well stop them. But we all would be better off with numerous, legitimate private governors of speech on the internet. Google’s problem is ours.

In Supreme Court nominee Brett Kavanaugh’s opening statement at his hearing Tuesday, he praised Merrick Garland, with whom he serves on the D.C. Circuit, as “our superb chief judge.”

If you were surprised by that, you shouldn’t have been. When President Obama nominated Garland to the high court, Judge Kavanaugh described his colleague as “supremely qualified by the objective characteristics of experience, temperament, writing ability, scholarly ability for the Supreme Court … He has been a role model to me in how he goes about his job.”

In fact, it has been reported in at least one place that one reason Kavanaugh was left off Trump’s initial list of SCOTUS nominees was that he had been so vocal and public in praising Garland’s nomination.

Now, it would be understandable if neither side in the partisan confirmation wars chose to emphasize this bit of background to the story. Republican strategists might not be keen on reminding listeners of what their party did with Garland’s nomination, and might also worry about eroding enthusiasm for Kavanaugh among certain elements of their base. Democratic strategists, meanwhile, might see the episode as one in which the present nominee comes off as not-a-monster, and, well, you can’t have that.

The lesson, if there is one, might be that the federal courts are not as polarized and tribal as much of the political class and punditry become at nomination time.

The Italian general election of March 4, 2018, produced an improbable coalition government between two upstart populist parties: the left-Eurosceptic-nationalist Movimento 5 Stelle (Five Star Movement) and the right-Eurosceptic-nationalist Lega (League). The coalition partners agree on greater public spending and, at the same time, on tax cuts that would reduce revenue. How then to pay for the additional spending? Italy is already highly indebted. Its public debt stands at 133 percent of GDP, the highest in the Eurozone apart from Greece and well above the EU average of 87 percent. Its sovereign bonds carry a high default risk premium. Today, the yield on Italian 10-year bonds stands at 291 basis points above the yield on 10-year German bunds, up from a spread in the 130–140 basis point range during the months before the election.

If tax revenue and debt cannot practically be increased, the remaining fiscal option—for a country with its own fiat currency—is printing base money. But Italy is part of the Eurozone, and only the ECB can create base-money euros. A group of four Italian economists (Biagio Bossone, Marco Cattaneo, Massimo Costa, and Stefano Sylos Labini), correctly noting that “budget constraints and a lack of monetary sovereignty have tied policymakers’ hands,” and regarding this as a bad thing, have proposed in a series of publications that Italy should introduce a new domestic quasi-money, a kind of parallel currency that they call “fiscal money.” Similar proposals have been made by Yanis Varoufakis, the former Greek finance minister, and by Joseph Stiglitz, the prominent American economist. Italy’s coalition government is reportedly considering these proposals seriously.

Under the Bossone et al. proposal, the Italian government would issue euro-denominated bearer “tax rebate certificates” (TRCs). The government would pledge to accept these at face value in “future payments to the state (taxes, duties, social contributions, and so forth).” The certificates in that sense would be “redeemable at a later date – say, two years after issuance.” If non-interest-bearing, they would trade at a discounted value. But if interest were paid to keep the certificates always at par, and the payment system accordingly accepted them as the equivalent of base-money euros, the certificates would be additional spendable money in the public’s hands. “As a result,” they argue, “Italy’s output gap — that is, the difference between potential and actual GDP — would close.” Thus they claim that “properly designed, such a system could substantially boost economic output and public revenues at little to no cost.”

Remarkable claims. Bossone et al. have recently argued that their “fiscal money” program would not violate ECB rules. But there is a more basic question: would it actually work to boost real GDP sustainably by shrinking unemployment and excess capacity? On critical examination, the answer is no. The proposal is based on wishful thinking.

To provide empirical context, note that estimated slack in the Italian economy is already shrinking. The OECD estimate of Italy’s output gap (the percentage by which real GDP falls short of estimated full-employment or “potential” GDP) was large—greater than 5 percent—for 2014, the year when Bossone et al. first floated their proposal. Among the major Eurozone economies, only Greece, Spain, and Portugal had larger gaps; France had a gap half as large, while Germany was above its estimated potential GDP. For 2018, however, Italy’s estimated output gap is under 0.5 percent. For 2019 the OECD projects that actual GDP will exceed full-employment GDP.

Theoretically (as famously argued by Leland Yeager and by Robert Clower and Axel Leijonhufvud), in a world of sticky prices and wages a depressed level of real output can be due to an unsatisfied excess demand for money, which logically corresponds to an aggregate excess supply (unsold inventories) of other goods, including labor. People building up their real money balances will do so by buying fewer goods at current prices and offering more labor at current wages. But is that the cause of depressed output in Italy today? Yeager’s “cash-balance interpretation of depression” assumes an economy with its own money, domestically fixed in quantity, so that an excess demand for money can be satisfied only by a drawn-out process of falling prices and wages that raises real balances.

But Italy today does not have its own money. It is a part of a much larger monetary area, the Eurozone. (For one indication of Italy’s share of the euro economy, Italian banks hold 14.7% of euro deposits.) The European Central Bank through tight monetary policy can create an excess demand for money in the entire Eurozone, in which case Italy suffers equally with other Eurozone countries, but it cannot create an excess demand for money specifically in Italy. A specifically Italian excess demand for money can arise if Italians increase their demand for money balances relative to other Eurozone residents, but in that case euros can and will flow in from the rest of the Eurozone (corresponding to Italians more eagerly selling goods or borrowing) to satisfy that demand.

Because Italy’s small output gap in 2018 therefore cannot be plausibly attributed to an unsatisfied excess demand for money, an expansion of the domestic money stock through the creation of “fiscal money” is not an appropriate remedy.

If not due to an excess demand for money, what is the cause of Italy’s lingering output gap? I don’t know, but I would look for real factors. Likely candidates are labor-market inflexibility in the face of real shocks, and the reluctance of investors to put financial or real capital into a country with serious fiscal problems (hence a serious risk of new taxes or higher tax rates soon) and a non-negligible threat of leaving the euro.

The flip side of euros flowing into Italy from the rest of the Eurozone to satisfy Italian money demand is that any excess money in Italy will flow out. If Italians already hold the quantity of euro balances they desire, then the creation of “fiscal money” would not increase Italy’s money stock except transitorily. Supposing that Italians treat new “fiscal money” as the domestic equivalent of euros, the addition to their money balances would result in holdings greater than desired at current euro prices and interest rates. In restoring their desired portfolio shares (spending off excess balances) they would send euros abroad (by assumption, the domestic quasi-money would not be accepted abroad) in purchases of imported goods and financial assets.

It isn’t clear, however, that the public would actually regard “fiscal money” as the equivalent of base-money euros added to the circulation. Unlike fiat base money, TRCs are not a net asset. They come with corresponding debts, the government’s obligation to accept them in lieu of euros for taxes (say) two years after issue. There is no reason for taxpayers to think themselves richer for having more TRCs in their wallets given that they will need to pay future taxes (equivalent in present value) to service and retire them.
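A back-of-the-envelope sketch of that point, using hypothetical numbers rather than anything from the proposal itself: a non-interest-bearing certificate redeemable against taxes two years out trades near its discounted value, and the government’s matching loss of future revenue has the same present value, so holders are no wealthier on net.

```python
# Hypothetical illustration (not figures from the Bossone et al. proposal):
# a 100-euro, non-interest-bearing TRC usable against taxes in two years.
face_value = 100.0      # euros of future tax offset
discount_rate = 0.03    # assumed two-year discount rate
years = 2

market_value = face_value / (1 + discount_rate) ** years            # what the TRC trades for today
pv_forgone_tax_revenue = face_value / (1 + discount_rate) ** years  # what the state gives up, in PV terms

print(f"TRC market value today:        {market_value:.2f} euros")
print(f"PV of forgone tax revenue:     {pv_forgone_tax_revenue:.2f} euros")
print(f"Net new private-sector wealth: {market_value - pv_forgone_tax_revenue:.2f} euros")
```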

Despite $8.6 billion spent on the eradication of opium in Afghanistan over the past seventeen years, the US military has failed to stem the flow of Taliban revenue from  the illicit drug trade. Afghanistan produces the majority of the world’s opium, and recent U.S. military escalations have failed to alter the situation. According to a recent piece in the Wall Street Journal:

“Nine months of targeted airstrikes on opium production sites across Afghanistan have failed to put a significant dent in the illegal drug trade that provides the Taliban with hundreds of millions of dollars, according to figures provided by the U.S. military.”

This foreign war on drugs has been no more successful than its domestic counterpart. If U.S. military might cannot suppress the underground market, local police forces have no hope. Supply-side repression does not seem to work, and its costs and unintended consequences are large.

 Research assistant Erin Partin contributed to this blog post.

In 1985, Reason Foundation co-founder and then-president Robert Poole heard about a variable road pricing experiment in Hong Kong. In 1986, he learned that France and other European countries were offering private concessions to build toll roads. In 1987, he interviewed officials of Amtech, which had just invented electronic transponders that could be used for road tolling. He put these three ideas together in a pioneering 1988 paper suggesting that Los Angeles, the city with the worst congestion in America, could solve its traffic problems by adding private, variable-priced toll lanes to existing freeways.

Although Poole’s proposal has since been carried out successfully on a few freeways in southern California and elsewhere, it is nowhere near as ubiquitous as it ought to be given that thirty years have passed and congestion is worse today in dozens of urban areas than it was in Los Angeles in 1988. So Poole has written Rethinking America’s Highways, a 320-page review of his research on the subject since that time. Poole will speak about his book at a livestreamed Cato event this Friday at noon, eastern time.

Because Poole has influenced my thinking in many ways (and, to a very small degree, the reverse is true), many of the concepts in the book will be familiar to readers of Gridlock or some of my Cato policy analyses. For example, Poole describes elevated highways such as the Lee Roy Selmon Expressway in Tampa as a way private concessionaires could add capacity to existing roads. He also looks at the state of autonomous vehicles and their potential contributions to congestion reduction.

France’s Millau Viaduct, by many measures the largest bridge in the world, was built entirely with private money at no risk to French taxpayers. The stunning beauty, size, and price of the bridge are an inspiration to supporters of public-private partnerships everywhere.

Beyond these details, Poole is primarily concerned with fixing congestion and rebuilding the nation’s aging Interstate Highway System. His “New Vision for U.S. Highways,” the subject of the book’s longest chapters, is that congested roads should be tolled and new construction and reconstruction should be done by private concessionaires, not public agencies. The book’s cover shows France’s Millau Viaduct, which a private concessionaire opened in 2004 at a cost of more than $400 million. Poole compares the differences between demand-risk and availability-payment partnerships – in the former, the private partner takes the risk and earns any profits; in the latter, the public takes the risk and the private partner is guaranteed a profit – coming down on the side of the former.

This chart showing throughput on a freeway lane is based on the same data as a chart on page 256 of Rethinking America’s Highways. It suggests that, by keeping speeds from falling below 50 mph, variable-priced tolling can greatly increase throughput during rush hours.

The tolling chapter answers arguments against tolling, responses Poole has no doubt made so many times he is tired of giving them. He mentions (but doesn’t emphasize enough, in my opinion) that variable pricing can keep traffic moving at 2,000 to 2,500 vehicles per hour per freeway lane, while throughput can slow to as few as 500 vehicles per hour in congestion. This is the most important and unanswerable argument for tolling, for – contrary to those who say that tolling will keep poor people off the roads – it means that tolling will allow more, not fewer, people to use roads during rush hours.

While I agree with Poole that private partners would be more efficient at building new capacity than public agencies, I don’t think this idea is as important as tolling. County toll road authorities in Texas, such as the Fort Bend Toll Road Authority, have been very efficient at building new highways that are fully financed by tolls.

Despite considerable (and uninformed) opposition to tolling, an unusual coalition of environmentalists and fiscal conservatives has persuaded the Oregon Transportation Commission to begin tolling Portland freeways. As a result, Portland may become the first city in America to toll all its freeways during rush hour, a goal that would be thwarted if conservatives insisted on private toll concessions.

Tolling can end congestion, but Poole points out that this isn’t the only problem we face: the Interstate Highway System is at the end of its 50-year expected lifespan and some method will be needed to rebuild it. He places his faith in public-private partnerships for such reconstruction.

Tolling and public-private partnerships are two different questions, but of the two only tolling (or mileage-based user fees, which use the same technology to effectively toll all roads) is essential to eliminating congestion. It is also the best alternative to what Poole argues are increasingly obsolescent gas taxes. Anyone who talks about congestion relief without including road pricing isn’t serious about solving the problem. Poole’s book should be required reading for all politicians and policymakers who deal with transportation.

The environmental impact of cryptocurrencies looms large among the many concerns voiced by skeptics. Earlier this year, Agustín Carstens, who runs the influential Bank for International Settlements, called Bitcoin “a combination of a bubble, a Ponzi scheme and an environmental disaster.”

Carstens’ first two indictments have been challenged. Contrary to his assertion, while the true market potential of Bitcoin, Ethereum and other such decentralized networks remains uncertain, by now it is clear to most people that they are more than mere instruments for short-term speculation and the fleecing of unwitting buyers.

That Bitcoin damages the environment without countervailing benefits is, on the other hand, an allegation still widely believed, even by many cryptocurrency fans. Sustaining that allegation is the indisputable fact that the electricity now consumed by the Bitcoin network, at 73 TWh per year at last count, rivals the amount consumed by countries like Austria and the Philippines.

Computing power is central to the success of Bitcoin

Bitcoin’s chief innovation is enabling payments without recourse to an intermediary. Before Bitcoin, any attempt to devise an electronic payments network without a middleman suffered from a double-spend problem: There was no easy way for peers to verify that funds promised to them had not also been committed in other transactions. Thus, a central authority was inescapable.

“Satoshi Nakamoto”’s 2008 white paper proposing “a peer-to-peer electronic cash system” changed that. Nakamoto suggested using cryptography and a public ledger to resolve the double-spend problem. Yet, in order to ensure that only truthful transactions were added to the ledger, this decentralized payments system needed to encourage virtuous behavior and make fraud costly.

Bitcoin achieves this by using a proof-of-work consensus algorithm to reach agreement among users about which transactions should go on the ledger. Proof-of-work means that users expend computing power as they validate transactions. The rewards for validation are newly minted bitcoins, as well as transaction fees. Nakamoto writes:

Once the CPU effort has been expended, to make it satisfy the proof-of-work, the block cannot be changed without redoing the work. As later blocks are chained after it, the work to change the block would include redoing all the blocks after it.

[…] Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it.

Because consensus is required for transactions to go on the ledger, defrauding the system – forcing one user’s false transactions on the public ledger, against other users’ disagreement – would require vast expenditures of computing power. Thus, Bitcoin renders fraud uneconomical.
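To make the mechanics concrete, here is a toy proof-of-work sketch in Python. It is greatly simplified relative to Bitcoin’s actual protocol (no Merkle trees, difficulty adjustment, or block headers), but it shows the asymmetry that makes fraud costly: finding a valid nonce requires many hash attempts, while verifying one requires a single hash.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int = 20) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(data + nonce)
    falls below a target. Simplified relative to real Bitcoin mining."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while int(hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest(), 16) >= target:
        nonce += 1  # on average this loop runs about 2**difficulty_bits times
    return nonce

def verify(block_data: str, nonce: int, difficulty_bits: int = 20) -> bool:
    """Verification takes a single hash, which is cheap compared with mining."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return int(digest, 16) < 2 ** (256 - difficulty_bits)

nonce = mine("alice pays bob 1 BTC")                  # expensive
print(nonce, verify("alice pays bob 1 BTC", nonce))   # cheap
```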

Electricity powers governance on Bitcoin

Bitcoin and other cryptocurrencies replace payments intermediation with an open network of independent users, called ‘miners’, who compete to validate transactions and whose majority agreement is required for any transaction to be approved.

Intermediation is not costless. Payment networks typically have large corporate structures and expend large amounts of resources to facilitate transactions. Mastercard, which as of 2016 accounted for 23 percent of the credit card and 30 percent of the debit card market in the U.S., employs more than 13,000 staff worldwide. Its annual operating expenses reached $5.4 billion in fiscal year 2017. Its larger competitor Visa had running costs of $6.2 billion.

Equally, doing away with intermediaries such as Mastercard has costs. Bitcoin miners require hardware and electricity to fulfill their role on the network. A recent study puts the share of electricity costs in all mining costs at 60 to 70 percent.

Electricity prices vary widely across countries, and miners will tend to locate in countries where electricity is comparably cheap, since the bitcoin price is the same all over the world. One kilowatt-hour of electricity in China, reportedly the location of 80 percent of Bitcoin mining capacity, costs 8.6 U.S. cents, 50 percent below the average price in America. Assuming an average price of 10 cents per kWh, the Bitcoin network would consume $7.3 billion of electricity per year, based on current mining intensity. This yields total Bitcoin annual running costs of $10 to 12 billion.
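The dollar figures in this paragraph follow from simple arithmetic on the consumption estimate quoted earlier; here is a quick sketch, treating the 10-cents-per-kWh price and the 60 to 70 percent electricity share as assumptions.

```python
# Back-of-the-envelope reproduction of the cost figures cited above.
annual_consumption_twh = 73     # estimated Bitcoin network electricity use
price_per_kwh = 0.10            # assumed average price, USD
kwh_per_twh = 1_000_000_000     # 1 TWh = one billion kWh

electricity_cost = annual_consumption_twh * kwh_per_twh * price_per_kwh
print(f"Annual electricity bill: ${electricity_cost / 1e9:.1f} billion")

# If electricity is 60-70 percent of total mining costs, total running costs are:
for share in (0.70, 0.60):
    total = electricity_cost / share
    print(f"  at {share:.0%} electricity share: ${total / 1e9:.1f} billion total")
```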

The value of Bitcoin’s electricity use

Bitcoin total operating costs do not differ much from those of intermediated payment networks such as Mastercard and Visa. Yet these card networks facilitate many more transactions than Bitcoin: Digiconomist reports that Bitcoin uses 550,000 times as much electricity per transaction as Visa.

However, the number of transactions is a poor standard for judging the value exchanged on competing networks. Mastercard and Visa handle large numbers of small-dollar exchanges, whereas Bitcoin transactions average $16,000. The slow speed of the Bitcoin network and the large fluctuations in average transaction fees make low-value exchanges unattractive. Moreover, unlike card networks, cryptocurrencies are still not generally accepted and are therefore used more as a store of value than as a medium of exchange.

With that in mind, if we compare Bitcoin and the card networks by the volume of transactions processed, a different picture emerges. The volume of Bitcoin transactions over the 24 hours to August 27 was $3.6 billion, which is not an outlier. That yields annual transaction volume of $1.33 trillion. This is below Mastercard’s approximate $6 trillion and Visa’s $7.8 trillion in payments volume over 2017. But it is not orders of magnitude below.
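The annualized figure is easy to reproduce; a short sketch using the daily volume quoted above (the small difference from the $1.33 trillion in the text reflects rounding of the daily number).

```python
# Annualizing the Bitcoin transaction volume quoted above.
btc_daily_volume = 3.6e9                   # USD, 24 hours to August 27
btc_annual_volume = btc_daily_volume * 365
print(f"Bitcoin, annualized: ${btc_annual_volume / 1e12:.2f} trillion")

# 2017 payment volumes cited in the text, for comparison:
print("Mastercard: ~$6.0 trillion; Visa: ~$7.8 trillion")
print(f"Visa handles {7.8e12 / btc_annual_volume:.1f} times Bitcoin's volume, "
      "within one order of magnitude")
```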

In fact, as ARK Invest reported over the weekend, Bitcoin has already surpassed the smaller card network Discover, and online payments pioneer Paypal, in online transactions volume. Overtaking the most successful payment network of the internet era is quite a milestone.

Source: ARK Invest newsletter, Aug. 26

Long-term prospects for the Bitcoin network

As mentioned above, comparisons between Bitcoin and intermediated payment networks must be conducted with caution, because most transactions on Mastercard, Paypal and Visa are for the exchange of goods and services, whereas much of the dollar value of Bitcoin transactions has to do with speculative investment in the cryptocurrency and the mining of new bitcoins. Only a fraction of Bitcoin payments involve goods and services.

However, that people are eager to get hold of bitcoins today shows that some firmly believe Bitcoin has the potential to become more widely demanded.

The prospects for Bitcoin as an investment, on the other hand, are perhaps more questionable than its proponents assume. After all, the value of a medium of exchange is given by the equation MV = PQ, where M is the money supply, V the velocity at which money units change hands, P the price level and Q the real volume of transactions.

Bitcoin bulls posit, quite plausibly, that Q will only grow in coming years. But unless Bitcoin becomes a store-of-value cryptocurrency that is not frequently exchanged, V will also grow as Bitcoin users transact more on the network. This will push up the price level P and depress the value of individual bitcoins. Thus, Bitcoin’s very success as a medium of exchange may doom it as an investment.
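A small sketch of that argument, with made-up growth factors rather than forecasts: rearranging MV = PQ, the purchasing power of one bitcoin is 1/P = Q/(MV), so with the money supply M on a fixed schedule, the value of a coin rises only if transaction volume Q outgrows velocity V.

```python
# Illustrative only: how Bitcoin's purchasing power scales with Q and V,
# holding the money supply M fixed (1/P is then proportional to Q / V).
scenarios = {
    "Q doubles, V flat":    {"Q_growth": 2.0, "V_growth": 1.0},
    "Q doubles, V doubles": {"Q_growth": 2.0, "V_growth": 2.0},
    "Q doubles, V triples": {"Q_growth": 2.0, "V_growth": 3.0},
}

for name, s in scenarios.items():
    value_index = s["Q_growth"] / s["V_growth"]  # purchasing power relative to today (= 1.0)
    print(f"{name}: bitcoin purchasing-power index = {value_index:.2f}")
```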

But that is orthogonal to the policy discussion as to whether Bitcoin’s admittedly large power requirements are a matter of concern. Whereas competing payment systems rely on many inputs, from physical buildings to a skilled workforce to reputational and financial capital, Bitcoin’s primary input is electricity. What Bitcoin illustrates is that achieving successful governance without the role of an intermediary is costly – costly enough that Bitcoin may struggle to outcompete payment intermediaries.

On the other hand, there are efforts afoot to introduce innovations into the Bitcoin network to increase its energy efficiency. The Lightning Network project, which seeks to enable transactions to happen outside the Bitcoin blockchain over the course of (for example) a trading day, and to record only the starting and closing balance, is such an initiative. Other, more controversial ideas are to rapidly increase the maximum size of a transaction block, which would speed up transaction processing but might not make much of a dent in power usage. Others have addressed the issue more directly, by building renewable generation capacity specifically aimed at cryptocurrency mining.

Is Bitcoin’s electricity use socially wasteful?

Behind claims like Carstens’ that Bitcoin is “an environmental disaster” lies the veiled accusation that the cryptocurrency’s electricity use is somehow less legitimate, or socially less valuable, than electricity use by schools, hospitals, households and offices. Is there any truth to this claim?

Economists have known since at least Pigou that the only way to determine wastefulness in resource use is by examining whether an activity has unpriced externalities which might lead agents to over- or underuse the resource. In those instances, these social costs must be incorporated into the price of the resource to motivate efficient production.

What this means for Bitcoin is that the cryptocurrency itself cannot be “socially wasteful.” The environmental impact of electricity use does not depend on the purpose of that use: whether electric power is consumed for the mining of cryptocurrency or the production of cars has no bearing on the environmental effects. Therefore, the impact of Bitcoin depends on two factors over which the network has no control: the way in which power is generated and how electricity is priced.

Both vary widely across jurisdictions. Iceland, which due to its comparatively low power costs and still lower temperatures is a favorite location for Bitcoin miners, generates nearly all of its electricity from renewable geothermal sources, which emit much lower amounts of carbon than coal- or gas-fired plants. Iceland participates in the European Union’s emissions trading system, which despite its imperfect design does a good job of internalizing the social cost of power generation.

Canada, like Iceland a cold jurisdiction, uses hydroelectric power to generate 59 percent of its electricity. In the crypto-favorite province of Quebec, 95 percent of power is hydroelectric, and prices are particularly low. While the overall environmental impact of hydropower is contested, there is agreement that its carbon footprint is a fraction of those of gas- and coal-fired plants. Canada has also recently implemented a nationwide cap-and-trade scheme in a bid to price carbon emissions.

China, the largest jurisdiction for mining, offers a less encouraging picture, as it still generates close to half its electricity from coal. However, this is drastically down from 72 percent in 2015 and has dropped even in absolute terms. The People’s Republic’s attempts to reduce its carbon footprint per unit of GDP have relied more on command-and-control shifts from coal to other sources than on market forces.

Whichever way one looks at it, however, the environmental impact of Bitcoin and other electricity-intensive cryptocurrencies is a function not of their software architecture, but of the energy policies in the countries where miners operate.

Much ado about nothing

We can only conclude that reports of cryptocurrencies’ wreaking environmental havoc have been greatly exaggerated. An examination of transaction volumes shows that Bitcoin’s power use is not outside the league of intermediated payments systems. Moreover, it will be in the interest of Bitcoin miners to reduce the per-transaction electricity cost of mining, as otherwise the network will struggle to grow and compete with incumbents. Finally, there is no evidence that cryptocurrencies have environmental externalities beyond those that can be ascribed to any electricity user wherever electricity is inefficiently priced. But public policy, not cryptocurrency innovation, is at fault there.

[Cross-posted from Alt-M.org]

As a practicing physician I have long been frustrated with the Electronic Health Record (EHR) system the federal government required health care practitioners to adopt by 2014 or face economic sanctions. This manifestation of central planning compelled many doctors to scrap electronic record systems already in place because the planners determined they were not used “meaningfully.” They were forced to buy a government-approved electronic health system and conform their decision-making and practice techniques to algorithms the central planners deem “meaningful.”  Other professions and businesses make use of technology to enhance productivity and quality. This happens organically. Electronic programs are designed to fit around the unique needs and goals of the particular enterprise. But in this instance, it works the other way around: health care practitioners need to conform to the needs and goals of the EHR. This disrupts the thinking process, slows productivity, interrupts the patient-doctor relationship, and increases the risk of error. As Twila Brase, RN, PHN ably details in “Big Brother in the Exam Room,” things go downhill from there.

With painstaking, almost overwhelming detail that makes the reader feel the enormous complexity of the administrative state, Ms. Brase, who is president and co-founder of Citizens’ Council for Health Freedom (CCHF), traces the origins and motives that led to Congress passing the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. The goal from the outset was for the health care regulatory bureaucracy to collect the private health data of the entire population and use it to create a one-size-fits-all standardization of the way medicine is practiced. This standardization is based upon population models, not individual patients. It uses the EHR design to nudge practitioners into surrendering their judgment to the algorithms and guidelines adopted by the regulators. Along the way, the meaningfully used EHR makes practitioners spend the bulk of their time entering data into forms and clicking boxes, providing the regulators with the data needed to generate further standardization.

Brase provides wide-ranging documentation of the way this “meaningful use” of the EHR has led to medical errors and the replication of false information in patients’ health records. She shows how the planners intend to morph the Electronic Health Record into a Comprehensive Health Record (CHR), through the continual addition of new data categories, delving into the details of lifestyle choices that may arguably relate indirectly to health: from sexual proclivities, to recreational behaviors, to gun ownership, to dietary choices. In effect, a meaningfully used Electronic Health Record is nothing more than a government health surveillance system.  As the old saying goes, “He who pays the piper calls the tune.” If the third party—especially a third party with the monopoly police power of the state—is paying for health care it may demand adherence to lifestyle choices that keep costs down.

All of this data collection and use is made possible by the Orwellian-named Health Insurance Portability and Accountability Act (HIPAA) of 1996. Most patients think of HIPAA as a guarantee that their health records will remain private and confidential. They think all those “HIPAA Privacy” forms they sign at their doctor’s office are there to ensure confidentiality. But, as Brase points out very clearly, HIPAA gives numerous exemptions to confidentiality requirements for the purposes of collecting data and enforcing laws. As Brase puts it,

 It contains the word privacy, leaving most to believe it is what it says, rather than reading it to see what it really is. A more honest title would be “Notice of Federally Authorized Disclosures for Which Patient Consent Is Not Required.”

It should frighten any reader to learn just how exposed the personal medical information is to regulators in and out of government. Some of the data collected without the patients’ knowledge is generated by what Brase calls “forced hospital experiments” in health care delivery and payment models, also conducted without the patients’ knowledge. Brase documents how patients remain in the dark about being included in payment model experiments, even including whether or not they are patients being cared for by an Accountable Care Organization (ACO). 

Again quoting Brase, 

Congress’s insistence that physicians install government health surveillance systems in the exam room and use them for the care of patients, despite being untested and unproven—and an unfunded mandate—is disturbing at so many levels—from privacy to professional ethics to the patient-doctor relationship. 

As the book points out, more and more private practitioners are opting out of this surveillance system. Some are opting out of the third-party payment system (including Medicare and Medicaid) and going to a “Direct Care” cash-pay model, which exempts them from HIPAA and the government’s EHR mandate. Some are retiring early and/or leaving medical practice altogether. Many, if not most, are selling their practices to hospitals or large corporate clinics, transferring the risk of severe penalties for non-compliance to those larger entities.

Health information technology can and should be a good thing for patients and doctors alike. But when the government, rather than individual patients and doctors, decides what kind of technology that will be and how it will be used, health information technology can become a dangerous threat to liberty, autonomy, and health.

“Big Brother In The Exam Room” is the first book to catalog in meticulous detail the dangerous ways in which health information technology is being weaponized against us all.  Everyone should read it. 

It has been a whirlwind week of negotiations on the North American Free Trade Agreement (NAFTA), ending on Friday in apparent deadlock. Canada was not able to reach a deal with the United States on some of the remaining contentious issues, but that did not stop President Trump from submitting a notice of intent to Congress to sign a deal with Mexico that was agreed to earlier this week. This action allows the new trade agreement to be signed by the end of November, before Mexican President Enrique Pena Nieto leaves office. While a high degree of uncertainty remains, it is premature to ring the alarm for the end of NAFTA as we know it.

Why? First, there is still some negotiating latitude built into the Trade Promotion Authority (TPA) legislation, which outlines the process for how the negotiations unfold. The full text of the agreement has to be made public thirty days after the notice of intent to sign is submitted to Congress. This means that the parties have until the end of September to finalize the contents of the agreement. What we have now is just an agreement in principle, which can be thought of as a draft of the agreement, with a lot of little details still needing to be filled in. Therefore, it is not surprising that the notice submitted to Congress today left open the possibility of Canada joining the agreement “if it is willing” at a later date. Canadian Foreign Minister Chrystia Freeland will resume talks with U.S. Trade Representative Robert Lighthizer next Wednesday, and this should be seen as a sign that the negotiations are far from over.

Relatedly, TPA legislation does not provide a clear answer as to whether the President can split NAFTA into two bilateral deals. The original letter of intent to re-open NAFTA, which was submitted by Amb. Lighthizer in May 2017, notified Congress that the President intended to “initiate negotiations with Canada and Mexico regarding modernization of the North American Free Trade Agreement (NAFTA).” This can be read as signaling that not only were the negotiations supposed to be with both Canada and Mexico, but also that Congress only agreed to this specific arrangement.  In addition, it could be argued that TPA would require President Trump to “restart the clock” on negotiations with a new notice of intent to negotiate with Mexico alone. The bottom line, however, is that it is entirely up to Congress to decide whether or not it will allow for a vote on a bilateral deal with Mexico only, and so far, it appears that Congress is opposed to this. 

In fact, Congress has been fairly vocal about the fact that a NAFTA without Canada simply does not make sense. Canada and Mexico are the top destinations for U.S. exports and imports, with total trade reaching over $1 trillion annually. Furthermore, we don’t just trade things with each other in North America, we make things together. Taking Canada out of NAFTA is analogous to putting a wall in the middle of a factory floor. It has been estimated that every dollar of imports from Mexico includes forty cents of U.S. value added, and for Canada that figure is twenty-five cents for every dollar of imports—these are U.S. inputs in products that come back to the United States.

While President Trump may claim that he’s playing hardball with Canada by presenting an offer they cannot reasonably accept, we should approach such negotiating bluster with caution. In fact, the reality is that there is still plenty of time to negotiate, and Canada seems willing to come back to the table next week. At a press conference at the Canadian Embassy in Washington D.C. after negotiations wrapped up for the week, Minister Freeland remarked that Canada wants a good deal, and not just any deal, adding that a win-win-win was still possible. Negotiations are sure to continue amidst the uncertainty, and it will be a challenging effort to parse the signal from the noise. However, we should remain optimistic that a trilateral deal is within reach and take Friday’s news as just another step in that direction.

A Massachusetts statute prohibits ownership of “assault weapons,” the statutory definition of which includes the most popular semi-automatic rifles in the country, as well as “copies or duplicates” of any such weapons. As for what that means, your guess is as good as ours. A group of plaintiffs, including two firearm dealers and the Gun Owners’ Action League challenged the law as a violation of the Second Amendment. Unfortunately, federal district court judge William Young upheld the ban.

Judge Young followed the lead of the Fourth Circuit’s decision in Kolbe v. Hogan (in which Cato filed a brief supporting a petition to the Supreme Court), which misconstrued a shred of the landmark 2008 District of Columbia v. Heller opinion to hold that the test for whether a class of weapons could be banned was whether it was “like an M-16,” contravening the core of Heller—that all weapons in common civilian use are constitutionally protected. What’s worse is that Judge Young seemed to go a step further, rejecting the argument that an “M-16” is a machine gun, unlike the weapons banned by Massachusetts, and deciding that semi-automatics are “almost identical to the M16, except for the mode of firing.” (The mode of firing is, of course, the principal distinction between automatic and semi-automatic firearms.)

The plaintiffs are appealing to the U.S. Court of Appeals for the First Circuit. Cato, joined by several organizations interested in the protection of our civil liberties and a group of professors who teach the Second Amendment, has filed a brief supporting the plaintiffs. We point out that the Massachusetts law classifies the common semi-automatic firearms used by police officers as “dangerous and unusual” weapons of war, alienating officers from their communities and undermining policing by consent.

Where for generations Americans needed to look no further than the belt of their local deputies for guidance in selecting a defensive firearm, Massachusetts’ restrictions now bar civilians from these very same arms. The firearms selected by experts for reliability and overall utility as defensive weapons would be unavailable for the lawful purpose of self-defense. According to Massachusetts, these law enforcement tools aren’t defensive, but instead implements of war designed to inflict mass carnage.

Where tensions between police and policed are a sensitive issue, Massachusetts sets up a framework where the people can be fired upon by police with what the state fancies as an instrument of war, a suggestion that only serves to drive a wedge between police and citizenry.

Further, the district court incorrectly framed the question as whether the banned weapons were actually used in defensive shootings, instead of following Supreme Court precedent and asking whether the arms were possessed for lawful purposes (as they unquestionably were). This skewing of legal frameworks is especially troublesome where the Supreme Court has remained silent on the scope of the right to keep and bear arms for the last decade, leading to a fractured and unpredictable state of the law.

Today, the majority of firearms sold in the United States for self-defense are illegal in Massachusetts. The district court erred in upholding this abridgment of Bay State residents’ rights. The Massachusetts law is unconstitutional on its face and the reasoning upholding it lacks legal or historical foundation.

Last weekend the Federal Reserve Bank of Kansas City hosted its annual symposium in Jackson Hole. Despite being the Fed’s largest annual event, the symposium has been “fairly boring” for years, in terms of what can be learned about the future of actual policy. This year’s program, Changing Market Structures and Implications for Monetary Policy, was firmly in that tradition—making Jerome Powell’s speech, his first there as Fed Chair, the main event. In it, he covered familiar ground, suggesting that the changes he has begun as Chair are likely to continue.

Powell constructed his remarks around a nautical metaphor of “shifting stars.” In macroeconomic equations a variable has a star superscript (*) on it to indicate it is a fundamental structural feature of the economy. In Powell’s words, these starred values in conventional economic models are the “normal,” or “natural,” or “desired” values (e.g. u* for the natural rate of unemployment, r* for the neutral rate of interest, and π* for the optimal inflation rate). In these models the actual data are supposed to fluctuate around these stars. However, the models require estimates for many star values (the exception being desired inflation, which the Fed has chosen to be a 2% annual rate) because they cannot be directly observed and therefore must be inferred.

These models then use the gaps between actual values and the starred values to guide—or navigate, in Powell’s metaphor—the path of monetary policy. The most famous example is, of course, the Taylor Rule, which calls for interest rate adjustments depending on how far the actual inflation rate is from desired inflation and how far real GDP is from its estimated potential. Powell’s thesis is that as these fundamental values change, and particularly as their estimates become more uncertain—as the stars shift, so to speak—using them as guides to monetary policy becomes more difficult and less desirable.
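For reference, the classic Taylor (1993) rule fits in a few lines of code. The sketch below uses Taylor’s original coefficients of 0.5 on each gap; the inputs are illustrative, and the starred values (r*, π*, potential GDP) are precisely the estimated quantities Powell is warning about.

```python
def taylor_rule(inflation, inflation_target, output_gap, r_star,
                a_pi=0.5, a_y=0.5):
    """Taylor (1993) rule: prescribed nominal policy rate as a function of the
    inflation gap and the output gap. r_star and inflation_target are the
    'starred' values that must be estimated rather than observed."""
    return (r_star + inflation
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

# Illustrative inputs only, not actual Fed estimates (all in percent):
rate = taylor_rule(inflation=2.0, inflation_target=2.0,
                   output_gap=0.5, r_star=1.0)
print(f"Prescribed federal funds rate: {rate:.2f}%")  # 3.25% with these inputs
```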

His thesis echoes a point he made during his second press conference as Fed Chair when he said policymakers “can’t be too attached to these unobservable variables.” It also underscores Powell’s expressed desire to move the Fed in new directions: less wedded to formal models, open to a broader range of economic views, and potentially towards using monetary policy rules. To be clear, while Powell has outlined these new directions it remains to be seen how and whether such changes will actually be implemented.

A specific example of a new direction—and to my mind the most important comment in the Jackson Hole speech—was Powell’s suggestion that the Fed look beyond inflation in order to detect troubling signs in the economy. A preoccupation with inflation is a serious problem at the Fed, and one that had disastrous consequences in 2008. Indeed, Powell noted that the “destabilizing excesses,” (a term that he should have defined) in advance of the last two recessions showed up in financial market data rather than inflation metrics.

While Powell is more open to monetary policy rules than his predecessors, he’s yet to formally endorse them as anything other than helpful guides in the policymaking process. At Jackson Hole he remarked, “[o]ne general finding is that no single, simple approach to monetary policy is likely to be appropriate across a broad range of plausible scenarios.” This was seen as a rejection of rule-based monetary policy by Mark Spindel, noted Fed watcher and co-author of a political history of the Fed. However, given the shifting stars context of the speech, Powell’s comment should be interpreted as saying that when the uncertainty surrounding the stars is increasing, the usefulness of the policy rules that rely on those stars as inputs is decreasing. In other words, Powell is questioning the use of a mechanical rule, not monetary policy rules more generally.

Such an interpretation is very much in keeping with past statements made by Powell. For example, in 2015, as a Fed Governor, he said he was not in favor of a policy rule that was a simple equation for the Fed to follow in a mechanical fashion. Two years later, Powell said that traditional rules were backward looking, but that monetary policy needs to be forward looking and not overly reliant on past data. Upon becoming Fed Chair early this year, Powell made it a point to tell Congress he found monetary policy rules helpful—a sentiment he reiterated when testifying on the Hill last month.

The good news is that there is a monetary policy rule that is forward looking, not concerned with estimating the “stars,” and robust against an inflation fixation. I am referring to a nominal GDP level target, of course; a monetary policy rule that has been gaining advocates.

As in years past, there was not a lot of discussion about the future of actual monetary policy at the Jackson Hole symposium. But if Powell really is moving the Federal Reserve towards adopting a rule, he is also beginning to outline a framework that should make a nominal GDP rule the first choice.

[Cross-posted from Alt-M.org]

It would have been natural to assume that partisan gerrymandering would not return as an issue to the Supreme Court until next year at the earliest, the election calendar for this year being too far advanced. But yesterday a federal judicial panel ruled that North Carolina’s U.S. House lines were unconstitutionally biased toward the interests of the Republican Party and suggested that it might impose new lines for November’s vote, even though there would be no time in which to hold a primary for the revised districts. Conducting an election without a primary might seem like a radical remedy, but the court pointed to other offices for which the state of North Carolina provides for election without a preceding primary stage.

If the court takes such a step, it would seem inevitable that defenders of the map will ask for a stay of the ruling from the U.S. Supreme Court. In June, as we know, the Court declined to reach the big constitutional issues on partisan gerrymandering, instead finding ways to send the two cases before it (Gill v. Whitford from Wisconsin and Benisek v. Lamone from Maryland) back to lower courts for more processing. 

In my forthcoming article on Gill and Benisek in the Cato Supreme Court Review, I suggest that with the retirement of Justice Anthony Kennedy, who’d been the swing vote on the issue, litigators from liberal good-government groups might find it prudent to refrain for a while from steering the question back up to the high court, instead biding their time in hopes of new appointments. After all, Kennedy’s replacement, given current political winds, is likely to side with the conservative bloc. But a contrasting and far more daring tactic would be to take advantage of the vacancy to make a move in lower courts now. To quote Rick Hasen’s new analysis at Election Law Blog, “given the current 4-4 split on the Supreme Court, any emergency action could well fail, leaving the lower court opinion in place.” And Hasen spells out the political implications: “if the lower court orders new districts for 2018, and the Supreme Court deadlocks 4-4 on an emergency request to overturn that order, we could have new districts for 2018 only, and that could help Democrats retake control of the U.S. House.”

Those are very big “ifs,” however. As Hasen concedes, “We know that the Supreme Court has not liked interim remedies in redistricting and election cases close to the election, and it has often rolled back such changes.” Moreover, Justices Breyer and Kagan in particular have lately shown considerable willingness to join with conservatives where necessary to find narrow grounds for decision that keep the Court’s steps small and incremental, so as not to risk landmark defeats at the hands of a mobilized 5-4 conservative court. It would not be surprising if one or more liberal Justices join a stay of a drastic order in the North Carolina case rather than set up a 2019 confrontation in such a way as to ensure a maximally ruffled conservative wing.

Some of these issues might come up at Cato’s 17th annual Constitution Day Sept. 17 – mark your calendar now! – where I’ll be discussing the gerrymandering cases on the mid-afternoon panel.

In the first of this series of posts, I explained that the mere presence of fractional-reserve banks itself has little bearing on an economy’s rate of money growth, which mainly depends on the growth rate of its stock of basic (commodity or fiat) money. The one exception to this rule, I said, consists of episodes in which growth in an economy’s money stock, defined broadly to include the public’s holdings of readily-redeemable bank IOUs as well as its holdings of basic money, is due in whole or in part to a decline in bank reserve ratios.

In a second post, I pointed out that, while falling bank reserve ratios might in theory be to blame for business booms, a look at some of the more notorious booms shows that they did not in fact coincide with any substantial decline in bank reserve ratios.

In this third and final post, I complete my critique of the “Fractional Reserves lead to Austrian Business Cycles” (FR=ABC) thesis, by showing that, when fractional-reserve banking system reserve ratios do decline, the decline doesn’t necessarily result in a malinvestment boom.

Causes of Changed Bank Reserve Ratios

That historic booms haven’t typically been fueled by falling bank reserve ratios, meaning ratios of commercial bank reserves to commercial bank demand deposits and notes, doesn’t mean that those ratios never decline. In fact they may decline for several reasons. But when they do change, commercial bank reserve ratios usually change gradually rather than rapidly. In contrast, central banks, and fiat-money-issuing central banks especially, can and sometimes do expand their balance sheets quite rapidly, if not dramatically. It’s for this reason that monetary booms are more likely to be fueled by central bank credit operations than by commercial banks’ decisions to “skimp” more than usual on reserves.

There are, however, some exceptions to the rule that reserve ratios tend to change only gradually. One of these stems from government regulations, changes in which can lead to reserve ratio changes that are both more substantial and more sudden. Thus in the U.S. during the 1990s changes to minimum bank reserve requirements and the manner of their enforcement led to a considerable decline in actual bank reserve ratios. In contrast, the Federal Reserve’s decision to begin paying interest on bank reserves starting in October 2008, followed by its various rounds of Quantitative Easing, caused bank reserve ratios to increase dramatically.

The other exception concerns cases in which fractional reserve banking is just developing. Obviously as that happens a switch from 100-percent reserves, or its equivalent, to some considerably lower fraction, might take place over a relatively short time span. In England during the last half of the 17th century, for example, the rise first of the goldsmith banks and then of the Bank of England led to a considerable reduction in the demand for monetary gold, its place being taken by a combination of paper notes and readily redeemable deposits.

Yet even that revolutionary change involved a less rapid increase in the role of fiduciary media, with even less significant cyclical implications, than one might first suppose, for several reasons. First, only a relatively small number of persons dealt with banks at first: for the vast majority of people, “money” still meant nothing other than copper and silver coins, plus (for the relatively well-heeled) the occasional gold guinea. Second, bank reserve ratios remained fairly high at first — the best estimates put them at around 30 percent or so — declining only gradually from that relatively high level. Finally, the fact that the change was as yet limited to England and one or two other economies meant that, instead of resulting in any substantial change in England’s money stock, level of spending, or price level, it led to a largely contemporaneous outflow of now-surplus gold to the rest of the world. By allowing paper to stand in for specie, in other words, England was able to export that much more precious metal. The same thing occurred in Scotland over the course of the next century, only to a considerably greater degree thanks to the greater freedom enjoyed by Scotland’s banks. It was that development that caused Adam Smith to wax eloquent on the Scottish banking system’s contribution to Scottish economic growth.

Eventually, however, any fractional-reserve banking system tends to settle into a relatively “mature” state, after which, barring changes to government regulations, bank reserve ratios are likely to decline only gradually, if they decline at all, in response to numerous factors including improvements in settlement arrangements, economies of scale, and changes in the liquidity or marketability of banks’ non-reserve assets. For this reason it’s perfectly absurd to treat the relatively rapid expansion of fiduciary media in a fractional-reserve banking system that’s just taking root as illustrating tendencies present within established fractional-reserve banking systems.

Yet that’s just what some proponents of 100-percent banking appear to do. For example, in a relatively recent blog post, Robert Murphy serves up the following “standard story of fractional reserve banking”:

Starting originally from a position of 100% reserve banking on demand deposits, the commercial banks look at all of their customers’ deposits of gold in their vaults, and take 80% of them, and lend them out into the community. This pushes down interest rates. But the original rich depositors don’t alter their behavior. Somebody who had planned on spending 8 of his 10 gold coins still does that. So aggregate consumption in the community doesn’t drop. Therefore, to the extent that the sudden drop in interest rates induces new investment projects that wouldn’t have occurred otherwise, there is an unsustainable boom that must eventually end in a bust.

Let pass Murphy’s unfounded — and by now repeatedly-refuted — suggestion that fractional reserve banking started out with bankers’ lending customers’ deposits without the customers knowing it. And forget as well, for the moment, that any banker who funds loans using deposits that the depositors themselves intend to spend immediately will go bust in short order. The awkward fact remains that, once a fractional-reserve banking system is established, it cannot go on being established again and again, but instead settles down to a relatively stable reserve ratio. So instead of explaining how fractional reserve banking can give rise to recurring business cycles, the story Murphy offers is one that accounts for only a single, never to be repeated fractional-reserve based cyclical event.

Desirable and Undesirable Reserve Ratio Changes

Finally, a declining banking system reserve ratio doesn’t necessarily imply excessive money creation, lending, or bank maturity mismatching. That’s because, notwithstanding what Murphy and others claim, competing commercial banks generally can’t create money, or loans, out of thin air. Instead, their capacity to lend, like that of other intermediaries, depends crucially on their success at getting members of the public to hold on to their IOUs. The more IOUs bankers’ customers are willing to hold on to, and the fewer they choose to cash in, the more the bankers can afford to lend. If, on the other hand, instead of holding onto a competing bank’s IOUs, the bank’s customers all decide to spend them at once, the bank will fail in short order, and will do so even if its ordinary customers never stage a run on it. All of this goes for the readily redeemable bank IOUs that make up the stock of bank-supplied money no less than for IOUs of other sorts. In other words, contrary to what Robert Murphy suggests in the passage quoted above, it matters a great deal to any banker whether persons who have exchanged basic money for his bank’s redeemable claims plan to go on spending, and thereby drawing down those claims, or not.

Furthermore, as I show in part II of my book on free banking, in a free or relatively free banking system, meaning one in which there are no legal reserve requirements and banks are free to issue their own circulating currency, bank reserve ratios will tend to change mainly in response to changes in the public’s demand to hold on to bank-money balances. When people choose to increase their holdings of (that is, to put off spending) bank deposits or notes or both, the banks can profitably “stretch” their reserves further, making them support a correspondingly higher quantity of bank money. If, on the other hand, people choose to reduce their holdings of bank money by trying to spend them more aggressively, the banks will be compelled to restrict their lending and raise their reserve ratios. The stock of bank-created money will, in other words, tend to adjust so as to offset opposite changes in money’s velocity, thereby stabilizing the product of the two.
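To put this last point in symbols (a minimal identity-style sketch of the claim, not a formal model): if M is the stock of bank money and V is its velocity, then keeping total spending MV roughly constant requires that

\[
MV \approx \text{constant} \quad\Longrightarrow\quad \frac{\Delta M}{M} \approx -\frac{\Delta V}{V},
\]

so a 5 percent fall in velocity (the public holding, rather than spending, its bank-money balances) is matched by an approximately 5 percent rise in the quantity of bank money, and vice versa.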

This last result, far from implying a means by which fractional-reserve banks might fuel business cycles, suggests on the contrary that the equilibrium reserve ratio changes in a free banking system can actually help to avoid such cycles. For according to Friedrich Hayek’s writings of the 1930s, in which he develops his theory of the business cycle most fully, avoiding such cycles is a matter of maintaining, not a constant money stock (M), but a constant “total money stream” (MV).

Voluntary and Involuntary Saving

Hayek’s view is, of course, distinct from Murray Rothbard’s, and also from that of many other Austrian critics of fractional reserve banking. But it is also more intuitively appealing. For the Austrian theory of the business cycle attributes unsustainable booms to occasions when bank-financed investment exceeds voluntary saving. Such booms are unsustainable because the unnaturally low interest rates with which they’re associated inevitably give way to higher ones consistent with the public’s voluntary willingness to save. But why should rates rise? They rise because lending in excess of voluntary savings means adding more to the “total money stream” than savers take out of that stream. Eventually that increased money stream will serve to bid up prices. Higher prices will in turn raise the demand for loans, pushing interest rates back up. The increase in rates then brings the boom to an end, launching the “bust” stage of the cycle.

If, in contrast, banks lend more only to the extent that doing so compensates for the public’s attempts to accumulate balances of bank money, the money stream remains constant. Consequently the increase in bank lending doesn’t result in any general increase in the demand for or prices of goods. There is, in this case, no tendency for either the demand for credit or interest rates to increase. The investment “boom,” if it can be called that, is therefore not self-reversing: it can go on for as long as the increased demand for fiduciary media persists, and perhaps forever.

As I’m not saying anything here that I haven’t said before, I have a pretty darn good idea what sort of counterarguments to anticipate. Among others I expect to see claims to the effect that people who hold onto balances of bank money (or fiduciary media or “money substitutes” or whatever one wishes to call bank-issued IOUs that serve as regularly-accepted means of exchange) are not “really” engaged in acts of voluntary saving, because they might choose to part with those balances at any time, or because a bank deposit balance or banknote is “neither a present nor a future good,” or something along these lines.

Balderdash. To “save” is merely to refrain from spending one’s earnings; and one can save by holding on to or adding to a bank deposit balance or redeemable banknote no less than by holding on to or accumulating Treasury bonds. That persons who choose to save by accumulating demand deposits do not commit themselves to saving any definite amount for any definite length of time does not make their decision to save any less real: so long as they hold on to bank-issued IOUs, they are devoting a quantity of savings precisely equal to the value of those IOUs to the banks that have them on their books. As Murray Rothbard himself might have put it — though he certainly never did so with regard to the case at hand — such persons have a “demonstrated preference” for not spending, that is, for saving, to the extent that they hold bank IOUs, where “demonstrated preference” refers to the (“praxeological”) insight that, regardless of what some outside expert might claim, people’s actual acts of choice supply the only real proof of what they desire or don’t desire. According to that insight, so long as someone holds a bank balance or IOU, he desires the balance or IOU, and not the things that could be had for it, or any part of it. That is, he desires to be a creditor to the bank against which he holds the balance or IOU.

And so long as banks expand their lending in accord with their customers’ demonstrated preference for such acts of saving, and no more, while contracting it as their customers’ willingness to direct their savings to them subsides, the banks’ lending will not contribute to business cycles, Austrian or otherwise.

Of course, real-world monetary systems don’t always conform to the ideal sort of banking system I’ve described, issuing more fiduciary media only to the extent that the public’s real demand for such media has itself increased. While  free banking systems of the sort I theorize about in my book tend to approximate this ideal, real world systems can and sometimes do create credit in excess of the public’s voluntary savings, occasionally without, though (as we’ve seen) most often with, the help of accommodative central banks. But that’s no reason to condemn fractional reserve banking. Instead it’s a reason for looking more deeply into the circumstances that sometimes allow banking and monetary systems to promote business cycles.

In other words, instead of repeating the facile cliché that fractional reserve banking causes business cycles, or condemning fiduciary media tout court, Austrian economists who want to put a stop to such cycles, and to do so without undermining beneficial bank undertakings, should inquire into the factors that sometimes cause banks to create more fiduciary media than their customers either want or need.

[Cross-posted from Alt-M.org]

As I reported before, a group of Chinese investors under the EB-5 immigration program have challenged the government’s illegal practice of counting spouses and minor children of investors against the immigration quota for investors. This practice, however, hurts all legal immigrants because the same provision governs the admission of derivatives of all legal immigrants. Counting derivatives dramatically reduces legal immigration, harming people trying to immigrate legally to the United States. The government finally responded to the lawsuit on Friday, and its response leaves much to be desired.

Background

Section 203 of the Immigration and Nationality Act (INA) provides three broad pathways for legal immigrants to receive green cards (i.e. permanent residence):

(a) Preference allocation for family-sponsored immigrants.—Aliens subject to the worldwide level specified in section 201(c) of this title for family-sponsored immigrants shall be allotted visas as follows …

(b) Preference allocation for employment-based immigrants.—Aliens subject to the worldwide level specified in section 201(d) of this title for employment-based immigrants in a fiscal year shall be allotted visas as follows …

(c) Diversity immigrants… aliens subject to the worldwide level specified in section 201(e) of this title for diversity immigrants shall be allotted visas each fiscal year as follows …

Nothing in subsections (a), (b), and (c) of section 203 makes the spouses and minor children of the family members, employees, investors, or diversity lottery winners eligible for status. It is only subsection (d) that creates an opportunity for them to immigrate:

(d) Treatment of family members.—A spouse or child… shall, if not otherwise entitled to an immigrant status and the immediate issuance of a visa under subsection (a), (b), or (c), be entitled to the same status, and the same order of consideration provided in the respective subsection, if accompanying or following to join, the spouse or parent.

Nothing in subsection (d) of section 203 applies the “worldwide levels” (or quotas) under subsection (a), (b), or (c) to the spouses and minor children of immigrants. They are then presumptively not subject to those limits.

The Government’s Argument

1) “Although the Court’s analysis should begin with the INA’s text, the meaning the Court ascribes to the statutory text must reflect the statute’s ‘context.’” -P. 19

The most incredible thing about the government’s response is that it explicitly eschews any effort to explain its practice using the language of section 203(d). I expected them to make the incorrect argument that because an immigrant has the “same status” as another immigrant, they are both subject to the same quota. But this is obviously false, for reasons I explain here. Adult children of U.S. citizens are subject to a quota under subsection (a) of section 203, while minor children are not subject to a quota under section 201(b), yet both receive the same immigrant status. What matters is not the status an immigrant has, but under which provision they receive that status—one with a cap or one without a cap. Yet this bad argument is better than what the government argues in its response, which is nothing at all.

2) “Section 203(d) is a means by which a derivative spouse or child can obtain a visa under their principal’s applicable category in Section 203(a), (b) or (c).” Emphasis added, P. 22

This is as close to an explanation as the government gives for its interpretation. It is asserting that dependents don’t receive status under subsection (d) of section 203 which has no quota, but under subsections (a), (b), and (c) which do have quotas. Yet it provides zero textual support for this view. In fact, the investors’ brief (p. 17) cites several provisions where Congress explicitly describes dependents as receiving status under subsection (d): 8 U.S.C. 1101(a)(15)(V); 8 U.S.C. 1154(l)(2)(C); 8 U.S.C. 1186b; 8 U.S.C. 1255(i)(1)(B); and Public Law 107 – 56.

3) “The country cap also explicitly applies to derivatives, as stated in INA section 202(b)… . And since the country cap is a subset of the overall family and employment-based caps, then equally clearly, if the country cap applies to derivatives, then so too do the overall caps” -Pp. 25-26

There are two types of immigration quotas: 1) “worldwide levels” that limit the absolute number of immigrants, and 2) “per-country” levels that limit the share of the worldwide level that a single nationality can receive. Section 202 of the INA does mention rules for counting some spouses and minor children against the per-country limits, but it never references spouses and minor children admitted under section 203(d). That is notable because subsection (d) of section 203 explicitly describes two types of spouses and minor children—those entitled to status under subsection (d) and those “otherwise entitled to immigrant status… under subsection (a), (b), or (c).”

This second group includes, for example, certain special immigrants under subsection (b)(4). The reason that spouses and children of these special immigrants are counted against the limits is that they are part of the definition of a special immigrant (see section 101(a)(27)). That means that these derivatives have to be counted because they receive status, not under subsection (d) of section 203 which has no cap, but under the capped subsections of section 203. Under the government’s view, these provisions that include spouses and children as part of the definition of special immigrants serve no purpose at all, which violates a basic canon of statutory interpretation. The government is attempting to confuse the two types of derivatives in order to save its erroneous interpretation.

4) “Congressional intent is further demonstrated by the fact that when Congress exempts derivative spouses and children from an applicable numerical cap, it almost always does so explicitly.” -P. 38.

This statement is the opposite of the truth. In support of its statement, it cites a number of categories of nonimmigrants (H-1Bs, H-2Bs, H-1B1s, E-3s, Ts, Us) and special immigrant Iraqis and Afghanis, but in almost every case it cites, the spouse or child is part of the definition of the eligible category. For example, H-1Bs are defined in section 101(a)(15)(H) as “an alien… who is coming temporarily to the United States to perform services… in a specialty occupation… and the alien spouse and minor children of any such alien.” In other words, spouses and children start out eligible, and so subject to the quota, so if Congress wanted to exempt them, it had to do so explicitly. But in section 203, spouses and children start out ineligible, and so not subject to a cap, and are separately made eligible under subsection (d), which has no quota, so there is no need to explicitly exempt them.

In every comparable case, where the spouses and children start out ineligible and then separately are made eligible, Congress specifically required them to be counted. The Refugee Act of 1980, section 207 of the INA, has a directly comparable provision. Spouses and children are not eligible under the definition of a refugee under section 101(a)(42) and so not subject to the cap on refugees in section 207(a), but section 207(c)(2)(A) makes them eligible, and when it does so, it explicitly states, “Upon the spouse’s or child’s admission to the United States, such admission shall be charged against the numerical limitation …” In other words, exactly the language that isn’t in section 203(d).

 5) “There is a particular reason why Congress would have specified derivative counting in this way in the Refugee Act: unlike the caps at issue in this case, which are set by statute, the refugee cap is established by the President. Thus, the specific derivative provision is a deliberate check on the very broad authority that Congress had otherwise delegated to the President in the Refugee Act.” -P. 40

This explanation simply doesn’t work for the government. Why would Congress need a special “check” on the President’s authority if, on the government’s theory, the statute already requires derivatives to be counted to begin with? It doesn’t make any sense.

6) “Plaintiffs point to a single allegedly contrary provision in the Refugee Act of 1980.” -P. 39

This is just false. The investors’ motion also cites three other directly comparable instances, all of which were enacted at the exact same time as section 203 in 1990 (pp. 21-22). These provisions provided green cards to Hong Kong employees, displaced Tibetans, and transitional diversity visa applicants. In each case, Congress created the category for principal applicants and separately created the eligibility for their spouses and minor children using almost exactly the same language as section 203(d). But in 1991, it amended each provision to require that spouses and children be counted against those quotas. Did the government not actually read the motion or did it misrepresent it?

7) “The 1990 Act contained exactly the same language as the 1965 Act… . When Congress repeats language with a well understood construction in a new statute, it is presumed to intend to continue that same construction.” -P. 23

This is also false. Under the Immigration Act of 1965, derivatives were explicitly required to be counted, being listed in a subsection that began “Aliens who are subject to the numerical limitations specified in section 201(a) shall be allotted visas … as follows:”. The last category was a “spouse or child” of a primary applicant. In 1990, spouses and children became their own subsection, not included in the categories subject to the worldwide limits. The government’s claim is simply untrue.

8) “Plaintiffs contend that the restructuring of Section 203 in the 1990 Act had huge substantive effects by taking EB-5 investors’ spouses and children… completely out of the preference system altogether.”

This is again false. Spouses and children of investors are still part of the preference system as their eligibility is tied to their parents or spouses, and they must wait alongside them. They cannot simply enter “outside the preference system.”

9) “Not once in the thirty years since the 1990 Act was passed has any court ever interpreted the INA in the way Plaintiffs now claim Congress intended all along.” -P. 35

First, the EB-5 backlog didn’t exist until 2014, so they never would have had standing to sue prior to then. Second, this isn’t the first time that the government has been caught miscounting green cards years after it implemented the policy. The Johnson, Nixon, and Ford administrations interpreted the Cuban Adjustment Act of 1966 to count Cubans against the immigration quotas, and almost a decade after the bill’s passage, the Ford administration was sued, and it admitted in court that it was wrong to count them all along. The fact that a practice has occurred for many years does not mean that the practice is correct.

10) “The D.C. Circuit has cautioned that ‘legislative posturing serves no useful purpose …’ …  The isolated floor statements that Plaintiffs cite thus carry little weight in constructing the meaning of the INA’s provisions respecting counting derivatives towards the annual allotments of EB-5 visas.” -P. 33

The government dismisses evidence that I reported on here that clearly indicates that members of Congress explicitly expected that the EB-5 program would admit 10,000 investors, not 3,500 investors and 6,500 derivatives. Some members explicitly described the process through which they envisioned spouses and children entering. While the government is correct that this shouldn’t trump the text of the law, it doesn’t—it reinforces what is already there. The government responds by citing an ambiguous conference committee report on the final bill that does not contradict the floor statements of the members and does not explicitly explain how it deals with the issue of derivatives. This could be because the conference committee was just as interested in determining the outcome of the bill as those individual members and didn’t want to lose members by stating one way or another.

11) “No legislative history even remotely supports the proposition that Congress meant to exclude all derivatives from applicable caps …” -P. 34

This is also false. As explained above, in the Immigration Act of 1990, Congress enacted provisions providing green cards for Hong Kong employees, displaced Tibetans, and transitional diversity visa applicants using the same language as section 203(d). In 1991, Congress amended the 1990 act to explicitly require counting of spouses and minor children of the principals. Here is an example with the change in bold:

(a) In General.–Notwithstanding the numerical limitations in sections 201 and 202 of the Immigration and Nationality Act, there shall be made available to qualified displaced Tibetans described in subsection (b) (or in subsection (d) as the spouse or child of such an alien) 1,000 immigrant visas in the 3-fiscal-year period beginning with fiscal year 1991.

(d) Derivative Status for Spouses and Children.–A spouse or child … shall, if not otherwise entitled to an immigrant status and the immediate issuance of a visa under this section, be entitled to the same status, and the same order of consideration, provided under this section, if accompanying, or following to join, his spouse or parent.

If derivatives were already required to be counted against the quota, Congress would not have needed to insert any language into this provision, but it did anyway, making its interpretation of this language manifest to all. Congress made this amendment in every relevant place except one: subsection (d) of section 203. This is as close as it gets to positive proof of Congress’s interpretation of the statute.

Conclusion

In summation, the government provides no theory at all of how the plain language of the statute requires counting. Its indirect textual evidence falls flat and even contradicts its claims, and it repeatedly misstates the legislative history. The government concludes by fearmongering about how much legal immigration would increase if it were forced to implement the statute Congress actually passed. But legal immigration isn’t scary, and even if it were, it is even scarier to allow the government the power to amend the laws without Congress.

Hours ago, Illinois Gov. Bruce Rauner (R) vetoed legislation that would have subjected enrollees in short-term health insurance plans to higher deductibles, higher administrative costs, higher premiums, and lost coverage. The vetoed bill would have blocked the consumer protections made available in that market by a final rule issued earlier this month by the U.S. Department of Health and Human Services, and would have (further) jeopardized ObamaCare’s risk pools by forcing even more sick patients into those pools.

Short-term plans are exempt from federal health insurance regulations, and as a result offer broader access to providers at a cost that is often 70 percent less than ObamaCare plans.

Rather than allow open competition between those two ways of providing health-insurance protection, the Obama administration sabotaged short-term plans. It forced short-term plan deductibles to reset after three months, and forced consumers in those plans to reenroll every three months, changes that increased administrative costs in that market.

The Obama administration further subjected short-term plan enrollees to medical underwriting after they fell ill – which meant higher premiums and cancelled coverage for the sick. Prior to the Obama rule, a consumer who purchased a short-term plan in January and developed cancer in February would have coverage until the end of December, at which point she could enroll in an ObamaCare plan. The National Association of Insurance Commissioners complained that the Obama rule required that her coverage expire at the end of March – effectively cancelling her coverage and leaving her with no coverage for up to nine months. The Obama administration stripped consumer protections from this market by expanding medical underwriting after enrollees get sick – something Congress has consistently tried to reduce.

Earlier this month, HHS restored and expanded the consumer protections the Obama administration gutted. It allowed short-term plans to cover enrollees for up to 12 months, and allowed insurers to extend short-term plans for up to an additional 24 months, for a total of up to 36 months. These changes allow short-term plans to offer deductibles tallied on an annual basis, rather than deductibles that reset every three months. They spare enrollees and insurers the expense of re-enrolling every three months. Most important, they allow short-term plans to protect enrollees who get sick from medical underwriting at least until they again become eligible to enroll in an ObamaCare plan the following January.

Indeed, HHS clarified that because the agency has no authority to regulate standalone “renewal guarantees” that allow short-term plan enrollees who fall ill to continue paying healthy-person premiums, “it may be possible for a consumer to maintain coverage under short-term, limited-duration insurance policies for extended periods of time” by “stringing together coverage under separate policies offered by the same or different issuers, for total coverage periods that would exceed 36 months.” As HHS Secretary Alex Azar explains, this helps ObamaCare:

Our decision to allow renewability and separate premium protections could also allow consumers to hold on to their short-term coverage if they get sick, rather than going to the exchanges, which improves the exchange risk pools.

I made that very argument in my comments on the proposed rule.

Illinois law automatically adopts whatever rules and definitions the federal government creates for short-term plans. If Illinois legislators had just done nothing, millions of Illinois residents automatically would have had a health insurance option that is more affordable and provides better coverage than ObamaCare.

But this is Illinois.

In their infinite wisdom, Illinois legislators passed legislation that once again would have exposed short-term plan enrollees to higher deductibles, higher administrative costs, higher premiums, and cancelled coverage. The bill would have:

  • Required that initial contract terms for short-term plans last no longer than six months. It further provided that such plans could be extended for no more than six additional months.
  • Mandated that consumers who wish to keep purchasing consecutive short-term plans go uninsured for 60 days. Some consumers would inevitably develop expensive conditions during that period, and therefore be left with no coverage until the next ObamaCare open enrollment period. 
  • Prohibited renewal guarantees. The legislation specifically cut off this option. As a result, it would have dumped every single short-term plan enrollee with an expensive illness into the Exchanges. Ironically, Illinois legislators who thought they were bolstering ObamaCare actually passed a bill that would have sabotaged it.

Thankfully, Gov. Rauner stopped this ignorant, ridiculous effort to deny consumer protections to short-term plan enrollees. All eyes now turn to California, where Gov. Jerry Brown (D) must sign or veto legislation that would deny medical care to those who miss ObamaCare’s open enrollment period – by banning short-term plans altogether.

The Trump administration reached a deal with Mexico today on some bilateral issues in the renegotiation of the North American Free Trade Agreement (NAFTA). Some details of what was agreed are here. Other issues have been reported by the press as having been agreed, but until we see official government announcements, we are skeptical that those issues have been fully resolved.

This is not the conclusion of the NAFTA talks, because there are a number of outstanding issues, and Canada has to be brought back to the table as well. Nevertheless, today’s United States-Mexico deal is, in some sense, “progress.” In another sense, however, it is a step backwards. To illustrate this, let’s look at the example of what was agreed on auto tariffs.

NAFTA eliminates tariffs on trade between Canada, Mexico, and the United States, but only for products that meet specific requirements to qualify as being made in North America. For example, you couldn’t make a car in China, ship it to Mexico and put the tires on, and then export it to the United States at the NAFTA zero tariff. Under current NAFTA rules, in order to qualify for duty free treatment, 62.5 percent of the content of a vehicle has to be from the NAFTA countries. 

The Trump administration has been opposed to this content threshold, arguing that it needs to be higher. A key part of the bilateral talks between the United States and Mexico was to address this issue, and also add some conditions related to wage levels.

With regard to the content threshold, the United States has asked for this requirement to be raised, and according to the fact sheet released by USTR, the new content requirement will be increased to 75 percent. On the wage levels, the United States has pushed for a provision that requires 40 percent of the content of light trucks and 45 percent of pickup trucks to be made by workers who earn at least $16 an hour, and Mexico appears to have agreed to this as well. These changes make it harder for Mexican producers to satisfy the conditions to get the zero tariffs, while Canada and the United States would not be affected by this change.
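To make the arithmetic of the tighter thresholds concrete, here is a minimal sketch of the qualification test as described above. The thresholds are the ones reported in this post; the vehicle’s content shares are hypothetical, and the actual rules of origin involve far more detailed tracing requirements than a two-number check.

```python
# Hypothetical illustration of the reported rules-of-origin thresholds.
# Real rules of origin involve detailed tracing requirements; this only shows
# how raising the thresholds can disqualify a vehicle that qualified before.

def qualifies_for_zero_tariff(regional_content, high_wage_content,
                              content_threshold, wage_threshold):
    """True if the vehicle meets both the content and wage-content thresholds."""
    return (regional_content >= content_threshold
            and high_wage_content >= wage_threshold)

# A hypothetical vehicle: 70% North American content, 35% of content made by
# workers earning at least $16 an hour.
regional_content, high_wage_content = 0.70, 0.35

# Old rule (as reported): 62.5% regional content, no wage-content requirement.
old_rule = qualifies_for_zero_tariff(regional_content, high_wage_content,
                                     content_threshold=0.625, wage_threshold=0.0)

# New rule (as reported): 75% regional content, 40% high-wage content.
new_rule = qualifies_for_zero_tariff(regional_content, high_wage_content,
                                     content_threshold=0.75, wage_threshold=0.40)

print(old_rule, new_rule)  # True False: duty-free before, subject to tariff after
```

A vehicle that fails the test does not become un-importable; it simply faces the ordinary tariff rather than NAFTA’s zero rate, which is precisely the cost increase discussed below.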

So what’s the point of all this? The goal of the Trump administration’s negotiators was to make it more difficult for autos to qualify for the zero tariffs. In other words, they are taking some of the free trade out of NAFTA.

Along the same lines, reports suggest there is a provision that would allow the United States to charge tariffs above the normal 2.5 percent tariff rate (which applies to countries that don’t have a trade agreement with the United States) for any new auto factories built in Mexico. It is not clear from today’s announcement whether this is included in the newly agreed provisions.

The impact of these changes, if the NAFTA talks are completed and the new rules go into effect, will vary by producer, so it is hard to give a precise assessment of how much they will raise costs overall. The 25 percent auto tariff that the Trump administration is currently considering – ostensibly based on national security, but really just protectionism – is likely to raise auto prices a lot. A recent study estimates that if Trump implements his proposed 25 percent tariff on auto imports, the average price of a compact car would increase by $1,408 to $2,057, while luxury SUVs and crossover vehicle prices could increase by $4,708 to $6,972. No similar study has yet been done for the new NAFTA content requirements, but whatever the final figure may be, it is clear that these stricter content requirements will raise prices to some extent, which will make autos more expensive for consumers and potentially make North American production less competitive.

The NAFTA renegotiation has led to great market uncertainty, and it would be nice to get this all resolved. But before we applaud the completion of any deal, what matters most is in the details. From what we know at the moment, those details suggest that NAFTA may have been made worse, not better. And there are still a lot of details to work out. A full assessment of the new NAFTA will have to await all of the final terms.

Furthermore, President Trump suggested that while Canada can join soon, it may not be part of the agreement at all, and instead could face a tariff on car exports to the United States. Leaving Canada out of a new NAFTA would be a mistake. On the phone during the announcement, Mexican President Enrique Pena Nieto remarked on more than one occasion that he was looking forward to Canada rejoining the talks. This should be received positively as it suggests Mexico is still committed to a trilateral deal.  What happens next is anyone’s guess, but we should keep our eyes open for the return of Canada’s Foreign Minister Chrystia Freeland to Washington to wrap up the discussions soon. Let’s hope the other changes still under discussion point us in a more positive direction.

Sometimes it’s worth reading the fine print in obscure regulatory proposals. One such example is contained in a “proposed rulemaking” by the EPA on what are called “dose-response models.”

Buried in the Federal Register a few months back (on April 30) is this seemingly innocuous verbiage:

EPA should also incorporate the concept of model uncertainty when needed as a default to optimize dose risk estimation based on major competing models, including linear, threshold, U-shaped, J-shaped and bell-shaped models.

Your eyes glaze over, right?

Instead they should be popping out. EPA is proposing a major change in the way we regulate radiation, carcinogens, and toxic substances in our environment.

Since the 1950s, environmental regulations have largely been based upon something called the “linear no-threshold” (LNT) model, which holds, for example, that the first photon of ionizing radiation has the same probability of causing cancer as the bazillionth one.

The hypothetical mechanism is that each photon has an equal probability of zapping a single base pair in our DNA, and that can result in a breakage which may have the very remote possibility of inducing cancer.

The LNT is in fact fallout from, well, fallout—the radioactive nuclides that were the widely dispersed byproduct of atmospheric testing of nuclear weapons from the 1940s through the early 1960s. University of Massachusetts toxicologist Ed Calabrese has painstakingly built a remarkable literature showing that the LNT was largely the work of one man, a Nobel Prize-winning mutation geneticist named Hermann Muller. His work was classified by the old Atomic Energy Commission, and, amazingly, as shown by Calabrese, likely not peer reviewed.

His work was also wrong. Since Muller’s early work on x-ray-induced mutations in fruit flies, it has been discovered that DNA breaks all the time, and that our cells carry their own repair kit. Cancer can occur when those repair mechanisms are overwhelmed by large numbers of breakages.

Think of the sun’s radiation, which includes the ionizing ultraviolet wavelengths. The LNT model implies the first photon we experience can cause cancer. In reality there is clearly a threshold above which the cancer probability rises. Everyone is exposed to the sun, but only some populations are prone to basal cell skin cancers, and those populations live in very sunny environments, which explains why such cancers are so prevalent in Australia and pretty much nonexistent in the planet’s northernmost populated areas.

In fact the LNT model isn’t just wrong—nature actually works opposite to it. Small amounts of exposure to things that are toxic in large amounts can actually be beneficial. Again, consider sunlight. It’s required to catalyze the final synthesis of Vitamin D. Absent Vitamin D, pretty terrible diseases result, such as rickets, with severe disfigurement and sometimes fatal sequelae.

The alternative model is also largely the handiwork of Dr. Calabrese, who calls it the “biphasic dose-response,” or “hormetic,” model. Note this is not homeopathy, which holds, nonsensically, that the smaller the dose of something, the greater the therapeutic effect.

The biphasic response is so ubiquitous that it forms much of the basis for modern pharmacology. Small doses of things like certain snake venoms can reduce high blood pressure with all the resultant clinical benefits. Large doses are obviously fatal. In fact, the first Angiotensin Converting Enzyme inhibitor antihypertensives (referred to as ACE inhibitors) were derived from the venom of a Brazilian viper.
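For readers who think in curves, here is a rough sketch of the three broad shapes at issue: the LNT model, a simple threshold model, and a biphasic (“hormetic”) model. The functional forms and parameters below are invented purely for illustration; they are not EPA’s or Calabrese’s actual models.

```python
# Illustrative dose-response shapes (invented parameters; illustration only).
import math

def lnt(dose, slope=0.01):
    # Linear no-threshold: added risk rises with the very first increment of dose.
    return slope * dose

def threshold_model(dose, threshold=20.0, slope=0.01):
    # No added risk until the dose exceeds the threshold.
    return 0.0 if dose <= threshold else slope * (dose - threshold)

def hormetic(dose, benefit=0.05, scale=10.0, threshold=20.0, slope=0.01):
    # Biphasic shape: a small beneficial dip at low doses (negative added risk),
    # with risk rising only once the dose is well past the threshold.
    dip = -benefit * (dose / scale) * math.exp(-dose / scale)
    return dip + threshold_model(dose, threshold, slope)

for dose in range(0, 101, 20):
    print(f"dose={dose:3d}  LNT={lnt(dose):+.3f}  "
          f"threshold={threshold_model(dose):+.3f}  hormetic={hormetic(dose):+.3f}")
```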

The obtuse verbiage quoted from the Federal Register means that EPA is proposing to, where appropriate (and that may be for a large number of instances), substitute other models, including the hormetic one, for the inappropriate LNT.

This rather momentous proposed change owes itself largely to one man—Ed Calabrese, who just happens to be a Cato adjunct scholar in the Center for the Study of Science. Calabrese has hundreds of papers in the scientific literature largely documenting the prevalence of hormesis and the rather shoddy way that the LNT was established and maintained, as well as a tremendous review article on it published last year in one of the Nature family of journals.

Last Thursday, Tucker Carlson invited Peter Kirsanow onto his top-rated Fox News show Tucker Carlson Tonight to discuss illegal immigration and crime. They began the segment by playing a recent clip of me and Carlson arguing about data on illegal immigrant criminality in Texas. In that earlier segment, Carlson said we don’t have good data on illegal immigrant criminality and I said we do, specifically from the state of Texas. The data show that illegal immigrants have a lower murder conviction rate than native-born Americans. 

Kirsanow responded to my clip in a multi-minute near-monologue. Unfortunately, Kirsanow made many errors and misstatements. His comments on television parroted a piece that he wrote earlier this year in National Review. That piece made so many mathematical, definitional, and logical errors that I rebutted it in detail in Reason this February.

Since I was not invited on Thursday’s segment to debate Kirsanow while he criticized my points and presented his own, I’ve decided to respond here.  Below are Kirsanow’s quotes from his recent appearance on Tucker Carlson Tonight, followed by my rebuttal.

There’s something called the State Criminal Alien Assistance Program and you can extrapolate from that and get pretty reliable data.

No, you cannot extrapolate from the State Criminal Alien Assistance Program (SCAAP) data to get reliable national estimates of illegal immigrant criminality. The subsequent statistics that Kirsanow uses in his segment are nearly all from a 2011 Government Accountability Office (GAO) report that specifically says, “[w]hile our analysis provides insight into the costs associated with incarcerating criminal aliens in these states and localities, the results of this analysis are not generalizable to other states and localities.” A follow-up GAO report on SCAAP in 2018 repeated the same warning that “[o]verall, our findings are not generalizable to criminal aliens not included in our federal and state and local study populations.” Data from the report that Kirsanow relies upon cannot be used for Kirsanow’s purposes.

SCAAP is a federal program that is supposed to compensate states and localities for incarcerating some illegal immigrants, but it is not a reliable program. As Kirsanow himself admitted, SCAAP only “partially reimburses states and localities for the cost of incarcerating certain criminal aliens [emphasis added].”  States also must choose whom to report to the federal government for SCAAP refunds, which are often small compared to the cost of incarceration, so requests are inconsistent, partial, and the criteria for reporting vary considerably by state. 

He [Alex Nowrasteh] conveniently mentioned Texas to claim that the homicide rates among illegal aliens is 44 percent lower than that of lawful residents. He chose the one state where it is true that the homicide rate is lower for illegal aliens, by 15 percent, not 44 percent.

Kirsanow is mixing and matching his sources here. First, I said that the homicide conviction rate for illegal immigrants in Texas was 44 percent below that of natives in 2016. Unique among all American states, Texas records criminal convictions by crime and the immigration status of the person convicted or arrested. I requested and received data on this from the Texas Department of Public Safety and then made public information requests to every state to see if they kept similar data, but none had. 

Second, Kirsanow said that Texas is the one state where illegal immigrant homicide rates are below those of natives. Even if we analyze the SCAAP data in the GAO reports in the incorrect way that Kirsanow does, there is no evidence for his claim or that the homicide rate for illegal immigrants in Texas is 15 percent below that of native-born Americans. Kirsanow was likely citing a Cato Immigration Research and Policy Brief that looked at the relative rates of homicide convictions in Texas in 2015, but he got the percentage wrong. In 2015, illegal immigrants had a homicide conviction rate that was 16 percent below that of native-born Americans according to our Brief.

There are over 300,000 illegal aliens incarcerated.

Kirsanow got this number from the 2011 GAO report mentioned above. That GAO report does state that there were 295,959 incarcerations of criminal aliens in state and local prisons over the course of 2009. Kirsanow incorrectly interpreted what that number meant and made many other errors. 

First, the GAO’s definition of criminal aliens is “[n]oncitizens who are residing in the United States legally or illegally and are convicted of a crime.” Thus, the data on criminal aliens also include legal immigrants who have not yet become citizens. On television, Kirsanow erroneously assumed that the term criminal aliens is synonymous with illegal immigrants, even though he previously acknowledged the distinction in a National Review article, in which he wrote “[a]ccording to GAO, in FY 2009 295,959 SCAAP criminal aliens, of whom approximately 227,600 are illegal aliens, were incarcerated in state jails and prisons.”  

Second, the 295,959 number is the total number of incarcerations of criminal aliens in 2009, as reported in the 2011 GAO report, not the number of individual criminal aliens incarcerated. The GAO report states this bluntly: “SCAAP data do not represent the number of unique individuals since these individuals could be incarcerated in multiple SCAAP jurisdictions during the reporting period.”

In other words, the 295,959 number includes many of the same people who have been incarcerated multiple times. If an individual criminal alien was incarcerated for 10 short sentences, released after each one, and then re-incarcerated, then that single alien would account for 10 incarcerations. But Kirsanow counted him as 10 separate individuals. In Kirsanow’s piece for National Review on this subject, he then compared the number of native-born individuals incarcerated with their total population to estimate relative incarceration rates. In other words, Kirsanow compares the flow of criminal aliens into prison relative to the stock of all aliens with the stock of natives in prison relative to the stock of all natives in the entire population. Kirsanow confused stocks and flows, and his nonsensical apples-to-oranges comparison produced a relatively higher, but incorrect, illegal immigrant incarceration rate.
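A stylized numerical example makes the error plain. All figures below are invented solely to show the arithmetic; none come from the GAO or SCAAP data.

```python
# Invented figures illustrating the stock-versus-flow error described above:
# dividing a flow of incarceration events by a population stock inflates the
# apparent incarceration rate relative to a headcount of unique individuals.

alien_population = 1_000_000              # hypothetical stock of non-citizens
unique_aliens_incarcerated = 10_000       # hypothetical headcount behind bars
avg_incarcerations_per_person = 3         # repeat short sentences within the year

incarceration_events = unique_aliens_incarcerated * avg_incarcerations_per_person

# Apples to apples: unique incarcerated individuals divided by the population.
correct_rate = unique_aliens_incarcerated / alien_population

# Apples to oranges: incarceration events divided by the population, which is
# effectively what treating the SCAAP figure as a headcount amounts to.
inflated_rate = incarceration_events / alien_population

print(f"rate from headcount: {correct_rate:.2%}")   # 1.00%
print(f"rate from events:    {inflated_rate:.2%}")  # 3.00%
```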

Third, a quick look at the American Community Survey shows just how wrong Kirsanow is. In 2009, the ACS reported that there were 162,579 non-citizens incarcerated in all federal, state, and local adult correctional facilities (S2601B, 1-year). This is slightly more than half of the 295,959 incarcerations that SCAAP reports in just state and local prisons. That makes it logically impossible for the 295,959 number to refer to the total number of criminal aliens incarcerated. The ACS counts stocks at a specific point in time; the GAO counted some flows. Kirsanow is incorrect to talk about the SCAAP figures as if they were stocks of illegal immigrants incarcerated.

They don’t count the millions of offenses and crimes committed by illegal aliens.

I wish we could count the millions of crimes committed by people that are unsolved or unreported and then study the demographics of the people who committed them, but that’s impossible. Furthermore, we would have to also have that information for all native-born Americans to make a comparison between the illegal immigrant and native-born crime rates. To go even further, I wish we could count everything that didn’t happen as it would immensely improve our world and social science. In the real world, Kirsanow’s statement does not have much relevance. 

John Lott did probably the most methodologically rigorous and comprehensive examination of this type using Arizona Department of Corrections Data.

Kirsanow approvingly cited this working paper by economist John R. Lott Jr. of the Crime Prevention Research Center, in which he purported to find that illegal immigrants in Arizona from 1985 through 2017 have a far higher prison admission rate than U.S. citizens. However, Lott made a small but fatal error that undermined his entire finding: He misidentified a variable in the dataset. Lott wrote his paper based on a dataset he obtained from the Arizona Department of Corrections (ADC) that lists all admitted prisoners in Arizona. According to Lott, the data allowed him to identify “whether they [the prisoners] are illegal or legal residents.” Yet the dataset does not allow him or anybody else to identify illegal immigrants.

The variable that Lott focused on is “CITIZEN.” That variable is broken down into seven categories. Lott erroneously assumed that the third category, called “non-US citizen and deportable,” only counted illegal immigrants. That is not true because non-US citizen and deportable immigrants are not all illegal immigrants, as confirmed by the ADC – the source of Lott’s data. A significant proportion of non-U.S. citizens who are deported every year are legal immigrants who violate the terms of their visas in one way or the other, frequently by committing crimes. According to the American Immigration Council, about 10 percent of people deported annually are Lawful Permanent Residents or green card holders—and that doesn’t include the non-immigrants on other visas who were lawfully present in the United States and then deported. 

Lott mistakenly chose a variable that combines an unknown number of legal immigrants with an unknown number of illegal immigrants and assumed that it only counted illegal immigrants. Lott correctly observed that “[l]umping together documented and undocumented immigrants (and often naturalized citizens) may mean combining very different groups of people.” Unfortunately, the variable he chose also lumped together legal immigrants and illegal immigrants. I wrote about the fatal flaw in Lott’s paper here in February. Lott and I had an exchange here. Kirsanow should have known that Lott’s paper was not methodologically sound because he misidentified the only variable that mattered for his analysis. Lott’s working paper is not the slam dunk that Kirsanow claimed it was.   

Alex is very knowledgeable and that’s why it’s puzzling that he won’t acknowledge the overwhelming amount of data that shows that illegal aliens not only commit more crimes, at a higher rate that is, than lawful residents but more serious crimes at a far higher rate than legal residents.

As I’ve shown above, Kirsanow misread, misinterpreted, and incorrectly defined numerous terms in the GAO report that was his near-exclusive source of information to make an intellectually indefensible case that illegal immigrants are more likely to be criminals than native-born Americans. What’s even more puzzling is that Kirsanow is aware of his errors after a previous exchange that he and I had on this very issue but he chose to repeat them on television regardless. 

Cato scholars have produced much original research on illegal immigrant criminality. Based on data from the state of Texas, we found that illegal immigrants had a lower criminal conviction rate than native-born Americans for most crimes in that state in 2015 (measured by the number of convictions), that the homicide conviction rate for illegal immigrants was below that of native-born Americans in 2016 (measured by the number of people in each subpopulation convicted), and that incarceration rates for illegal immigrants are below those of native-born Americans (but above those of legal immigrants). Peer-reviewed research also points in roughly the same direction.

Policy analysts, commentators, politicians, and members of the media have a duty to honestly parse the facts and debate these complex issues in good faith.

Late last week UPI news ran a report by E.J. Mundell with the headline, “Government efforts to curb opioid prescriptions might have backfired.” It cites two separate studies published online in JAMA Surgery on August 22 that examined two different restrictive opioid policies that fell victim to the Law of Unintended Consequences.

The first study, by researchers at the University of Michigan, evaluated the impact of the Drug Enforcement Administration’s 2014 rescheduling of hydrocodone (Vicodin) from Schedule III to Schedule II. Prescriptions for Schedule III narcotics may be phoned or faxed in by providers, but Schedule II narcotics require the patient to see the prescriber in person in order to obtain a prescription. The DEA’s goal was to reduce the number of Vicodin pills, popular with non-medical users, available for diversion to the black market.

The study looked at 21,955 post-surgical patients across 75 hospitals in Michigan between 2012 and 2015 and found that the number of hydrocodone pills prescribed after the 2014 schedule change increased by an average of seven 5mg tablets. The total Oral Morphine Equivalent of prescribed hydrocodone did not change significantly after the DEA made hydrocodone Schedule II. However, the refill rate decreased after the change. The study’s abstract concluded, “Changing hydrocodone from schedule III to schedule II was associated with an increase in the amount of opioids filled in the initial prescription following surgery.”
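For readers unfamiliar with the unit, a prescription’s total Oral Morphine Equivalent is just tablets × strength × a conversion factor (the commonly used CDC conversion factor for hydrocodone is 1.0). The sketch below, using hypothetical prescription sizes, shows how a larger initial fill combined with fewer refills can leave total OME roughly unchanged, consistent with what the Michigan study reports.

```python
# Back-of-the-envelope OME arithmetic. The prescription sizes are hypothetical;
# the hydrocodone-to-morphine conversion factor of 1.0 follows the commonly
# used CDC MME table, cited here only for illustration.

HYDROCODONE_MME_FACTOR = 1.0   # mg oral morphine equivalent per mg hydrocodone
TABLET_MG = 5.0                # 5 mg hydrocodone tablets, as in the study

def total_ome(initial_tablets, refill_tablets=0):
    """Total oral morphine equivalent of an initial fill plus any refill."""
    return (initial_tablets + refill_tablets) * TABLET_MG * HYDROCODONE_MME_FACTOR

# Before rescheduling: smaller initial fill, with a refill phoned in if needed.
before = total_ome(initial_tablets=30, refill_tablets=10)

# After rescheduling: roughly seven more tablets up front, refill less likely.
after = total_ome(initial_tablets=37, refill_tablets=0)

print(before, after)  # 200.0 vs 185.0 mg OME: bigger first fill, similar total
```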

As a practicing general surgeon, my initial reaction to this study was: “Tell me something I don’t know.” Prior to the 2014 schedule change, I would often start off prescribing a small amount of hydrocodone to some of my post-op patients (depending upon the procedure and the patient’s medical history) with the knowledge that I could phone in a refill for those patients who were still in need of it for their pain after the initial supply ran out. Once it was rescheduled, I changed my prescribing habits. Not wanting any of my patients to run out after hours, over a weekend, or on a holiday—when the office is closed and their only recourse would be to go to an emergency room or urgent care center to get a prescription refill—I increased the amount I prescribe (based on my best estimate of the maximum number of days any individual patient might need hydrocodone) to reduce the chances of them needing a refill. This results in some patients having leftover Vicodin pills in their medicine cabinet. On the other hand, fewer of those patients need refills.

Not surprisingly, many of my clinical peers have done the same thing. It’s not a surprise because most physicians place the interests of their patients ahead of the interests of regulators and bureaucrats. So the adjustment made in postoperative hydrocodone prescribing was basically a “no brainer.” 

Unfortunately, in the past couple of years, many states have moved to restrict the number and dosage of pills that can be prescribed postoperatively—in some states the limit is 5 days, in others as few as 3 days—so many patients now must go to the office (or emergency room or urgent care) just a few days after their operation to get that refill after all. The American Medical Association and most medical specialty associations oppose a proposal before the US Senate to impose a national 3-day limit on opioid pill prescriptions.

The second study, from researchers at Dartmouth Medical School, evaluated the impact of New Hampshire’s Prescription Drug Monitoring Program on the number of opioid pills prescribed. At this point every state has a PDMP, a program that surveils opioid prescribing and use by providers and patients. New Hampshire’s PDMP went active January 1, 2017. The goal again is to reduce the amount of pills prescribed. 

As I have written here and here, there is evidence in the peer-reviewed literature that PDMPs may indeed be intimidating doctors into reducing the number and dosage of pain pills they prescribe—but this is only serving to drive non-medical users to cheaper and more dangerous heroin (often laced with fentanyl) while making patients needlessly suffer.

However, this latest study, which looked at the number of opioids prescribed for postoperative pain to 1057 patients at the Dartmouth-Hitchcock Medical Center during the six months preceding and the six months following the activation of New Hampshire’s PDMP, came to a different conclusion. It found that the mean number of pills prescribed during the six months preceding the PDMP had decreased 22.1 percent, but that during the six months after the PDMP the rate of decrease slowed to just 3.9 percent. It concluded, “A mandatory PDMP query requirement was not significantly associated with the overall rate of opioid prescribing or the mean number of pills prescribed for patients undergoing general surgical procedures.” 

The study is limited by the small number of patients, the limitation to just one hospital, and the short length of follow up. But it does add to the growing body of evidence suggesting that PDMPs are not achieving their mission: reducing the overdose death rate while, at the same time, assuring that patients receive adequate treatment of their pain.

Alas, despite the immutable presence of the Law of Unintended Consequences, don’t expect policymakers to rethink their misguided prohibitionist approach to the opioid overdose problem any time soon.
