Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Despite $8.6 billion spent on the eradication of opium in Afghanistan over the past seventeen years, the U.S. military has failed to stem the flow of Taliban revenue from the illicit drug trade. Afghanistan produces the majority of the world’s opium, and recent U.S. military escalations have failed to alter the situation. According to a recent piece in the Wall Street Journal:

“Nine months of targeted airstrikes on opium production sites across Afghanistan have failed to put a significant dent in the illegal drug trade that provides the Taliban with hundreds of millions of dollars, according to figures provided by the U.S. military.”

This foreign war on drugs has been no more successful than its domestic counterpart. If U.S. military might cannot suppress the underground market, local police forces have no hope. Supply-side repression does not seem to work, and its costs and unintended consequences are large.

 Research assistant Erin Partin contributed to this blog post.

In 1985, Reason Foundation co-founder and then-president Robert Poole heard about a variable road pricing experiment in Hong Kong. In 1986, he learned that France and other European countries were offering private concessions to build toll roads. In 1987, he interviewed officials of Amtech, which had just invented electronic transponders that could be used for road tolling. He put these three ideas together in a pioneering 1988 paper suggesting that Los Angeles, the city with the worst congestion in America, could solve its traffic problems by adding private, variable-priced toll lanes to existing freeways.

Although Poole’s proposal has since been carried out successfully on a few freeways in southern California and elsewhere, it is nowhere near as ubiquitous as it ought to be given that thirty years have passed and congestion is worse today in dozens of urban areas than it was in Los Angeles in 1988. So Poole has written Rethinking America’s Highways, a 320-page review of his research on the subject since that time. Poole will speak about his book at a livestreamed Cato event this Friday at noon, eastern time.

Because Poole has influenced my thinking in many ways (and, to a very small degree, the reverse is true), many of the concepts in the book will be familiar to readers of Gridlock or some of my Cato policy analyses. For example, Poole describes elevated highways such as the Lee Roy Selmon Expressway in Tampa as a way private concessionaires could add capacity to existing roads. He also looks at the state of autonomous vehicles and their potential contributions to congestion reduction.

France’s Millau Viaduct, by many measures the largest bridge in the world, was built entirely with private money at no risk to French taxpayers. The stunning beauty, size, and price of the bridge are an inspiration to supporters of public-private partnerships everywhere.

Beyond these details, Poole is primarily concerned with fixing congestion and rebuilding the nation’s aging Interstate Highway System. His “New Vision for U.S. Highways,” the subject of the book’s longest chapters, is that congested roads should be tolled and new construction and reconstruction should be done by private concessionaires, not public agencies. The book’s cover shows France’s Millau Viaduct, which a private concessionaire opened in 2004 at a cost of more than $400 million. Poole compares demand-risk and availability-payment partnerships – in the former, the private partner takes the risk and earns any profits; in the latter, the public takes the risk and the private partner is guaranteed a profit – coming down on the side of the former.

This chart showing throughput on a freeway lane is based on the same data as a chart on page 256 of Rethinking America’s Highways. It suggests that, by keeping speeds from falling below 50 mph, variable-priced tolling can greatly increase throughput during rush hours.

The tolling chapter answers arguments against tolling, responses Poole has no doubt made so many times that he is tired of giving them. He mentions (but doesn’t emphasize enough, in my opinion) that variable pricing can keep traffic moving at 2,000 to 2,500 vehicles per hour per freeway lane, while throughput can slow to as few as 500 vehicles per hour in congestion. This is the most important and unanswerable argument for tolling, for – contrary to those who say that tolling will keep poor people off the roads – it means that tolling will allow more, not fewer, people to use roads during rush hours.
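The arithmetic behind that claim is worth spelling out, using the throughput figures quoted above (a back-of-the-envelope illustration, not data from the book’s chart):

\[
\frac{2{,}000\ \text{to}\ 2{,}500\ \text{vehicles/hour (priced, free-flowing)}}{500\ \text{vehicles/hour (unpriced, stop-and-go)}} \approx 4\ \text{to}\ 5,
\]

so a priced lane can carry roughly four to five times as many vehicles per hour as the same lane in heavy congestion.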

While I agree with Poole that private partners would be more efficient at building new capacity than public agencies, I don’t think this idea is as important as tolling. County toll road authorities in Texas, such as the Fort Bend Toll Road Authority, have been very efficient at building new highways that are fully financed by tolls.

Despite considerable (and uninformed) opposition to tolling, an unusual coalition of environmentalists and fiscal conservatives has persuaded the Oregon Transportation Commission to begin tolling Portland freeways. As a result, Portland may become the first city in America to toll all its freeways during rush hour, a goal that would be thwarted if conservatives insisted on private toll concessions.

Tolling can end congestion, but Poole points out that this isn’t the only problem we face: the Interstate Highway System is at the end of its 50-year expected lifespan and some method will be needed to rebuild it. He places his faith in public-private partnerships for such reconstruction.

Tolling and public-private partnerships are two different questions, but of the two only tolling (or mileage-based user fees, which use the same technology to effectively toll all roads) is essential to eliminating congestion. It is also the best alternative to what Poole argues are increasingly obsolescent gas taxes. Anyone who talks about congestion relief without including road pricing isn’t serious about solving the problem. Poole’s book should be required reading for all politicians and policymakers who deal with transportation.

The environmental impact of cryptocurrencies looms large among the many concerns voiced by skeptics. Earlier this year, Agustín Carstens, who runs the influential Bank for International Settlements, called Bitcoin “a combination of a bubble, a Ponzi scheme and an environmental disaster.”

Carstens’ first two indictments have been challenged. While the true market potential of Bitcoin, Ethereum, and other such decentralized networks remains uncertain, it is by now clear to most people that, contrary to his assertion, they are more than mere instruments for short-term speculation and the fleecing of unwitting buyers.

That Bitcoin damages the environment without countervailing benefits is, on the other hand, an allegation still widely believed, even by many cryptocurrency fans. Sustaining that allegation is the indisputable fact that the electricity now consumed by the Bitcoin network, at 73 TWh per year at last count, rivals the amount consumed by countries like Austria and the Philippines.

Computing power is central to the success of Bitcoin

Bitcoin’s chief innovation is enabling payments without recourse to an intermediary. Before Bitcoin, any attempt to devise an electronic payments network without a middleman suffered from a double-spend problem: There was no easy way for peers to verify that funds promised to them had not also been committed in other transactions. Thus, a central authority was inescapable.

“Satoshi Nakamoto”’s 2008 white paper proposing “a peer-to-peer electronic cash system” changed that. Nakamoto suggested using cryptography and a public ledger to resolve the double-spend problem. Yet, in order to ensure that only truthful transactions were added to the ledger, this decentralized payments system needed to encourage virtuous behavior and make fraud costly.

Bitcoin achieves this by using a proof-of-work consensus algorithm to reach agreement among users about which transactions should go on the ledger. Proof-of-work means that users expend computing power as they validate transactions. The rewards from validation are newly minted bitcoins, as well as transaction fees. Nakamoto writes:

Once the CPU effort has been expended, to make it satisfy the proof-of-work, the block cannot be changed without redoing the work. As later blocks are chained after it, the work to change the block would include redoing all the blocks after it.

[…] Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it.

Because consensus is required for transactions to go on the ledger, defrauding the system – forcing one user’s false transactions on the public ledger, against other users’ disagreement – would require vast expenditures of computing power. Thus, Bitcoin renders fraud uneconomical.
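To make the mechanics concrete, here is a minimal, illustrative sketch of the hash-lottery idea in Python. It is not Bitcoin’s actual implementation — real mining double-hashes an 80-byte block header with SHA-256 and the network retargets the difficulty every 2,016 blocks — but it shows why validation consumes computing power: finding an acceptable nonce takes brute-force trial and error, while checking someone else’s nonce takes a single hash.

    import hashlib

    def mine(block_data: str, difficulty_bits: int) -> int:
        """Toy proof-of-work: find a nonce whose SHA-256 digest, read as an
        integer, falls below a target with roughly `difficulty_bits` leading zero bits."""
        target = 2 ** (256 - difficulty_bits)   # smaller target = more expected work
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
            if int(digest, 16) < target:        # proof found
                return nonce
            nonce += 1

    def verify(block_data: str, nonce: int, difficulty_bits: int) -> bool:
        """Verification costs one hash, however long mining took."""
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        return int(digest, 16) < 2 ** (256 - difficulty_bits)

    nonce = mine("prev-block-hash|tx1;tx2;tx3", difficulty_bits=16)
    print("valid nonce:", nonce, verify("prev-block-hash|tx1;tx2;tx3", nonce, 16))

Each additional difficulty bit doubles the expected number of hashes a miner must try, which is why the network’s electricity bill scales with the difficulty it sets.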

Electricity powers governance on Bitcoin

Bitcoin and other cryptocurrencies replace payments intermediation with an open network of independent users, called ‘miners’, who compete to validate transactions and whose majority agreement is required for any transaction to be approved.

Intermediation is not costless. Payment networks typically have large corporate structures and expend large amounts of resources to facilitate transactions. Mastercard, which as of 2016 accounted for 23 percent of the credit card and 30 percent of the debit card market in the U.S., employs more than 13,000 staff worldwide. Its annual operating expenses reached $5.4 billion in fiscal year 2017. Its larger competitor Visa had running costs of $6.2 billion.

Equally, doing away with intermediaries such as Mastercard has costs. Bitcoin miners require hardware and electricity to fulfill their role on the network. A recent study puts the share of electricity costs in all mining costs at 60 to 70 percent.

Electricity prices vary widely across countries, and miners will tend to locate where electricity is comparatively cheap, since the bitcoin price is the same all over the world. One kilowatt-hour of electricity in China, reportedly the location of 80 percent of Bitcoin mining capacity, costs 8.6 U.S. cents, 50 percent below the average price in America. Assuming an average price of 10 cents per kWh, the Bitcoin network would consume $7.3 billion of electricity per year, based on current mining intensity. Given that electricity makes up 60 to 70 percent of mining costs, this yields total Bitcoin annual running costs of $10 to $12 billion.
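A quick back-of-the-envelope check of those figures, using only the assumptions stated above (73 TWh per year, 10 cents per kWh, electricity at 60 to 70 percent of total mining costs):

    # All inputs are the article's stated assumptions, not independent estimates.
    annual_energy_kwh = 73e9                  # 73 TWh per year, expressed in kWh
    price_per_kwh = 0.10                      # assumed average electricity price, USD/kWh
    electricity_cost = annual_energy_kwh * price_per_kwh   # = $7.3 billion

    # If electricity is 60-70 percent of total mining costs, total costs are:
    total_low = electricity_cost / 0.70       # about $10.4 billion
    total_high = electricity_cost / 0.60      # about $12.2 billion
    print(f"${electricity_cost/1e9:.1f}B electricity; ${total_low/1e9:.1f}B-${total_high/1e9:.1f}B total")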

The value of Bitcoin’s electricity use

Bitcoin’s total operating costs therefore do not differ much from those of intermediated payment networks such as Mastercard and Visa. Yet these card networks facilitate many more transactions than Bitcoin: Digiconomist reports that Bitcoin uses 550,000 times as much electricity per transaction as Visa.

However, the number of transactions is a poor standard for judging the value exchanged on competing networks. Mastercard and Visa handle large numbers of small-dollar exchanges, whereas Bitcoin transactions average $16,000. The slow speed of the Bitcoin network and the large fluctuations in average transaction fees make low-value exchanges unattractive. Moreover, unlike card networks, cryptocurrencies are still not generally accepted and are therefore used more as a store of value than as a medium of exchange.

With that in mind, if we compare Bitcoin and the card networks by the volume of transactions processed, a different picture emerges. The volume of Bitcoin transactions over the 24 hours to August 27 was $3.6 billion, which is not an outlier. That yields annual transaction volume of $1.33 trillion. This is below Mastercard’s approximate $6 trillion and Visa’s $7.8 trillion in payments volume over 2017. But it is not orders of magnitude below.
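Annualizing that daily figure is straightforward (the daily volume fluctuates, so treat this as an order-of-magnitude check rather than a precise number):

    # The daily figure varies day to day; values below are those cited in the text.
    daily_volume = 3.6e9                  # USD transacted in the 24 hours to August 27
    annual_volume = daily_volume * 365    # roughly $1.3 trillion per year
    visa_2017 = 7.8e12                    # Visa's approximate 2017 payments volume, USD
    mastercard_2017 = 6.0e12              # Mastercard's approximate 2017 payments volume, USD
    print(annual_volume / visa_2017, annual_volume / mastercard_2017)   # ~0.17 and ~0.22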

In fact, as ARK Invest reported over the weekend, Bitcoin has already surpassed the smaller card network Discover, and online payments pioneer Paypal, in online transactions volume. Overtaking the most successful payment network of the internet era is quite a milestone.

Source: ARK Invest newsletter, Aug. 26

Long-term prospects for the Bitcoin network

As mentioned above, comparisons between Bitcoin and intermediated payment networks must be conducted with caution, because most transactions on Mastercard, Paypal and Visa are for the exchange of goods and services, whereas much of the dollar value of Bitcoin transactions has to do with speculative investment in the cryptocurrency and the mining of new bitcoins. Only a fraction of Bitcoin payments involve goods and services.

However, the fact that people are eager to get hold of bitcoins today shows that some firmly believe Bitcoin has the potential to become more widely demanded.

The prospects for Bitcoin as an investment, on the other hand, are perhaps more questionable than its proponents assume. After all, the value of a medium of exchange is governed by the equation of exchange, MV = PQ, where M is the money supply, V the velocity at which money units change hands, P the price level, and Q the real volume of transactions.

Bitcoin bulls posit, quite plausibly, that Q will only grow in coming years. But unless Bitcoin becomes a store-of-value cryptocurrency that is not frequently exchanged, V will also grow as Bitcoin users transact more on the network. This will push up the price level P and depress the value of individual bitcoins. Thus, Bitcoin’s very success as a medium of exchange may doom it as an investment.
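Rearranging the equation of exchange makes the mechanism explicit (this simply restates the argument above; Bitcoin’s protocol caps M):

\[
MV = PQ \;\Rightarrow\; P = \frac{MV}{Q}, \qquad \text{purchasing power per coin} \;\propto\; \frac{1}{P} = \frac{Q}{MV}.
\]

With M fixed, if velocity V grows faster than real transaction volume Q, the bitcoin-denominated price level P rises and the value of each coin falls.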

But that is orthogonal to the policy discussion as to whether Bitcoin’s admittedly large power requirements are a matter of concern. Whereas competing payment systems rely on many inputs, from physical buildings to a skilled workforce to reputational and financial capital, Bitcoin’s primary input is electricity. What Bitcoin illustrates is that achieving successful governance without an intermediary is costly – costly enough that Bitcoin may struggle to outcompete payment intermediaries.

On the other hand, there are efforts afoot to introduce innovations into the Bitcoin network to increase its energy efficiency. The Lightning Network project, which seeks to enable transactions to happen outside the Bitcoin blockchain over the course of (for example) a trading day, and to record only the starting and closing balance, is such an initiative. Other, more controversial ideas are to rapidly increase the maximum size of a transaction block, which would speed up transaction processing but might not make much of a dent in power usage. Others have addressed the issue more directly, by building renewable generation capacity specifically aimed at cryptocurrency mining.

Is Bitcoin’s electricity use socially wasteful?

Behind claims like Carstens’ that Bitcoin is “an environmental disaster” lies the veiled accusation that the cryptocurrency’s electricity use is somehow less legitimate, or socially less valuable, than electricity use by schools, hospitals, households and offices. Is there any truth to this claim?

Economists have known since at least Pigou that the only way to determine wastefulness in resource use is by examining whether an activity has unpriced externalities which might lead agents to over- or underuse the resource. In those instances, these social costs must be incorporated into the price of the resource to motivate efficient production.
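In the standard Pigouvian formulation (a textbook restatement, not a description of any particular jurisdiction’s actual tariff design), the efficient price of electricity covers both its private marginal cost and the marginal external damage of generating it:

\[
p = MC_{\text{private}} + MED,
\]

where MED captures unpriced harms such as carbon emissions. When p already reflects MED, no particular use of electricity is socially wasteful at the margin, whatever its purpose.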

What this means for Bitcoin is that the cryptocurrency itself cannot be “socially wasteful.” The environmental impact of electricity use does not depend on the purpose of that use: whether electric power is consumed for the mining of cryptocurrency or the production of cars has no bearing on the environmental effects. Therefore, the impact of Bitcoin depends on two factors over which the network has no control: the way in which power is generated and how electricity is priced.

Both vary widely across jurisdictions. Iceland, which owing to its comparatively low power costs and still lower temperatures is a favorite location for Bitcoin miners, generates nearly all of its electricity from renewable hydroelectric and geothermal sources, which emit far less carbon than coal- or gas-fired plants. Iceland participates in the European Union’s emissions trading system, which despite its imperfect design does a good job of internalizing the social cost of power generation.

Canada, like Iceland a cold jurisdiction, uses hydroelectric power to generate 59 percent of its electricity. In the crypto-favorite province of Quebec, 95 percent of power is hydroelectric, and prices are particularly low. While the overall environmental impact of hydropower is contested, there is agreement that its carbon footprint is a fraction of those of gas- and coal-fired plants. Canada has also recently implemented a nationwide carbon-pricing scheme in a bid to price emissions.

China, the largest jurisdiction for mining, offers a less encouraging picture, as it still generates close to half its electricity from coal. However, this is drastically down from 72 percent in 2015 and has dropped even in absolute terms. The People’s Republic’s attempts to reduce its carbon footprint per unit of GDP have relied more on command-and-control shifts from coal to other sources than on market forces.

Whichever way one looks at it, however, the environmental impact of Bitcoin and other electricity-intensive cryptocurrencies is a function not of their software architecture, but of the energy policies in the countries where miners operate.

Much ado about nothing

We can only conclude that reports of cryptocurrencies wreaking environmental havoc have been greatly exaggerated. An examination of transaction volumes shows that Bitcoin’s power use is not out of line with that of intermediated payment systems. Moreover, it will be in the interest of Bitcoin miners to reduce the per-transaction electricity cost of mining, as otherwise the network will struggle to grow and compete with incumbents. Finally, there is no evidence that cryptocurrencies have environmental externalities beyond those that can be ascribed to any electricity user wherever electricity is inefficiently priced. But public policy, not cryptocurrency innovation, is at fault there.

[Cross-posted from Alt-M.org]

As a practicing physician I have long been frustrated with the Electronic Health Record (EHR) system the federal government required health care practitioners to adopt by 2014 or face economic sanctions. This manifestation of central planning compelled many doctors to scrap electronic record systems already in place because the planners determined they were not used “meaningfully.” They were forced to buy a government-approved electronic health system and conform their decision-making and practice techniques to algorithms the central planners deem “meaningful.”  Other professions and businesses make use of technology to enhance productivity and quality. This happens organically. Electronic programs are designed to fit around the unique needs and goals of the particular enterprise. But in this instance, it works the other way around: health care practitioners need to conform to the needs and goals of the EHR. This disrupts the thinking process, slows productivity, interrupts the patient-doctor relationship, and increases the risk of error. As Twila Brase, RN, PHN ably details in “Big Brother in the Exam Room,” things go downhill from there.

With painstaking, almost overwhelming detail that makes the reader feel the enormous complexity of the administrative state, Ms. Brase, who is president and co-founder of Citizens’ Council for Health Freedom (CCHF), traces the origins and motives that led to Congress passing the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. The goal from the outset was for the health care regulatory bureaucracy to collect the private health data of the entire population and use it to create a one-size-fits-all standardization of the way medicine is practiced. This standardization is based upon population models, not individual patients. It uses the EHR design to nudge practitioners into surrendering their judgment to the algorithms and guidelines adopted by the regulators. Along the way, the meaningfully used EHR makes practitioners spend the bulk of their time entering data into forms and clicking boxes, providing the regulators with the data needed to generate further standardization.

Brase provides wide-ranging documentation of the way this “meaningful use” of the EHR has led to medical errors and the replication of false information in patients’ health records. She shows how the planners intend to morph the Electronic Health Record into a Comprehensive Health Record (CHR), through the continual addition of new data categories, delving into the details of lifestyle choices that may arguably relate indirectly to health: from sexual proclivities, to recreational behaviors, to gun ownership, to dietary choices. In effect, a meaningfully used Electronic Health Record is nothing more than a government health surveillance system.  As the old saying goes, “He who pays the piper calls the tune.” If the third party—especially a third party with the monopoly police power of the state—is paying for health care it may demand adherence to lifestyle choices that keep costs down.

All of this data collection and use is made possible by the Orwellian-named Health Insurance Portability and Accountability Act (HIPAA) of 1996. Most patients think of HIPAA as a guarantee that their health records will remain private and confidential. They think all those “HIPAA Privacy” forms they sign at their doctor’s office are there to ensure confidentiality. But, as Brase points out very clearly, HIPAA gives numerous exemptions to confidentiality requirements for the purposes of collecting data and enforcing laws. As Brase puts it,

 It contains the word privacy, leaving most to believe it is what it says, rather than reading it to see what it really is. A more honest title would be “Notice of Federally Authorized Disclosures for Which Patient Consent Is Not Required.”

It should frighten any reader to learn just how exposed their personal medical information is to regulators in and out of government. Some of the data collected without patients’ knowledge is generated by what Brase calls “forced hospital experiments” in health care delivery and payment models, also conducted without patients’ knowledge. Brase documents how patients remain in the dark about being included in payment-model experiments, even about whether they are being cared for by an Accountable Care Organization (ACO).

Again quoting Brase, 

Congress’s insistence that physicians install government health surveillance systems in the exam room and use them for the care of patients, despite being untested and unproven—and an unfunded mandate—is disturbing at so many levels—from privacy to professional ethics to the patient-doctor relationship. 

As the book points out, more and more private practitioners are opting out of this surveillance system. Some are opting out of the third-party payment system (including Medicare and Medicaid) and moving to a “Direct Care” cash-pay model, which exempts them from HIPAA and the government’s EHR mandate. Some are retiring early and/or leaving medical practice altogether. Many, if not most, are selling their practices to hospitals or large corporate clinics, transferring the risk of severe penalties for non-compliance to those larger entities.

Health information technology can and should be a good thing for patients and doctors alike. But when the government, rather than individual patients and doctors, decides what kind of technology that will be and how it will be used, health information technology can become a dangerous threat to liberty, autonomy, and health.

“Big Brother In The Exam Room” is the first book to catalog in meticulous detail the dangerous ways in which health information technology is being weaponized against us all.  Everyone should read it. 

It has been a whirlwind week of negotiations on the North American Free Trade Agreement (NAFTA), ending on Friday in apparent deadlock. Canada was not able to reach a deal with the United States on some of the remaining contentious issues, but that did not stop President Trump from submitting a notice of intent to Congress to sign a deal with Mexico that was agreed to earlier this week. This action allows the new trade agreement to be signed by the end of November, before Mexican President Enrique Pena Nieto leaves office. While a high degree of uncertainty remains, it is premature to ring the alarm for the end of NAFTA as we know it.

Why? First, there is still some negotiating latitude built into the Trade Promotion Authority (TPA) legislation, which outlines the process for how the negotiations unfold. The full text of the agreement has to be made public thirty days after the notice of intent to sign is submitted to Congress. This means that the parties have until the end of September to finalize the contents of the agreement. What we have now is just an agreement in principle, which can be thought of as a draft of the agreement, with a lot of little details still needing to be filled in. Therefore, it is not surprising that the notice submitted to Congress today left open the possibility of Canada joining the agreement “if it is willing” at a later date. Canadian Foreign Minister Chrystia Freeland will resume talks with U.S. Trade Representative Robert Lighthizer next Wednesday, and this should be seen as a sign that the negotiations are far from over.

Relatedly, TPA legislation does not provide a clear answer as to whether the President can split NAFTA into two bilateral deals. The original letter of intent to re-open NAFTA, which was submitted by Amb. Lighthizer in May 2017, notified Congress that the President intended to “initiate negotiations with Canada and Mexico regarding modernization of the North American Free Trade Agreement (NAFTA).” This can be read as signaling that not only were the negotiations supposed to be with both Canada and Mexico, but also that Congress only agreed to this specific arrangement.  In addition, it could be argued that TPA would require President Trump to “restart the clock” on negotiations with a new notice of intent to negotiate with Mexico alone. The bottom line, however, is that it is entirely up to Congress to decide whether or not it will allow for a vote on a bilateral deal with Mexico only, and so far, it appears that Congress is opposed to this. 

In fact, Congress has been fairly vocal about the fact that a NAFTA without Canada simply does not make sense. Canada and Mexico are the top destinations for U.S. exports and imports, with total trade reaching over $1 trillion annually. Furthermore, we don’t just trade things with each other in North America, we make things together. Taking Canada out of NAFTA is analogous to putting a wall in the middle of a factory floor. It has been estimated that every dollar of imports from Mexico includes forty cents of U.S. value added, and for Canada that figure is twenty-five cents for every dollar of imports—these are U.S. inputs in products that come back to the United States.

While President Trump may claim that he’s playing hardball with Canada by presenting an offer they cannot reasonably accept, we should approach such negotiating bluster with caution. In fact, the reality is that there is still plenty of time to negotiate, and Canada seems willing to come back to the table next week. At a press conference at the Canadian Embassy in Washington D.C. after negotiations wrapped up for the week, Minister Freeland remarked that Canada wants a good deal, and not just any deal, adding that a win-win-win was still possible. Negotiations are sure to continue amidst the uncertainty, and it will be a challenging effort to parse the signal from the noise. However, we should remain optimistic that a trilateral deal is within reach and take Friday’s news as just another step in that direction.

A Massachusetts statute prohibits ownership of “assault weapons,” the statutory definition of which includes the most popular semi-automatic rifles in the country, as well as “copies or duplicates” of any such weapons. As for what that means, your guess is as good as ours. A group of plaintiffs, including two firearm dealers and the Gun Owners’ Action League, challenged the law as a violation of the Second Amendment. Unfortunately, federal district court judge William Young upheld the ban.

Judge Young followed the lead of the Fourth Circuit’s decision in Kolbe v. Hogan (in which Cato filed a brief supporting a petition to the Supreme Court), which misconstrued a shred of the landmark 2008 District of Columbia v. Heller case to hold that the test for whether a class of weapons could be banned was whether it was “like an M-16,” contravening the core of Heller—that all weapons in common civilian use are constitutionally protected. What’s worse is that Judge Young seemed to go a step further, rejecting the argument that an “M-16” is a machine gun, unlike the weapons banned by Massachusetts, and deciding that semi-automatics are “almost identical to the M16, except for the mode of firing.” (The mode of firing is, of course, the principal distinction between automatic and semi-automatic firearms.)

The plaintiffs are appealing to the U.S. Court of Appeals for the First Circuit. Cato, joined by several organizations interested in the protection of our civil liberties and a group of professors who teach the Second Amendment, has filed a brief supporting the plaintiffs. We point out that the Massachusetts law classifies the common semi-automatic firearms used by police officers as “dangerous and unusual” weapons of war, alienating officers from their communities and undermining policing by consent.

Where for generations Americans needed to look no further than the belt of their local deputies for guidance in selecting a defensive firearm, Massachusetts’ restrictions prohibit civilians from owning these very same arms. The firearms selected by experts for reliability and overall utility as defensive weapons would be unavailable for the lawful purpose of self-defense. According to Massachusetts, these law enforcement tools aren’t defensive, but instead implements of war designed to inflict mass carnage.

Where tensions between police and the policed are a sensitive issue, Massachusetts sets up a framework in which the people can be fired upon by police with what the state fancies as an instrument of war, a suggestion that only serves to drive a wedge between police and citizenry.

Further, the district court incorrectly framed the question as whether the banned weapons were actually used in defensive shootings, instead of following Supreme Court precedent and asking whether the arms were possessed for lawful purposes (as they unquestionably were). This skewing of legal frameworks is especially troublesome where the Supreme Court has remained silent on the scope of the right to keep and bear arms for the last decade, leading to a fractured and unpredictable state of the law.

Today, the majority of firearms sold in the United States for self-defense are illegal in Massachusetts. The district court erred in upholding this abridgment of Bay State residents’ rights. The Massachusetts law is unconstitutional on its face and the reasoning upholding it lacks legal or historical foundation.

Last weekend the Federal Reserve Bank of Kansas City hosted its annual symposium in Jackson Hole. Despite being the Fed’s largest annual event, the symposium has been “fairly boring” for years, in terms of what can be learned about the future of actual policy. This year’s program, Changing Market Structures and Implications for Monetary Policy, was firmly in that tradition—making Jerome Powell’s speech, his first there as Fed Chair, the main event. In it, he covered familiar ground, suggesting that the changes he has begun as Chair are likely to continue.

Powell constructed his remarks around a nautical metaphor of “shifting stars.” In macroeconomic equations, a variable carries a star superscript (*) to indicate that it is a fundamental structural feature of the economy. In Powell’s words, these starred values in conventional economic models are the “normal,” “natural,” or “desired” values (e.g., u* for the natural rate of unemployment, r* for the neutral rate of interest, and π* for the optimal inflation rate). In these models the actual data are supposed to fluctuate around these stars. However, the models require estimates for many star values (the exception being desired inflation, which the Fed has chosen to be a 2% annual rate) because they cannot be directly observed and therefore must be inferred.

These models then use the gaps between actual values and the starred values to guide—or navigate, in Powell’s metaphor—the path of monetary policy. The most famous example is, of course, the Taylor Rule, which calls for interest rate adjustments depending on how far the actual inflation rate is from desired inflation and how far real GDP is from its estimated potential. Powell’s thesis is that as these fundamental values change, and particularly as the estimates become more uncertain—as the stars shift, so to speak—using them as guides to monetary policy becomes more difficult and less desirable.
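For reference, the original Taylor (1993) rule is the canonical example of such a star-based guide (the 0.5 coefficients are Taylor’s; the Fed has never followed the formula mechanically):

\[
i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,(y_t - y_t^*),
\]

where i_t is the policy rate, π_t actual inflation, π* the inflation target, and (y_t − y_t^*) the output gap. Both r^* and y_t^* are estimated “stars,” which is exactly where the uncertainty Powell describes enters.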

His thesis echoes a point he made during his second press conference as Fed Chair, when he said policymakers “can’t be too attached to these unobservable variables.” It also underscores Powell’s expressed desire to move the Fed in new directions: less wedded to formal models, open to a broader range of economic views, and potentially more receptive to monetary policy rules. To be clear, while Powell has outlined these new directions, it remains to be seen how and whether such changes will actually be implemented.

A specific example of a new direction—and to my mind the most important comment in the Jackson Hole speech—was Powell’s suggestion that the Fed look beyond inflation in order to detect troubling signs in the economy. A preoccupation with inflation is a serious problem at the Fed, and one that had disastrous consequences in 2008. Indeed, Powell noted that the “destabilizing excesses,” (a term that he should have defined) in advance of the last two recessions showed up in financial market data rather than inflation metrics.

While Powell is more open to monetary policy rules than his predecessors, he’s yet to formally endorse them as anything other than helpful guides in the policymaking process. At Jackson Hole he remarked, “[o]ne general finding is that no single, simple approach to monetary policy is likely to be appropriate across a broad range of plausible scenarios.” This was seen as a rejection of rule-based monetary policy by Mark Spindel, noted Fed watcher and co-author of a political history of the Fed. However, given the shifting stars context of the speech, Powell’s comment should be interpreted as saying that when the uncertainty surrounding the stars is increasing, the usefulness of the policy rules that rely on those stars as inputs is decreasing. In other words, Powell is questioning the use of a mechanical rule, not monetary policy rules more generally.

Such an interpretation is very much in keeping with past statements made by Powell. For example, in 2015, as a Fed Governor, he said he was not in favor of a policy rule that was a simple equation for the Fed to follow in a mechanical fashion. Two years later, Powell said that traditional rules were backward looking, but that monetary policy needs to be forward looking and not overly reliant on past data. Upon becoming Fed Chair early this year, Powell made it a point to tell Congress he found monetary policy rules helpful—a sentiment he reiterated when testifying on the Hill last month.

The good news is that there is a monetary policy rule that is forward looking, not concerned with estimating the “stars,” and robust against an inflation fixation. I am referring to a nominal GDP level target, of course; a monetary policy rule that has been gaining advocates.

As in years past, there was not a lot of discussion about the future of actual monetary policy at the Jackson Hole symposium. But if Powell really is moving the Federal Reserve toward adopting a rule, he is also beginning to outline a framework that should make a nominal GDP rule the first choice.

[Cross-posted from Alt-M.org]

It would have been natural to assume that partisan gerrymandering would not return as an issue to the Supreme Court until next year at the earliest, the election calendar for this year being too far advanced. But yesterday a federal judicial panel ruled that North Carolina’s U.S. House lines were unconstitutionally biased toward the interests of the Republican Party and suggested that it might impose new lines for November’s vote, even though there would be no time in which to hold a primary for the revised districts. Conducting an election without a primary might seem like a radical remedy, but the court pointed to other offices for which the state of North Carolina provides for election without a preceding primary stage.

If the court takes such a step, it would seem inevitable that defenders of the map will ask for a stay of the ruling from the U.S. Supreme Court. In June, as we know, the Court declined to reach the big constitutional issues on partisan gerrymandering, instead finding ways to send the two cases before it (Gill v. Whitford from Wisconsin and Benisek v. Lamone from Maryland) back to lower courts for more processing. 

In my forthcoming article on Gill and Benisek in the Cato Supreme Court Review, I suggest that with the retirement of Justice Anthony Kennedy, who’d been the swing vote on the issue, litigators from liberal good-government groups might find it prudent to refrain for a while from steering the question back up to the high court, instead biding their time in hopes of new appointments. After all, Kennedy’s replacement, given current political winds, is likely to side with the conservative bloc. But a contrasting and far more daring tactic would be to take advantage of the vacancy to make a move in lower courts now. To quote Rick Hasen’s new analysis at Election Law Blog, “given the current 4-4 split on the Supreme Court, any emergency action could well fail, leaving the lower court opinion in place.” And Hasen spells out the political implications: “if the lower court orders new districts for 2018, and the Supreme Court deadlocks 4-4 on an emergency request to overturn that order, we could have new districts for 2018 only, and that could help Democrats retake control of the U.S. House.”

Those are very big “ifs,” however. As Hasen concedes, “We know that the Supreme Court has not liked interim remedies in redistricting and election cases close to the election, and it has often rolled back such changes.” Moreover, Justices Breyer and Kagan in particular have lately shown considerable willingness to join with conservatives where necessary to find narrow grounds for decision that keep the Court’s steps small and incremental, so as not to risk landmark defeats at the hands of a mobilized 5-4 conservative court. It would not be surprising if one or more liberal Justices join a stay of a drastic order in the North Carolina case rather than set up a 2019 confrontation in such a way as to ensure a maximally ruffled conservative wing.

Some of these issues might come up at Cato’s 17th annual Constitution Day Sept. 17 – mark your calendar now! – where I’ll be discussing the gerrymandering cases on the mid-afternoon panel.

In the first of this series of posts, I explained that the mere presence of fractional-reserve banks itself has little bearing on an economy’s rate of money growth, which mainly depends on the growth rate of its stock of basic (commodity or fiat) money. The one exception to this rule, I said, consists of episodes in which growth in an economy’s money stock, defined broadly to include the public’s holdings of readily-redeemable bank IOUs as well as its holdings of basic money, is due in whole or in part to a decline in bank reserve ratios.

In a second post, I pointed out that, while falling bank reserve ratios might in theory be to blame for business booms, a look at some of the more notorious booms shows that they did not in fact coincide with any substantial decline in bank reserve ratios.

In this third and final post, I complete my critique of the “Fractional Reserves lead to Austrian Business Cycles” (FR=ABC) thesis, by showing that, when fractional-reserve banking system reserve ratios do decline, the decline doesn’t necessarily result in a malinvestment boom.

Causes of Changed Bank Reserve Ratios

That historic booms haven’t typically been fueled by falling bank reserve ratios, meaning ratios of commercial bank reserves to commercial bank demand deposits and notes, doesn’t mean that those ratios never decline. In fact they may decline for several reasons. But when they do change, commercial bank reserve ratios usually change gradually rather than rapidly. Central banks, in contrast, and fiat-money-issuing central banks especially, can and sometimes do expand their balance sheets quite rapidly, even dramatically. It’s for this reason that monetary booms are more likely to be fueled by central bank credit operations than by commercial banks’ decisions to “skimp” more than usual on reserves.

There are, however, some exceptions to the rule that reserve ratios tend to change only gradually. One of these stems from government regulations, changes in which can lead to reserve ratio changes that are both more substantial and more sudden. Thus in the U.S. during the 1990s changes to minimum bank reserve requirements and the manner of their enforcement led to a considerable decline in actual bank reserve ratios. In contrast, the Federal Reserve’s decision to begin paying interest on bank reserves starting in October 2008, followed by its various rounds of Quantitative Easing, caused bank reserve ratios to increase dramatically.

The other exception concerns cases in which fractional reserve banking is just developing. Obviously as that happens a switch from 100-percent reserves, or its equivalent, to some considerably lower fraction, might take place over a relatively short time span. In England during the last half of the 17th century, for example, the rise first of the goldsmith banks and then of the Bank of England led to a considerable reduction in the demand for monetary gold, its place being taken by a combination of paper notes and readily redeemable deposits.

Yet even that revolutionary change involved a less rapid increase in the role of fiduciary media, and less significant cyclical implications, than one might first suppose, for several reasons. First, only a relatively small number of persons dealt with banks at first: for the vast majority of people, “money” still meant nothing other than copper and silver coins, plus (for the relatively well-heeled) the occasional gold guinea. Second, bank reserve ratios remained fairly high at first — the best estimates put them at around 30 percent or so — declining only gradually from that relatively high level. Finally, because the change was as yet limited to England and one or two other economies, instead of resulting in any substantial change in England’s money stock, level of spending, or price level, it led to a largely contemporaneous outflow of now-surplus gold to the rest of the world. By allowing paper to stand in for specie, in other words, England was able to export that much more precious metal. The same thing occurred in Scotland over the course of the next century, only to a considerably greater degree thanks to the greater freedom enjoyed by Scotland’s banks. It was that development that caused Adam Smith to wax eloquent on the Scottish banking system’s contribution to Scottish economic growth.

Eventually, however, any fractional-reserve banking system tends to settle into a relatively “mature” state, after which, barring changes to government regulations, bank reserve ratios are likely to decline only gradually, if they decline at all, in response to numerous factors including improvements in settlement arrangements, economies of scale, and changes in the liquidity or marketability of banks’ non-reserve assets. For this reason it’s perfectly absurd to treat the relatively rapid expansion of fiduciary media in a fractional-reserve banking system that’s just taking root as illustrating tendencies present within established fractional-reserve banking systems.

Yet that’s just what some proponents of 100-percent banking appear to do. For example, in a relatively recent blog post Robert Murphy serves up the following “standard story of fractional reserve banking”:

Starting originally from a position of 100% reserve banking on demand deposits, the commercial banks look at all of their customers’ deposits of gold in their vaults, and take 80% of them, and lend them out into the community. This pushes down interest rates. But the original rich depositors don’t alter their behavior. Somebody who had planned on spending 8 of his 10 gold coins still does that. So aggregate consumption in the community doesn’t drop. Therefore, to the extent that the sudden drop in interest rates induces new investment projects that wouldn’t have occurred otherwise, there is an unsustainable boom that must eventually end in a bust.

Let pass Murphy’s unfounded — and by now repeatedly refuted — suggestion that fractional reserve banking started out with bankers lending customers’ deposits without the customers knowing it. And forget as well, for the moment, that any banker who funds loans using deposits that the depositors themselves intend to spend immediately will go bust in short order. The awkward fact remains that, once a fractional-reserve banking system is established, it cannot go on being established again and again, but instead settles down to a relatively stable reserve ratio. So instead of explaining how fractional reserve banking can give rise to recurring business cycles, the story Murphy offers accounts for only a single, never-to-be-repeated fractional-reserve-based cyclical event.

Desirable and Undesirable Reserve Ratio Changes

Finally, a declining banking system reserve ratio doesn’t necessarily imply excessive money creation, lending, or bank maturity mismatching. That’s because, notwithstanding what Murphy and others claim, competing commercial banks generally can’t create money, or loans, out of thin air. Instead, their capacity to lend, like that of other intermediaries, depends crucially on their success at getting members of the public to hold on to their IOUs. The more IOUs bankers’ customers are willing to hold on to, and the fewer they choose to cash in, the more the bankers can afford to lend. If, on the other hand, instead of holding onto a competing bank’s IOUs, the bank’s customers all decide to spend them at once, the bank will fail in short order, and will do so even if its ordinary customers never stage a run on it. All of this goes for the readily redeemable bank IOUs that make up the stock of bank-supplied money no less than for IOUs of other sorts. In other words, contrary to what Robert Murphy suggests in his passage quoted above, it matters a great deal to any banker whether or not persons who have exchanged basic money for his banks’ redeemable claims plan to go on spending, thereby drawing on those claims, or not.

Furthermore, as I show in part II of my book on free banking, in a free or relatively free banking system, meaning one in which there are no legal reserve requirements and banks are free to issue their own circulating currency, bank reserve ratios will tend to change mainly in response to changes in the public’s demand to hold on to bank-money balances. When people choose to increase their holdings of (that is, to put off spending) bank deposits or notes or both, the banks can profitably “stretch” their reserves further, making them support a correspondingly higher quantity of bank money. If, on the other hand, people choose to reduce their holdings of bank money by trying to spend them more aggressively, the banks will be compelled to restrict their lending and raise their reserve ratios. The stock of bank-created money will, in other words, tend to adjust so as to offset opposite changes in money’s velocity, thereby stabilizing the product of the two.
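Stated compactly, the offsetting mechanism just described amounts to keeping total spending constant (a first-order restatement of the argument above, not an additional claim):

\[
MV = \text{constant} \;\Rightarrow\; \frac{\Delta M}{M} \approx -\frac{\Delta V}{V},
\]

so when the public’s demand to hold bank money rises (velocity falls), banks can profitably expand the money stock by a roughly offsetting proportion, and when that demand subsides they must contract it.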

This last result, far from implying a means by which fractional-reserve banks might fuel business cycles, suggests on the contrary that the equilibrium reserve ratio changes in a free banking system can actually help to avoid such cycles. For according to Friedrich Hayek’s writings of the 1930s, in which he develops his theory of the business cycle most fully, avoiding such cycles is a matter of maintaining, not a constant money stock (M), but a constant “total money stream” (MV).

Voluntary and Involuntary Saving

Hayek’s view is, of course, distinct from Murray Rothbard’s, and also from that of many other Austrian critics of fractional reserve banking. But it is also more intuitively appealing. For the Austrian theory of the business cycle attributes unsustainable booms to occasions when bank-financed investment exceeds voluntary saving. Such booms are unsustainable because the unnaturally low interest rates with which they’re associated inevitably give way to higher ones consistent with the public’s voluntary willingness to save. But why should rates rise? They rise because lending in excess of voluntary savings means adding more to the “total money stream” than savers take out of that stream. Eventually that increased money stream will serve to bid up prices. Higher prices will in turn raise the demand for loans, pushing interest rates back up. The increase in rates then brings the boom to an end, launching the “bust” stage of the cycle.

If, in contrast, banks lend more only to the extent that doing so compensates for the public’s attempts to accumulate balances of bank money, the money stream remains constant. Consequently the increase in bank lending doesn’t result in any general increase in the demand for or prices of goods. There is, in this case, no tendency for either the demand for credit or interest rates to increase. The investment “boom,” if it can be called that, is not self-reversing. Instead, it can go on for as long as the increased demand for fiduciary media persists, and perhaps forever.

As I’m not saying anything here that I haven’t said before, I have a pretty darn good idea what sort of counterarguments to anticipate. Among others I expect to see claims to the effect that people who hold onto balances of bank money (or fiduciary media or “money substitutes” or whatever one wishes to call bank-issued IOUs that serve as regularly accepted means of exchange) are not “really” engaged in acts of voluntary saving, because they might choose to part with those balances at any time, or because a bank deposit balance or banknote is “neither a present nor a future good,” or something along these lines.

Balderdash. To “save” is merely to refrain from spending one’s earnings; and one can save by holding on to or adding to a bank deposit balance or redeemable banknote no less than by holding on to or accumulating Treasury bonds. That persons who choose to save by accumulating demand deposits do not commit themselves to saving any definite amount for any definite length of time does not make their decision to save any less real: so long as they hold on to bank-issued IOUs, they are devoting a quantity of savings precisely equal to the value of those IOUs to the banks that have them on their books. As Murray Rothbard himself might have put it — though he certainly never did so with regard to the case at hand — such persons have a “demonstrated preference” for not spending, that is, for saving, to the extent that they hold bank IOUs, where “demonstrated preference” refers to the (“praxeological”) insight that, regardless of what some outside expert might claim, people’s actual acts of choice supply the only real proof of what they desire or don’t desire. According to that insight, so long as someone holds a bank balance or IOU, he desires the balance or IOU, and not the things that could be had for it, or any part of it. That is, he desires to be a creditor to the bank against which he holds the balance or IOU.

And so long as banks expand their lending in accord with their customers’ demonstrated preference for such acts of saving, and no more, while contracting it as their customers’ willingness to direct their savings to them subsides, the banks’ lending will not contribute to business cycles, Austrian or otherwise.

Of course, real-world monetary systems don’t always conform to the ideal sort of banking system I’ve described, issuing more fiduciary media only to the extent that the public’s real demand for such media has itself increased. While free banking systems of the sort I theorize about in my book tend to approximate this ideal, real-world systems can and sometimes do create credit in excess of the public’s voluntary savings, occasionally without, though (as we’ve seen) most often with, the help of accommodative central banks. But that’s no reason to condemn fractional reserve banking. Instead it’s a reason for looking more deeply into the circumstances that sometimes allow banking and monetary systems to promote business cycles.

In other words, instead of repeating the facile cliché that fractional reserve banking causes business cycles, or condemning fiduciary media tout court, Austrian economists who want to put a stop to such cycles, and to do so without undermining beneficial bank undertakings, should inquire into the factors that sometimes cause banks to create more fiduciary media than their customers either want or need.

[Cross-posted from Alt-M.org]

As I reported before, a group of Chinese investors under the EB-5 immigration program have challenged the government’s illegal practice of counting spouses and minor children of investors against the immigration quota for investors. This practice, however, hurts all legal immigrants because the same provision governs the admission of derivatives of all legal immigrants. Counting derivatives dramatically reduces legal immigration, harming people trying to immigrate legally to the United States. The government finally responded to the lawsuit on Friday, and its response leaves much to be desired.

Background

Section 203 of the Immigration and Nationality Act (INA) provides three broad pathways for legal immigrants to receive green cards (i.e. permanent residence):

(a) Preference allocation for family-sponsored immigrants.—Aliens subject to the worldwide level specified in section 201(c) of this title for family-sponsored immigrants shall be allotted visas as follows …

(b) Preference allocation for employment-based immigrants.—Aliens subject to the worldwide level specified in section 201(d) of this title for employment-based immigrants in a fiscal year shall be allotted visas as follows …

(c) Diversity immigrants… aliens subject to the worldwide level specified in section 201(e) of this title for diversity immigrants shall be allotted visas each fiscal year as follows …

Nothing in subsections (a), (b), and (c) of section 203 makes the spouses and minor children of the family members, employee-investors, or diversity lottery winners eligible for status. It is only subsection (d) that creates an opportunity for them to immigrate:

(d) Treatment of family members.—A spouse or child… shall, if not otherwise entitled to an immigrant status and the immediate issuance of a visa under subsection (a), (b), or (c), be entitled to the same status, and the same order of consideration provided in the respective subsection, if accompanying or following to join, the spouse or parent.

Nothing in subsection (d) of section 203 applies the “worldwide levels” (or quotas) under subsection (a), (b), or (c) to the spouses and minor children of immigrants. They are then presumptively not subject to those limits.

The Government’s Argument

1) “Although the Court’s analysis should begin with the INA’s text, the meaning the Court ascribes to the statutory text must reflect the statute’s ‘context.’” -P. 19

The most incredible thing about the government’s response is that it explicitly eschews any effort to explain its practice using the language of section 203(d). I expected them to make the incorrect argument that because an immigrant has the “same status” as another immigrant, they are both subject to the same quota. But this is obviously false, for reasons I explain here. Adult children of U.S. citizens are subject to a quota under subsection (a) of section 203, while minor children are not subject to a quota under section 201(b), yet both receive the same immigrant status. What matters is not the status an immigrant has, but under which provision they receive that status—one with a cap or one without a cap. Yet this bad argument is better than what the government argues in its response, which is nothing at all.

2) “Section 203(d) is a means by which a derivative spouse or child can obtain a visa under their principal’s applicable category in Section 203(a), (b) or (c).” Emphasis added, P. 22

This is as close to an explanation as the government gives for its interpretation. It is asserting that dependents don’t receive status under subsection (d) of section 203 which has no quota, but under subsections (a), (b), and (c) which do have quotas. Yet it provides zero textual support for this view. In fact, the investors’ brief (p. 17) cites several provisions where Congress explicitly describes dependents as receiving status under subsection (d): 8 U.S.C. 1101(a)(15)(V); 8 U.S.C. 1154(l)(2)(C); 8 U.S.C. 1186b; 8 U.S.C. 1255(i)(1)(B); and Public Law 107 – 56.

3) “The country cap also explicitly applies to derivatives, as stated in INA section 202(b)… . And since the country cap is a subset of the overall family and employment-based caps, then equally clearly, if the country cap applies to derivatives, then so too do the overall caps” -Pp. 25-26

There are two types of immigration quotas: 1) “worldwide levels” that limit the absolute number of immigrants, and 2) “per-country” levels that limit the share of the worldwide level that a single nationality can receive. Section 202 of the INA does mention rules for counting some spouses and minor children against the per-country limits, but it never references spouses and minor children admitted under section 203(d). That is notable because subsection (d) of section 203 explicitly describes two types of spouses and minor children—those entitled to status under subsection (d) and those “otherwise entitled to immigrant status… under subsection (a), (b), or (c).”

This second group includes, for example, certain special immigrants under subsection (b)(4). The reason that spouses and children of these special immigrants are counted against the limits is that they are part of the definition of a special immigrant (see section 101(a)(27)). That means that these derivatives have to be counted because they receive status not under subsection (d) of section 203, which has no cap, but under the capped subsections of section 203. Under the government’s view, these provisions that include spouses and children as part of the definition of special immigrants serve no purpose at all, which violates a basic canon of statutory interpretation. The government is attempting to confuse the two types of derivatives in order to save its erroneous interpretation.

4) “Congressional intent is further demonstrated by the fact that when Congress exempts derivative spouses and children from an applicable numerical cap, it almost always does so explicitly.” -P. 38.

This statement is the opposite of the truth. In support of its statement, the government cites a number of categories of nonimmigrants (H-1Bs, H-2Bs, H-1B1s, E-3s, Ts, Us) and special immigrant Iraqis and Afghans, but in almost every case it cites, the spouse or child is part of the definition of the eligible category. For example, H-1Bs are defined in section 101(a)(15)(H) as “an alien… who is coming temporarily to the United States to perform services… in a specialty occupation… and the alien spouse and minor children of any such alien.” In other words, spouses and children start out eligible, and so subject to the quota, so if Congress wanted to exempt them, it had to do so explicitly. But in section 203, spouses and children start out ineligible, and so not subject to a cap, and are separately made eligible under subsection (d), which has no quota, so there is no need to explicitly exempt them.

In every comparable case, where the spouses and children start out ineligible and then separately are made eligible, Congress specifically required them to be counted. The Refugee Act of 1980, section 207 of the INA, has a directly comparable provision. Spouses and children are not eligible under the definition of a refugee under section 101(a)(42) and so not subject to the cap on refugees in section 207(a), but section 207(c)(2)(A) makes them eligible, and when it does so, it explicitly states, “Upon the spouse’s or child’s admission to the United States, such admission shall be charged against the numerical limitation …” In other words, exactly the language that isn’t in section 203(d).

 5) “There is a particular reason why Congress would have specified derivative counting in this way in the Refugee Act: unlike the caps at issue in this case, which are set by statute, the refugee cap is established by the President. Thus, the specific derivative provision is a deliberate check on the very broad authority that Congress had otherwise delegated to the President in the Refugee Act.” -P. 40

This explanation simply doesn’t work for the government. Why would Congress need a special “check” on the President’s authority to not count derivatives if, on the government’s theory, the statute requires them to be counted to begin with? It doesn’t make any sense.

6) “Plaintiffs point to a single allegedly contrary provision in the Refugee Act of 1980.” -P. 39

This is just false. The investors’ motion also cites three other directly comparable instances, all of which were enacted at the exact same time as section 203 in 1990 (pp. 21-22). These provisions provided green cards to Hong Kong employees, displaced Tibetans, and transitional diversity visa applicants. In each case, Congress created the category for principal applicants and separately created the eligibility for their spouses and minor children using almost exactly the same language as section 203(d). But in 1991, it amended each provision to require that spouses and children be counted against those quotas. Did the government not actually read the motion or did it misrepresent it?

7) “The 1990 Act contained exactly the same language as the 1965 Act… . When Congress repeats language with a well understood construction in a new statute, it is presumed to intend to continue that same construction.” -P. 23

This is also false. Under the Immigration Act of 1965, derivatives were explicitly required to be counted, being listed in a subsection that began “Aliens who are subject to the numerical limitations specified in section 201(a) shall be allotted visas … as follows:”. The last category was a “spouse or child” of a primary applicant. In 1990, spouses and children became their own subsection, not included in the categories subject to the worldwide limits. The government’s claim is simply untrue.

8) “Plaintiffs contend that the restructuring of Section 203 in the 1990 Act had huge substantive effects by taking EB-5 investors’ spouses and children… completely out of the preference system altogether.”

This is again false. Spouses and children of investors are still part of the preference system as their eligibility is tied to their parents or spouses, and they must wait alongside them. They cannot simply enter “outside the preference system.”

9) “Not once in the thirty years since the 1990 Act was passed has any court ever interpreted the INA in the way Plaintiffs now claim Congress intended all along.” -P. 35

First, the EB-5 backlog didn’t exist until 2014, so they never would have had standing to sue prior to then. Second, this isn’t the first time that the government has been caught miscounting green cards years after it implemented the policy. The Johnson, Nixon, and Ford administrations interpreted the Cuban Adjustment Act of 1966 to count Cubans against the immigration quotas, and almost a decade after the bill’s passage, the Ford administration was sued, and it admitted in court that it was wrong to count them all along. The fact that a practice has occurred for many years does not mean that the practice is correct.

10) “The D.C. Circuit has cautioned that ‘legislative posturing serves no useful purpose …’ …  The isolated floor statements that Plaintiffs cite thus carry little weight in constructing the meaning of the INA’s provisions respecting counting derivatives towards the annual allotments of EB-5 visas.” -P. 33

The government dismisses evidence that I reported on here that clearly indicates that members of Congress explicitly expected that the EB-5 program would admit 10,000 investors, not 3,500 investors and 6,500 derivatives. Some members explicitly described the process through which they envisioned spouses and children entering. While the government is correct that this shouldn’t trump the text of the law, it doesn’t—it reinforces what is already there. The government responds by citing an ambiguous conference committee report on the final bill that does not contradict the floor statements of the members and does not explicitly explain how it deals with the issue of derivatives. This could be because the conference committee was just as interested in determining the outcome of the bill as those individual members and didn’t want to lose members by stating one way or another.

11) “No legislative history even remotely supports the proposition that Congress meant to exclude all derivatives from applicable caps …” -P. 34

This is also false. As explained above, in the Immigration Act of 1990, Congress enacted provisions providing green cards for Hong Kong employees, displaced Tibetans, and transitional diversity visa applicants using the same language as section 203(d). In 1991, Congress amended the 1990 act to explicitly require counting of spouses and minor children of the principals. Here is an example, the displaced Tibetan provision, where the language Congress added in 1991 is the parenthetical reference to spouses and children admitted under subsection (d):

(a) In General.–Notwithstanding the numerical limitations in sections 201 and 202 of the Immigration and Nationality Act, there shall be made available to qualified displaced Tibetans described in subsection (b) (or in subsection (d) as the spouse or child of such an alien) 1,000 immigrant visas in the 3-fiscal-year period beginning with fiscal year 1991.

(d) Derivative Status for Spouses and Children.–A spouse or child … shall, if not otherwise entitled to an immigrant status and the immediate issuance of a visa under this section, be entitled to the same status, and the same order of consideration, provided under this section, if accompanying, or following to join, his spouse or parent.

If derivatives were already required to be counted against the quota, Congress would not have needed to insert any language into this provision, but it did anyway, making its interpretation of this language manifest to all. Congress made this amendment in every relevant place except one: subsection (d) of section 203. This is as close as it gets to positive proof of Congress’s interpretation of the statute.

Conclusion

In sum, the government provides no theory at all of how the plain language of the statute requires counting. Its indirect textual evidence falls flat and even contradicts its claims, and it repeatedly misstates the legislative history. The government concludes by fearmongering about how much legal immigration would increase if it were forced to implement the statute Congress actually passed. But legal immigration isn’t scary, and even if it were, it is even scarier to allow the government the power to amend the laws without Congress.

Hours ago, Illinois Gov. Bruce Rauner (R) vetoed legislation that would have subjected enrollees in short-term health insurance plans to higher deductibles, higher administrative costs, higher premiums, and lost coverage. The vetoed bill would have blocked the consumer protections made available in that market by a final rule issued earlier this month by the U.S. Department of Health and Human Services, and would have (further) jeopardized ObamaCare’s risk pools by forcing even more sick patients into those pools.

Short-term plans are exempt from federal health insurance regulations, and as a result offer broader access to providers at a cost that is often 70 percent less than ObamaCare plans.

Rather than allow open competition between those two ways of providing health-insurance protection, the Obama administration sabotaged short-term plans. It forced short-term plan deductibles to reset after three months, and forced consumers in those plans to reenroll every three months, changes that increased administrative costs in that market.

The Obama administration further subjected short-term plan enrollees to medical underwriting after they fell ill – which meant higher premiums and cancelled coverage for the sick. Prior to the Obama rule, a consumer who purchased a short-term plan in January and developed cancer in February would have coverage until the end of December, at which point she could enroll in an ObamaCare plan. The National Association of Insurance Commissioners complained that the Obama rule required that her coverage expire at the end of March – effectively cancelling her coverage and leaving her with no coverage for up to nine months. The Obama administration stripped consumer protections from this market by expanding medical underwriting after enrollees get sick – something Congress has consistently tried to reduce.

Earlier this month, HHS restored and expanded the consumer protections the Obama administration gutted. It allowed short-term plans to cover enrollees for up to 12 months, and allowed insurers to extend short-term plans for up to an additional 24 months, for a total of up to 36 months. These changes allow short-term plans to offer deductibles tallied on an annual basis, rather than deductibles that reset every three months. They spare enrollees and insurers the expense of re-enrolling every three months. Most important, they allow short-term plans to protect enrollees who get sick from medical underwriting at least until they again become eligible to enroll in an ObamaCare plan the following January.

Indeed, HHS clarified that because the agency has no authority to regulate standalone “renewal guarantees” that allow short-term plan enrollees who fall ill to continue paying healthy-person premiums, “it may be possible for a consumer to maintain coverage under short-term, limited-duration insurance policies for extended periods of time” by “stringing together coverage under separate policies offered by the same or different issuers, for total coverage periods that would exceed 36 months.” As HHS Secretary Alex Azar explains, this helps ObamaCare:

Our decision to allow renewability and separate premium protections could also allow consumers to hold on to their short-term coverage if they get sick, rather than going to the exchanges, which improves the exchange risk pools.

I made that very argument in my comments on the proposed rule.

Illinois law automatically adopts whatever rules and definitions the federal government creates for short-term plans. If Illinois legislators had just done nothing, millions of Illinois residents automatically would have had a health insurance option that is more affordable and provides better coverage than ObamaCare.

But this is Illinois.

In their infinite wisdom, Illinois legislators passed legislation that once again would have exposed short-term plan enrollees to higher deductibles, higher administrative costs, higher premiums, and cancelled coverage. The bill would have:

  • Required that initial contract terms for short-term plans last no longer than six months. It further provided that such plans could be extended for no more than six additional months.
  • Mandated that consumers who wish to keep purchasing consecutive short-term plans go uninsured for 60 days. Some consumers would inevitably develop expensive conditions during that period, and therefore be left with no coverage until the next ObamaCare open enrollment period. 
  • Prohibited renewal guarantees. The legislation specifically cut off this option. As a result, it would have dumped every single short-term plan enrollee with an expensive illness into the Exchanges. Ironically, Illinois legislators who thought they were bolstering ObamaCare actually passed a bill that would have sabotaged it.

Thankfully, Gov. Rauner stopped this ignorant, ridiculous effort to deny consumer protections to short-term plan enrollees. All eyes now turn to California, where Gov. Jerry Brown (D) must sign or veto legislation that would deny medical care to those who miss ObamaCare’s open enrollment period – by banning short-term plans altogether.

The Trump administration reached a deal with Mexico today on some bilateral issues in the renegotiation of the North American Free Trade Agreement (NAFTA). Some details of what was agreed are here. Other issues have been reported by the press as having been agreed, but until we see official government announcements, we are skeptical that those issues have been fully resolved.

This is not the conclusion of the NAFTA talks, because there are a number of outstanding issues, and Canada has to be brought back to the table as well. Nevertheless, today’s United States-Mexico deal is, in some sense, “progress.” In another sense, however, it is a step backwards. To illustrate this, let’s look at the example of what was agreed on auto tariffs.

NAFTA eliminates tariffs on trade between Canada, Mexico, and the United States, but only for products that meet specific requirements to qualify as being made in North America. For example, you couldn’t make a car in China, ship it to Mexico and put the tires on, and then export it to the United States at the NAFTA zero tariff. Under current NAFTA rules, in order to qualify for duty free treatment, 62.5 percent of the content of a vehicle has to be from the NAFTA countries. 

The Trump administration has been opposed to this content threshold, arguing that it needs to be higher. A key part of the bilateral talks between the United States and Mexico was to address this issue, and also add some conditions related to wage levels.

With regard to the content threshold, the United States has asked for this requirement to be raised, and according to the fact sheet released by USTR, the new content requirement will be increased to 75 percent. On the wage levels, the United States has pushed for a provision that requires 40 percent of the content of light vehicles and 45 percent of the content of pickup trucks to be made by workers who earn at least $16 an hour, and Mexico appears to have agreed to this as well. These changes make it harder for Mexican producers to satisfy the conditions to get the zero tariffs, while Canada and the United States would not be affected by this change.

So what’s the point of all this? The goal of the Trump administration’s negotiators was to make it more difficult for autos to qualify for the zero tariffs. In other words, they are taking some of the free trade out of NAFTA.

Along the same lines, reports suggest there is a provision that would allow the United States to charge tariffs above the normal 2.5 percent tariff rate (which applies to countries that don’t have a trade agreement with the United States) for any new auto factories built in Mexico. It is not clear from today’s announcement whether this is included in the newly agreed provisions.

The impact of these changes, if the NAFTA talks are completed and the new rules go into effect, will vary by producer, so it is hard to give a precise assessment of how much it will raise costs overall. The 25 percent auto tariff that the Trump administration is currently considering – ostensibly based on national security, but really just protectionism – is likely to raise auto prices a lot. A recent study estimates that if Trump implements his proposed 25 percent tariff on auto imports, the average price of a compact car would increase by $1,408 to $2,057, while luxury SUVs and crossover vehicle prices could increase by $4,708 to $6,972. No similar study has yet been done for the new NAFTA content requirements, but whatever the final figure may be, it is clear that these stricter content requirements will raise prices to some extent, which will make autos more expensive for consumers and potentially make North American production less competitive.

The NAFTA renegotiation has led to great market uncertainty, and it would be nice to get this all resolved. But before we applaud the completion of any deal, what matters most is in the details. From what we know at the moment, those details suggest that NAFTA may have been made worse, not better. And there are still a lot of details to work out. A full assessment of the new NAFTA will have to await all of the final terms.

Furthermore, President Trump suggested that while Canada can join soon, it may not be part of the agreement at all, and instead could face a tariff on car exports to the United States. Leaving Canada out of a new NAFTA would be a mistake. On the phone during the announcement, Mexican President Enrique Pena Nieto remarked on more than one occasion that he was looking forward to Canada rejoining the talks. This should be received positively as it suggests Mexico is still committed to a trilateral deal.  What happens next is anyone’s guess, but we should keep our eyes open for the return of Canada’s Foreign Minister Chrystia Freeland to Washington to wrap up the discussions soon. Let’s hope the other changes still under discussion point us in a more positive direction.

Sometimes it’s worth reading the fine print in obscure regulatory proposals. One such example is contained in a “proposed rulemaking” by the EPA on what are called “dose-response models.”

Buried in the Federal Register a few months back (on April 30) is this seemingly innocuous verbiage:

EPA should also incorporate the concept of model uncertainty when needed as a default to optimize dose risk estimation based on major competing models, including linear, threshold, U-shaped, J-shaped and bell-shaped models.

Your eyes glaze over, right?

Instead they should be popping out. EPA is proposing a major change in the way we regulate radiation, carcinogens, and toxic substances in our environment.

Since the 1950s, environmental regulations have largely been based upon something called the “linear no-threshold” (LNT) model, which holds, for example, that the first photon of ionizing radiation has the same probability of causing cancer as the bazillionth one.

The hypothetical mechanism is that each photon has an equal probability of zapping a single base pair in our DNA, and that can result in a breakage which may have the very remote possibility of inducing cancer.

The LNT is in fact fallout from, well, fallout—the radioactive nuclides that were the widely dispersed byproduct of atmospheric testing of nuclear weapons from the 1940s through the early 1960s. University of Massachusetts toxicologist Ed Calabrese has painstakingly built a remarkable literature showing that the LNT was largely the work of one man, a Nobel Prize-winning mutation geneticist named Hermann Muller. His work was classified by the old Atomic Energy Commission and, amazingly, as shown by Calabrese, likely not peer reviewed.

His work was also wrong. Since Muller’s early work on x-ray-induced mutations in fruit flies, it has been discovered that DNA breaks all the time and that our cells carry their own repair kit. Cancer can occur when those repair mechanisms are overwhelmed by large numbers of breakages.

Think of the sun’s radiation, which includes the ionizing ultraviolet wavelengths. The LNT model implies the first photon we experience can cause cancer. In reality there is clearly a threshold above which the cancer probability rises. Everyone is exposed to the sun, but the populations prone to basal cell skin cancers are those in very sunny environments, which explains why the disease is so prevalent in Australia and pretty much nonexistent in the planet’s northernmost populated areas.

In fact the LNT model isn’t just wrong—nature actually works opposite to it. Small amounts of exposure to things that are toxic in large amounts can actually be beneficial. Again, consider sunlight. It’s required to catalyze the final synthesis of Vitamin D. Absent Vitamin D, people can develop serious diseases, such as rickets, with disfiguring and sometimes fatal sequelae.

The alternative model, which Calabrese calls the “biphasic dose-response” or “hormetic” model, is also largely his handiwork. Note that this is not homeopathy, which holds, nonsensically, that the smaller the dose of something, the greater the therapeutic effect.
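To make the difference in shape concrete, here is a minimal sketch of toy dose-response curves. The functional forms and parameters are illustrative assumptions of mine, not the EPA’s or Calabrese’s actual models:

```python
# Toy dose-response curves: "risk" as a function of dose. Negative values mean net benefit.
def lnt(dose, slope=1.0):
    """Linear no-threshold: every increment of dose adds the same increment of risk."""
    return slope * dose

def threshold(dose, slope=1.0, d0=2.0):
    """No added risk until the dose exceeds a threshold d0."""
    return slope * max(0.0, dose - d0)

def hormetic(dose, benefit=0.8, slope=1.0, d0=2.0):
    """Biphasic / J-shaped: a small net benefit at low doses, harm at high doses."""
    return slope * max(0.0, dose - d0) - benefit * dose / (1.0 + dose)

for d in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"dose {d:>4}: LNT {lnt(d):6.2f} | threshold {threshold(d):6.2f} | hormetic {hormetic(d):6.2f}")
```

In the toy hormetic curve, risk dips below zero (a net benefit) at low doses and only climbs once the dose passes the threshold, which is the J shape the LNT model rules out by assumption.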

The biphasic response is so ubiquitous that it forms much of the basis for modern pharmacology. Small doses of things like certain snake venoms can reduce high blood pressure with all the resultant clinical benefits. Large doses are obviously fatal. In fact, the first Angiotensin Converting Enzyme inhibitor antihypertensives (referred to as ACE inhibitors) were derived from the venom of a Brazilian viper.

The obtuse verbiage quoted from the Federal Register means that EPA is proposing to, where appropriate (and that may be for a large number of instances), substitute other models, including the hormetic one, for the inappropriate LNT.

This rather momentous proposed change owes itself largely to one man—Ed Calabrese, who just happens to be a Cato adjunct scholar in the Center for the Study of Science. Calabrese has hundreds of papers in the scientific literature largely documenting the prevalence of hormesis and the rather shoddy way that the LNT was established and maintained, as well as a tremendous review article on it published last year in one of the Nature family of journals.

Last Thursday, Tucker Carlson invited Peter Kirsanow onto his top-rated Fox News show Tucker Carlson Tonight to discuss illegal immigration and crime. They began the segment by playing a recent clip of me and Carlson arguing about data on illegal immigrant criminality in Texas. In that earlier segment, Carlson said we don’t have good data on illegal immigrant criminality and I said we do, specifically from the state of Texas. The data show that illegal immigrants have a lower murder conviction rate than native-born Americans. 

Kirsanow responded to my clip in a multi-minute near-monologue. Unfortunately, Kirsanow made many errors and misstatements. His comments on television parroted a piece that he wrote earlier this year in National Review. That piece made so many mathematical, definitional, and logical errors that I rebutted it in detail in Reason this February.

Since I was not invited on Thursday’s segment to debate Kirsanow while he criticized my points and presented his own, I’ve decided to respond here.  Below are Kirsanow’s quotes from his recent appearance on Tucker Carlson Tonight, followed by my rebuttal.

There’s something called the State Criminal Alien Assistance Program and you can extrapolate from that and get pretty reliable data.

No, you cannot extrapolate from the State Criminal Alien Assistance Program (SCAAP) data to get reliable national estimates of illegal immigrant criminality. The subsequent statistics that Kirsanow uses in his segment are nearly all from a 2011 Government Accountability Office (GAO) report that specifically says, “[w]hile our analysis provides insight into the costs associated with incarcerating criminal aliens in these states and localities, the results of this analysis are not generalizable to other states and localities.” A follow-up GAO report on SCAAP in 2018 repeated the same warning that “[o]verall, our findings are not generalizable to criminal aliens not included in our federal and state and local study populations.” Data from the report that Kirsanow relies upon cannot be used for Kirsanow’s purposes.

SCAAP is a federal program that is supposed to compensate states and localities for incarcerating some illegal immigrants, but it is not a reliable program. As Kirsanow himself admitted, SCAAP only “partially reimburses states and localities for the cost of incarcerating certain criminal aliens [emphasis added].”  States also must choose whom to report to the federal government for SCAAP refunds, which are often small compared to the cost of incarceration, so requests are inconsistent, partial, and the criteria for reporting vary considerably by state. 

He [Alex Nowrasteh] conveniently mentioned Texas to claim that the homicide rates among illegal aliens is 44 percent lower than that of lawful residents. He chose the one state where it is true that the homicide rate is lower for illegal aliens, by 15 percent, not 44 percent.

Kirsanow is mixing and matching his sources here. First, I said that the homicide conviction rate for illegal immigrants in Texas was 44 percent below that of natives in 2016. Unique among all American states, Texas records criminal convictions by crime and the immigration status of the person convicted or arrested. I requested and received data on this from the Texas Department of Public Safety and then made public information requests to every state to see if they kept similar data, but none had. 

Second, Kirsanow said that Texas is the one state where illegal immigrant homicide rates are below those of natives. Even if we analyze the SCAAP data in the GAO reports in the incorrect way that Kirsanow does, there is no evidence for his claim or that the homicide rate for illegal immigrates in Texas in 15 percent below that of native-born Americans.  Kirsanow was likely citing a Cato Immigration Research and Policy Brief that looked at the relative rates of homicide convictions in Texas in 2015, but he got the percentage wrong. In 2015, illegal immigrants had a homicide conviction rate that was 16 percent below that of native-born Americans according to our Brief. 

Second, Kirsanow said that Texas is the one state where illegal immigrant homicide rates are below those of natives. Even if we analyze the SCAAP data in the GAO reports in the incorrect way that Kirsanow does, there is no evidence for his claim or that the homicide rate for illegal immigrants in Texas is 15 percent below that of native-born Americans. Kirsanow was likely citing a Cato Immigration Research and Policy Brief that looked at the relative rates of homicide convictions in Texas in 2015, but he got the percentage wrong. In 2015, illegal immigrants had a homicide conviction rate that was 16 percent below that of native-born Americans according to our Brief. 

Kirsanow got this number from the 2011 GAO report mentioned above. That GAO report does state that there were 295,959 incarcerations of criminal aliens in state and local prisons over the course of 2009. Kirsanow incorrectly interpreted what that number meant and made many other errors. 

First, the GAO’s definition of criminal aliens is “[n]oncitizens who are residing in the United States legally or illegally and are convicted of a crime.” Thus, the data on criminal aliens also include legal immigrants who have not yet become citizens. On television, Kirsanow erroneously assumed that the term criminal aliens is synonymous with illegal immigrants, even though he previously acknowledged the distinction in a National Review article, in which he wrote “[a]ccording to GAO, in FY 2009 295,959 SCAAP criminal aliens, of whom approximately 227,600 are illegal aliens, were incarcerated in state jails and prisons.”  

Second, the 295,959 number is the total number of incarcerations of criminal aliens over the course of 2009, not the number of individual criminal aliens incarcerated. The 2011 GAO report states this bluntly: “SCAAP data do not represent the number of unique individuals since these individuals could be incarcerated in multiple SCAAP jurisdictions during the reporting period.”

In other words, the 295,959 figure counts the same people each time they were incarcerated. If an individual criminal alien served 10 short sentences, being released and re-incarcerated after each one, that single alien would account for 10 incarcerations. But Kirsanow counted him as 10 separate individuals. In his National Review piece on this subject, Kirsanow then compared the number of native-born individuals incarcerated with the total native-born population to estimate relative incarceration rates. In other words, he compares the flow of criminal aliens into prison, divided by the stock of all aliens in the population, with the stock of natives in prison, divided by the stock of all natives in the population. Kirsanow confused stocks with flows, and his apples-to-oranges comparison produced an inflated, incorrect illegal immigrant incarceration rate.
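To see how confusing flows with stocks inflates a rate, here is a minimal sketch with made-up numbers (not the GAO’s or Kirsanow’s figures):

```python
# Made-up numbers for illustration only; not the GAO's or Kirsanow's figures.
population = 1_000_000        # stock: everyone in the group
unique_prisoners = 1_000      # stock-style count: distinct people incarcerated during the year
incarceration_events = 3_000  # flow: prison admissions, counting repeat offenders each time

rate_from_people = unique_prisoners / population        # 0.10%
rate_from_events = incarceration_events / population    # 0.30% -- triple, from the same people

print(f"people-based rate: {rate_from_people:.2%}")
print(f"event-based rate:  {rate_from_events:.2%}")
```

Dividing incarceration events by the population triples the apparent rate in this example, even though the same thousand people account for every admission.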

Third, a quick look at the American Community Survey shows just how wrong Kirsanow is. In 2009, the ACS reported that there were 162,579 non-citizens incarcerated in all federal, state, and local adult correctional facilities (S2601B, 1-year). This is slightly more than half of the 295,959 incarcerations that SCAAP reports in just state and local prisons. That makes it logically impossible for the 295,959 number to refer to the total number of criminal aliens incarcerated. The ACS counts stocks at a specific point in time; the GAO counted flows. Kirsanow errs in treating the SCAAP figures as if they were stocks of illegal immigrants incarcerated. 

They don’t count the millions of offenses and crimes committed by illegal aliens.

I wish we could count the millions of unsolved or unreported crimes and then study the demographics of the people who committed them, but that’s impossible. Furthermore, we would also have to have that information for native-born Americans to compare the illegal immigrant and native-born crime rates. To go even further, I wish we could count everything that didn’t happen, as it would immensely improve our world and social science. In the real world, Kirsanow’s statement does not have much relevance. 

John Lott did probably the most methodologically rigorous and comprehensive examination of this type using Arizona Department of Corrections Data.

Kirsanow approvingly cited this working paper by economist John R. Lott Jr. of the Crime Prevention Research Center, in which he purported to find that illegal immigrants in Arizona from 1985 through 2017 have a far higher prison admission rate than U.S. citizens. However, Lott made a small but fatal error that undermined his entire finding: He misidentified a variable in the dataset. Lott wrote his paper based on a dataset he obtained from the Arizona Department of Corrections (ADC) that lists all admitted prisoners in Arizona. According to Lott, the data allowed him to identify “whether they [the prisoners] are illegal or legal residents.” Yet the dataset does not allow him or anybody else to identify illegal immigrants.

The variable that Lott focused on is “CITIZEN.” That variable is broken down into seven categories. Lott erroneously assumed that the third category, called “non-US citizen and deportable,” only counted illegal immigrants. That is not true because non-US citizen and deportable immigrants are not all illegal immigrants, as confirmed by the ADC – the source of Lott’s data. A significant proportion of non-U.S. citizens who are deported every year are legal immigrants who violate the terms of their visas in one way or another, frequently by committing crimes. According to the American Immigration Council, about 10 percent of people deported annually are Lawful Permanent Residents or green card holders—and that doesn’t include the non-immigrants on other visas who were lawfully present in the United States and then deported. 

Lott mistakenly chose a variable that combines an unknown number of legal immigrants with an unknown number of illegal immigrants and assumed that it only counted illegal immigrants. Lott correctly observed that “[l]umping together documented and undocumented immigrants (and often naturalized citizens) may mean combining very different groups of people.” Unfortunately, the variable he chose also lumped together legal immigrants and illegal immigrants. I wrote about the fatal flaw in Lott’s paper here in February. Lott and I had an exchange here. Kirsanow should have known that Lott’s paper was not methodologically sound because Lott misidentified the only variable that mattered for the analysis. Lott’s working paper is not the slam dunk that Kirsanow claimed it was.   

Alex is very knowledgeable and that’s why it’s puzzling that he won’t acknowledge the overwhelming amount of data that shows that illegal aliens not only commit more crimes, at a higher rate that is, than lawful residents but more serious crimes at a far higher rate than legal residents.

As I’ve shown above, Kirsanow misread, misinterpreted, and incorrectly defined numerous terms in the GAO report that was his near-exclusive source of information to make an intellectually indefensible case that illegal immigrants are more likely to be criminals than native-born Americans. What’s even more puzzling is that Kirsanow is aware of his errors after a previous exchange that he and I had on this very issue but he chose to repeat them on television regardless. 

Cato scholars have produced much original research on illegal immigrant criminality.  Based on data from the state of Texas in 2015, we found that illegal immigrants have a lower criminal conviction rate than native-born Americans for most crimes in that state (number of convictions), the rate of homicide convictions for illegal immigrants is below that of native-born Americans in 2016 (the number of people in each subpopulation convicted), and that the incarceration rates for illegal immigrants are below those of native-born Americans (but above those of legal immigrants).  Peer-reviewed research also points in roughly the same direction.   

Policy analysts, commentators, politicians, and members of the media have a duty to honestly parse the facts and debate these complex issues in good faith.

Late last week UPI news ran a report by E.J. Mundell with the headline, “Government efforts to curb opioid prescriptions might have backfired.” It cites two separate studies published online in JAMA Surgery on August 22 that examined two different restrictive opioid policies that fell victim to the Law of Unintended Consequences.

The first study, by researchers at the University of Michigan, evaluated the impact of the Drug Enforcement Administration’s 2014 rescheduling of hydrocodone (Vicodin) from Schedule III to Schedule II. Prescriptions for Schedule III narcotics may be phoned or faxed in by providers, but Schedule II narcotics require the patient to see the prescriber in person in order to obtain a prescription. The DEA’s goal was to reduce the number of Vicodin pills, popular with non-medical users, available for diversion to the black market.

The study looked at 21,955 post-surgical patients across 75 hospitals in Michigan between 2012 and 2015 and found that the number of hydrocodone pills prescribed after the 2014 schedule change increased by an average of seven 5mg tablets. The total Oral Morphine Equivalent of prescribed hydrocodone did not change significantly after the DEA made hydrocodone Schedule II. However, the refill rate decreased after the change. The study’s abstract concluded, “Changing hydrocodone from schedule III to schedule II was associated with an increase in the amount of opioids filled in the initial prescription following surgery.”

As a practicing general surgeon, my initial reaction to this study was: “Tell me something I don’t know.” Prior to the 2014 schedule change, I would often start off prescribing a small amount of hydrocodone to some of my post-op patients (depending upon the procedure and the patient’s medical history) with the knowledge that I could phone in a refill for those patients who were still in need of it for their pain after the initial supply ran out. Once it was rescheduled, I changed my prescribing habits. Not wanting any of my patients to run out after hours, over a weekend, or on a holiday—when the office is closed and their only recourse would be to go to an emergency room or urgent care center to get a prescription refill—I increased the amount I prescribe (based on my best estimate of the maximum number of days any individual patient might need hydrocodone) to reduce the chances of them needing a refill. This results in some patients having leftover Vicodin pills in their medicine cabinet. On the other hand, fewer of those patients need refills.

Not surprisingly, many of my clinical peers have done the same thing. It’s not a surprise because most physicians place the interests of their patients ahead of the interests of regulators and bureaucrats. So the adjustment made in postoperative hydrocodone prescribing was basically a “no brainer.” 

Unfortunately, in the past couple of years, many states have begun restricting the number and dosage of pills that can be prescribed postoperatively—in some states the limit is 5 days, in others as few as 3 days—so many patients now must go to the office (or emergency room or urgent care) just a few days after their operation to get that refill after all. The American Medical Association and most medical specialty associations oppose a proposal before the U.S. Senate to impose a national 3-day limit on opioid prescriptions.

The second study, from researchers at Dartmouth Medical School, evaluated the impact of New Hampshire’s Prescription Drug Monitoring Program on the number of opioid pills prescribed. At this point, every state has a PDMP, a program that surveils opioid prescribing and use by providers and patients. New Hampshire’s PDMP went active January 1, 2017. The goal, again, is to reduce the number of pills prescribed. 

As I have written here and here, there is evidence in the peer-reviewed literature that PDMPs may indeed be intimidating doctors into reducing the number and dosage of pain pills they prescribe—but this is only serving to drive non-medical users to cheaper and more dangerous heroin (often laced with fentanyl) while making patients needlessly suffer.

However, this latest study, which looked at the number of opioids prescribed for postoperative pain to 1057 patients at the Dartmouth-Hitchcock Medical Center during the six months preceding and the six months following the activation of New Hampshire’s PDMP, came to a different conclusion. It found that the mean number of pills prescribed during the six months preceding the PDMP had decreased 22.1 percent, but that during the six months after the PDMP the rate of decrease slowed to just 3.9 percent. It concluded, “A mandatory PDMP query requirement was not significantly associated with the overall rate of opioid prescribing or the mean number of pills prescribed for patients undergoing general surgical procedures.” 

The study is limited by the small number of patients, its restriction to a single hospital, and the short length of follow-up. But it does add to the growing body of evidence suggesting that PDMPs are not achieving their mission: reducing the overdose death rate while, at the same time, assuring that patients receive adequate treatment of their pain.

Alas, despite the immutable presence of the Law of Unintended Consequences, don’t expect policymakers to rethink their misguided prohibitionist approach to the opioid overdose problem any time soon.

As you’ve no doubt heard by now, on Tuesday, Michael Cohen, President Trump’s erstwhile “fixer,” pled guilty to, among other charges, making an illegal campaign contribution in the form of a $130,000 “hush money” payment to adult film star Stormy Daniels. That payment was made, Cohen affirmed, “at the direction of a candidate for federal office”—Donald J. Trump—“for the principal purpose of influencing the election.” 

If that’s true, would Trump’s participation in that scheme rise to the level of “high Crimes and Misdemeanors”? Maybe: you can argue it both ways, so I will.

The case against the Stormy payoff as impeachable offense would characterize it as the sort of de minimis legal violation impeachment isn’t concerned with. Just as you don’t need a crime to have an impeachable offense, the commission of a crime doesn’t automatically provide grounds for impeachment. Murder is a crime and an impeachable offense—even according to Rudy Giuliani—but you wouldn’t impeach a president for, say, importing crocodile feet in opaque containers or misappropriating the likeness of “Smokey Bear,” because those offenses don’t speak to his fitness for high office.

Impeachment opponents can argue that the criminal offense alleged here depends on a contested application of the Federal Election Campaign Act. In the 2012 prosecution of John Edwards, three former FEC commissioners testified that third-party payments to Edwards’ pregnant mistress would not have been considered campaign contributions.

The president’s defenders can also—though this may be awkward for some—compare Trump’s troubles to Bill Clinton’s two decades ago: unlawful acts committed as part of a scheme to conceal a private sexual affair. Though many of them sang a different tune in the ‘90s, they can appeal to the dominant historical consensus that impeaching Clinton for that was like wheeling out the proverbial hundred-ton gun to blast a squirrel.

The case that the Stormy payoff is an impeachable offense depends on a different, but equally plausible framing. In Trump’s case, the unlawful act quite plausibly affected the outcome of the 2016 election. Cohen made the payment less than two weeks before Election Day, in what turned out to be an extraordinarily close contest. As Laurence Tribe and Joshua Matz note, the Framers repeatedly identified “corrupt acquisition of the presidency as a paradigm case for impeachment.” One of the Framers’ key concerns was the possibility of a candidate bribing the Electors—an imperfect analogy to what’s alleged in Trump’s case. But impeachment advocates might also point to our most recent impeachment case: Judge G. Thomas Porteous, removed by the Senate in 2010, in part for corrupt acquisition of his post. Article IV of the Porteous impeachment charged the judge with lying to the Senate about his past in order to secure confirmation to the federal bench, thus “depriv[ing] the United States Senate and the public of information that would have had a material impact on his confirmation.”

Jerry Ford went too far when he said that an impeachable offense is “whatever a majority of the House considers it to be at a given moment in history.” Still, the scope of the impeachment power is much broader than is commonly recognized. It covers what Hamilton described as “those offenses which proceed from the misconduct of public men, or, in other words, from the abuse or violation of some public trust.” As the legal scholar Frank Bowman sums up: “‘high crimes and misdemeanors’ are serious offenses that either endanger the political order or demonstrate an official’s manifest unfitness to continue in office.” That leaves ample room for argument and interpretation. Moreover, while legal analysis may be able to tell you when impeachment is permissible in a given case, it can’t tell you whether it’s a good idea.

The fact that Michael Cohen has potentially implicated Donald Trump in a felony violation of federal election law has increased the president’s chances of facing a serious impeachment effort after November. But if impeachment is about guarding the public from officials dangerously unfit to wield power, “broke the law to pay off a mistress” has to be pretty far down the list behind, say, “makes off the cuff threats of nuclear annihilation.” That any impeachment inquiry will likely spend more time parsing the intricacies of federal election law than examining the president’s public conduct is yet another reason to rue the “Overcriminalization of Impeachment.”

This morning, as anticipated, the Trump administration broadened the scope of its punitive tariffs on imports from China. The list of products subject to 25 percent duties increased from 818 to 1,097 harmonized tariff schedule (HTS) subheadings. Last year, the value of these imports from China amounted to roughly $50 billion, so the tax incidence (ceteris paribus), for the sake of the argument, will be roughly $12.5 billion. 

As expected, Beijing retaliated in kind, assessing similar duties on a commensurate value of U.S. exports, which is certain to cause revenues to fall for U.S. producers of the industrial goods and agricultural products subject to those retaliatory tariffs. But let’s not forget the adverse impact of our own tariffs on our own manufacturers, farmers, construction firms, transportation providers, miners, wholesalers, retailers, and just about every other sector of the U.S. economy.

About half the value of U.S. imports consists of intermediate goods (raw materials, industrial inputs, machine parts, etc.) and capital equipment. These are the purchases of U.S. businesses, not households. The vast majority of the Chinese products on the tariff list fit this description. They are nearly all inputs to U.S. production. By hitting these products with tariffs at the border, the Trump administration is, in essence, imposing a tax on U.S. producers. Trump is raising the costs of production in the United States in sector after sector.

How significant is a roughly $12.5 billion tax in a $19 trillion economy? Well, not especially significant when put in that context. But that context masks the burdens directly imposed on the companies that rely on these inputs and indirectly imposed on their workers, vendors, suppliers, and downstream customers.

The Input-Output tables produced by the U.S. Bureau of Economic Analysis reveal—among other things—information about the relationships between industries in the United States. The “Use” tables map the output of all industries to their uses by other industries as inputs, as well as by end users.

The most recent “detailed” tables present the U.S. economy in 2007. The value of total commodity output at the time was $26.2 trillion, of which $14.5 trillion was consumed for end use and $11.7 trillion was consumed as intermediate inputs to further production. That $11.7 trillion of output from each of 389 industries (defined at the 6-digit NAICS level) is mapped to the input of each of the other 388 industries. In other words, $11.7 trillion of commodity output from 389 industries is simultaneously depicted as $11.7 trillion of intermediate inputs to 389 industries. Although the values of that industry-specific output and input certainly have changed over 10 years, it is not unreasonable to assume a roughly similar composition of input use on a percentage basis. (Sure, production processes change and, consequently, the inputs demanded change too. But the 2007 table provides the best information available and it should produce some useful results.)

Trump’s tariffs apply to 1,097 products as defined by the Harmonized Tariff Schedule’s 8-digit subheadings. Those 1,097 HTS numbers map to 102 (of 389) 6-digit NAICS codes on the BEA’s Input-Output table. By aggregating the value of those 102 “tariffed” NAICS codes and taking that sum as a percentage of the total intermediate goods consumed, we can get a rough estimate of the burden of the tariffs on each of the 389 industries. (Note: In most cases, the estimate is likely to be higher because many NAICS-6 codes include more HTS-8 codes than are subject to the tariffs.)
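For readers who want to replicate the exercise, here is a rough sketch of the aggregation described above. It assumes the BEA Use table has already been reshaped into long form (one row per commodity/consuming-industry pair); the file and column names are placeholders, not BEA’s actual formats:

```python
import pandas as pd

# use_long: one row per (commodity, consuming industry) pair from the BEA Use table,
# already reshaped to long form; file and column names here are placeholders.
use_long = pd.read_csv("bea_use_2007_long.csv")               # columns: commodity, industry, value
tariffed = set(pd.read_csv("tariffed_naics6.csv")["naics6"])  # the 102 NAICS-6 codes hit by tariffs

# Total intermediate inputs consumed by each industry...
total_inputs = use_long.groupby("industry")["value"].sum()

# ...and the portion of those inputs coming from tariffed commodity codes.
tariffed_inputs = (
    use_long[use_long["commodity"].isin(tariffed)]
    .groupby("industry")["value"]
    .sum()
)

# Share of each industry's intermediate inputs now subject to the tariffs.
exposure = (tariffed_inputs / total_inputs).fillna(0).sort_values(ascending=False)
print(exposure.head(25))  # roughly the kind of ranking reported in Table 2
```

Summing the tariffed NAICS-6 values and dividing by each industry’s total intermediate purchases is, in essence, what produces the exposure shares reported in Tables 1 and 2.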

Table 1 ($ millions)

As the first line in Table 1 shows, the total value of intermediate goods consumed (in 2007) was $11.7 trillion; the 2007 value of the 102 NAICS codes that include HTS numbers hit with Trump’s tariffs was $1.8 trillion; and the percentage of intermediate goods affected is 15.1%. That’s the average.

The subsequent lines in the table are presented in descending order of impact by sector (2-digit NAICS).  So, the transportation sector consumed $389 billion of intermediate goods in 2007 and $121 billion (or 31.1%) of that consumption is now subject to tariffs. The manufacturing sector consumed $3.5 trillion (almost 30% of the total) of intermediate goods in 2007 and $892 billion (or 25.7%) is now subject to tariffs.

Among those likely to be most burdened by Trump’s tariffs are industries within the manufacturing sector. In fact, every industry with more than 50 percent of its intermediate inputs subject to the tariffs is in the manufacturing sector. Table 2 shows the 25 most exposed industries (at the NAICS-6 level)—those with the greatest percentage of intermediate inputs subject to the tariffs.

Table 2 ($ millions)

 

Although this analysis doesn’t attempt to get at the actual cost increases that many industries will have to endure, it reinforces and makes clear the fact that tariffs are always about politicians bestowing favors on the few at the expense of many other industries.

Rep. Duncan Hunter is not pleased with the Cato Institute’s efforts to repeal the Jones Act. Taking notice of a recent op-ed I penned criticizing the California congressman’s support of this costly law, Hunter took to the pages of the same newspaper last weekend to defend his stance. It’s worth reviewing the piece in full, as it recycles several arguments typically offered in support of the Jones Act—and exposes some glaring weaknesses.

Hunter begins his defense of the Jones Act by disputing accusations that the law negatively impacts Puerto Rico’s economy:

Like many opponents of the Jones Act, the CATO Institute attempts to conflate this 100-year old law with the struggles of Puerto Rico’s economy. They repeat the same tired argument that the Jones Act is responsible for high prices and economic instability, going so far as to make the ridiculous implication that the Jones Act adds $5 to the cost of a pint of ice cream.

A recent economic study disputed these price discrepancies but if concerns remain, it is important to recognize that Puerto Ricans have other options. Most of the ships that call on Puerto Rico are foreign flagged and current law allows them to deliver as many goods from foreign ports as Puerto Ricans can consume. A 2013 Government Accountability Office Study failed to conclude that removing the Jones Act would benefit Puerto Rico and, in fact, acknowledged that the regulation provides a number of advantages. Other studies have found that the Virgin Islands — approximately 100 miles from Puerto Rico — has no Jones Act requirement, but has higher shipping prices than Puerto Rico from the mainland.

There’s a lot to unpack here, but let’s begin by noting that the “recent economic study” Hunter refers to was funded by a pro-Jones Act special interest group with a questionable methodological approach. Pointing out that Puerto Ricans have options for obtaining needed goods that are not subject to the Jones Act, meanwhile, is essentially telling them to eat cake. The rest of the United States is, by far, Puerto Rico’s largest trading partner. Simply doing business with other countries instead of the world’s largest economy with which Puerto Rico shares deep political and cultural links is oftentimes not a feasible option.

But that doesn’t mean Puerto Ricans don’t try to hunt for cheaper alternatives. The 2013 GAO report cited by Hunter highlights numerous examples of this dynamic, including farmers who purchase feed from Canada instead of New Jersey due to lower shipping costs and the sourcing of jet fuel from Venezuela rather than domestically for the same reason.  

As for the fact that shipping rates are higher to the U.S. Virgin Islands than to Puerto Rico, this is an apples-to-oranges comparison. The U.S. Virgin Islands have a population and economy roughly 30 times smaller than Puerto Rico’s. With that smaller size come smaller trade flows, smaller economies of scale, and reduced efficiency in servicing the market, all of which is reflected in higher transport costs.

Hunter then pivots from discounting the Jones Act’s economic cost to Puerto Rico to highlighting its alleged national security benefits:

In a time of war, without the Jones Act, quickly rebuilding our shipyard industrial base would be next to impossible and training the American merchant mariners to man ships would take precious time we will not have. Instead, we would have to rely on shipyards overseas to supply our ships and we would likely have to pay foreign mariners to operate those ships. Is this really a position in the best national security interest of our nation?

We can have the strongest military in the world, but without the ships and U.S. merchant mariners to bring supplies to service members overseas, our capabilities would be severely limited, a position acknowledged by Gen. Darren McDew, Commander of U.S. Transportation Command.

The reality is that the Jones Act has presided over the steady decline of the U.S. shipyard industrial base. Since the early 1980s the United States has lost more than 300 shipyards. It’s a statistic Hunter should be familiar with, given that it was mentioned on several occasions during a 2013 congressional hearing that he chaired. Furthermore, claims that the United States would be forced to rely on foreign shipyards without the Jones Act overlook the existing reliance on foreign components and know-how to produce the few and vastly overpriced commercial ships that emerge from American shipyards.

Dependence on foreigners to support the U.S. military, meanwhile, is a description of the status quo. When the United States found itself needing to quickly transport vast amounts of equipment and war matériel to Saudi Arabia following Iraq’s invasion of Kuwait, foreign-flagged and crewed ships played a key role. According to the U.S. Transportation Command’s official history of its role in that conflict, 26.6 percent of total unit cargo was carried by foreign-flagged vessels.

While Jones Act supporters claim that the law assures the United States of ready access to ships and merchant mariners in times of war, the Pentagon found its transport capabilities so strained during that conflict that it twice requested the use of a transport ship from the Soviet Union, and was rejected both times. On the mariner front, the United States was forced to press two octogenarians and one 92-year-old sailor into service. Turning to the present day, the U.S. Maritime Administration published a 2017 report which, in its own words, “documents a deficit of mariners with unlimited credentials to meet the national security and force projection needs.”

Hunter continues in a similar vein:

The Jones Act also improves our safety and security. Rather than having unvetted foreign workers sail ships on our inland waterways, the Jones Act mitigates safety risks by ensuring that vessels are operated by U.S. mariners only.

Pure demagoguery. Foreign mariners already operate in U.S. waters on a daily basis and present no established threat. As a 2011 GAO report noted, maritime crews, which are overwhelmingly foreign, already make millions of entries into U.S. ports each year, yet there has never been a reported terrorist attack involving one of these seafarers. What reason is there to think these same foreign mariners would suddenly become a menace if permitted to operate on inland waters?

Furthermore, Hunter is factually wrong. Foreign mariners are already allowed to work on Jones Act vessels, as the law sets the minimum share of American crew at 75 percent, not 100 percent. As for safety, let’s note that it was a Jones Act ship with an American captain, the Exxon Valdez, that was responsible for one of the worst environmental disasters in U.S. history.

Congressman Hunter concludes with some comments about the Jones Act’s economic impact:

The Jones Act also provides significant economic benefits, including here in Southern California. The thousands of Jones Act vessels support nearly 500,000 domestic jobs with nearly $100 billion in economic impact.

The fact is, there are cheaper places to build ships than the United States, and I am sure that there are plenty of Chinese shipyards with cheap Chinese steel eager to undercut ones here at home. I am also sure we could find cheaper foreign workers to operate the ships currently sailed by Americans.

If building the cheapest ship manned by the lowest paid workers is the end goal, then by all means, let’s get rid of the Jones Act. If you are like me, and you value good paying U.S. jobs in American Shipyards and the Americans who can man those ships in times of conflict, then the Jones Act is clearly worth protecting.

By acknowledging that Americans could buy ships at lower cost abroad—as much as eight times cheaper—Hunter essentially concedes that the law imposes a significant economic cost. Paying vastly more to obtain the same good is the path to impoverishment, not prosperity. The secret to economic growth and an improved standard of living lies in increased efficiency and doing more with less. By blocking Americans from lower-cost alternatives, the Jones Act does the opposite.

The national security advantages we allegedly receive in exchange for the Jones Act’s price tag, meanwhile, are a figment of the imagination. If anything, the law makes us less secure, not more.

The Jones Act isn’t working. Its promised benefits have failed to materialize while the law imposes significant burdens on the U.S. economy. Its repeal is long overdue.

On August 22, Food and Drug Administration Commissioner Scott Gottlieb issued a press release announcing that the FDA plans to contract with the National Academies of Sciences, Engineering, and Medicine (NASEM) to develop evidence-based guidelines for the appropriate prescribing of opioids for acute and post-surgical pain. The press release stated:

The primary scope of this work is to understand what evidence is needed to ensure that all current and future clinical practice guidelines for opioid analgesic prescribing are sufficient, and what research is needed to generate that evidence in a practical and feasible manner.

The FDA will ask NASEM to consult a “broad range of stakeholders” to contribute expert knowledge and opinions regarding existing guidelines and to point out emerging evidence and public policy concerns related to the prescribing of opioids, drawing on the expertise within the various medical specialties.

Recognizing the work of the Centers for Disease Control and Prevention for having “taken an initial step in developing federal guidelines,” Commissioner Gottlieb diplomatically stated that the FDA initiative intends to “build on that work by generating evidence-based guidelines where needed.” It would differ from the CDC’s endeavor because it would be “indication-specific” and based on “prospectively gathered evidence drawn from evaluations of clinical practice and the treatment of pain.”

The CDC guidelines for prescribing opioids, released in early 2016 and updated in 2017, have been criticized by addiction and pain medicine specialists for not being evidence-based. Unfortunately, these guidelines have been used as the basis for many new prescribing regulations instituted at the state level and proposed at the federal level. The American Medical Association and other medical specialty organizations have spoken out against proposed federal prescription limits that are based upon an inaccurate interpretation of the flawed CDC guidelines.

In May, Commissioner Gottlieb mentioned in a blog post that he was aware of these criticisms, as well as complaints from patients and patient-advocacy groups, and was interested in developing more “evidence-based information” on the matter of opioids and pain management.

Now it appears he is taking the next step. While the press release language was diplomatic and avoided any hint of disrespect for the CDC’s efforts, it is difficult not to infer that the Commissioner agrees with many who have been criticizing the CDC guidelines over the past couple of years.

 
