9/26/2019: Chief Justice Warren Burger resigns.

from Latest – Reason.com https://ift.tt/3hZN1KI
via IFTTT
Lessons On Inflation From The Past
Tyler Durden
Sat, 09/26/2020 – 07:00
Authored by Alasdair Macleod via GoldMoney.com,
This article examines two inflationary experiences in the past in an attempt to predict the likely outcome of today’s monetary policies.
The German hyperinflation of 1923 demonstrated that it took surprisingly little monetary inflation to collapse the purchasing power of the paper mark. This is relevant to the fate of the “whatever it takes” inflationary policies of today’s governments and their central banks.
The management of John Law’s Mississippi bubble, in which he used paper money to rig the market, is precisely what central bank policy aims to achieve today. By binding the fate of the currency to that of financial assets, as John Law proved, it is the currency that is destroyed.
At the outset, I shall make a point about the relevance of the chart below, a screengrab from Constantino Bresciani-Turroni’s The Economics of Inflation, which has been frequently reproduced and will be familiar to many who have read about Germany’s post-First World War inflation.
Looking at the progress of the collapse of the paper mark from its parity with the gold mark, we can take a punt on where the dollar might be today on this scale. The dollar has lost 98.2% of its purchasing power since the failure of the London gold pool in the late 1960s. That puts the dollar at 56 on the chart, which is approximately the equivalent of Germany’s paper mark valuation relative to gold in the first half of 1922. If it follows the same course as the paper mark, in five or six months’ time it will be at 100, and in ten or twelve months at about 12,000. Instead of the paper mark’s original pre-1914 parity with the gold mark, the dollar started at $35 to the ounce, so the gold price in dollars would be $1,960, $3,500 and $42,000 respectively. The final rate at which the German inflation was stopped on 20 November 1923, when the paper mark was fixed to the rentenmark at a trillion to one, would be the equivalent today of $35 trillion to the ounce.
Playing around with figures like these is not a replacement for sound reasoning, but it does impart an interesting perspective. A better understanding of the possible demise of the unbacked dollar is not to think of the numbers of dollars per ounce of gold rising or gold potentially hitting $42,000 within a year, a seemingly ridiculous number, but to think of gold as being broadly stable while the dollar loses its purchasing power. The presentation of an impossibly steep and accelerating uptrend is less believable than a collapsing one. Furthermore, the commonality of the paper mark and the dollar is that they were and are unbacked state-issued currencies liable to the same influences, a fact the consequences of which are becoming increasingly apparent.
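The arithmetic behind these gold-price figures can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation only; the 98.2% loss figure and the projected prices are the author’s, and the single assumption made here is that the implied gold price is the $35 starting parity divided by the fraction of purchasing power remaining.

```python
# Implied dollar price of gold from a cumulative loss of the dollar's
# purchasing power, taking the pre-1971 official parity of $35/oz as the
# starting point. The loss percentages are the article's, not derived here.

PARITY = 35.0  # dollars per ounce of gold at the start of the period

def implied_gold_price(loss_fraction: float) -> float:
    """Gold price if the dollar retains (1 - loss_fraction) of its
    purchasing power measured against gold."""
    return PARITY / (1.0 - loss_fraction)

# The article's 98.2% loss since the failure of the London gold pool:
print(round(implied_gold_price(0.982)))  # ~1944, close to the ~$1,960 cited

# Working backwards from the article's projected prices:
for price in (3_500.0, 42_000.0, 35e12):
    loss = 1.0 - PARITY / price
    print(f"${price:,.0f}/oz implies a {loss:.10%} loss of purchasing power")
```

Note how the framing in the text falls out of the formula: each successive price rise corresponds to an ever smaller sliver of remaining purchasing power, which is why thinking of gold as stable and the dollar as shrinking is the more natural reading.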
For the paper mark it all started in 1905, when a German economist and leader of the Chartalist movement, Georg Knapp, published a book whose title translates as The State Theory of Money. Thus encouraged, under the direction of Bismarck the Prussian administration financed the military build-up to the war to end all wars by utilising the state’s seigniorage. And when Germany lost, any thoughts of raiding the wealth of the vanquished came to nought. Instead, it was Germany that faced reparations and a post-war crisis. The answer was to print money, just as it is for the Fed in today’s covid crisis. Monetary inflation became the principal source of government finance, just as it is now in America and elsewhere.
There is hardly an economist today who does not condemn the Reichsbank for its inflationary policies. Yet they are supportive of similar monetary policies at the Fed, the European Central Bank, the Bank of Japan and the Bank of England. We should compare the stewardship of Rudolf Havenstein at the Reichsbank with that of Jay Powell, who, after reducing interest rates the previous week, issued an FOMC statement on 23 March promising an inflationary policy of “whatever it takes”. And Rishi Sunak, the British Chancellor, used the phrase multiple times in his emergency budget.
But there is a difference. Today, alternatives to inflationism are never discussed among policy makers, who are like a blind cult believing entirely, with only minor variations, that monetary inflation is the cure for all economic ills. At least in Germany the actions of the government were the subject of wider debate, both within Germany and abroad, even though the answers were mostly ill-informed.
Part of the problem was that the quantity theory of money was dismissed through a confusion between cause and effect. As Bresciani-Turroni put it, a great number of writers and German politicians thought that government deficits and paper inflation were not the cause but the consequence of the external depreciation of the mark. Karl Helfferich, a financier, politician and one of the leading German economists of the time, put it this way:
“The increase of the circulation has not preceded the rise of prices and the depreciation of the exchange, but it followed slowly and at great distance. The circulation increased from May 1921 to the end of January 1923 by 23 times; it is not possible that this increase had caused the rise in the prices of imported goods and of the dollar, which in that period increased by 344 times.”
It is a valid and important point, but not in the way Helfferich thought. The disparity between the increase in the money quantity and the increase in the general level of prices should be noted by observers today. Crucially, it did not require hyperinflation of the money supply to cause a hyperinflation of prices, a point we address later.
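Helfferich’s two multiples make the point quantitatively. The following is an illustrative sketch of my own, not the author’s; the 23-fold and 344-fold figures are taken directly from the quote above.

```python
# Helfferich's figures, May 1921 to end-January 1923: the note circulation
# grew 23-fold while the dollar exchange rate (and import prices) rose 344-fold.
money_growth = 23.0
price_growth = 344.0

# Real purchasing power of the whole note circulation, relative to May 1921:
real_balances = money_growth / price_growth
print(f"{real_balances:.1%}")  # 6.7% -- the real money stock shrank ~15-fold

# The gap between the two multiples is the collapse in the public's
# willingness to hold marks at all, not the printing press alone.
```

In other words, prices outran the note issue roughly fifteen-fold, which is exactly the disparity a modern observer should expect if a currency’s users begin to abandon it.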
As well as dealing with the post-war economy and the capital dislocation that needed to be corrected, there was the burden of reparations. Many blamed the collapse of the paper mark on the latter, which is an inadequate explanation, when the Austrian crown, the Hungarian crown, the Russian rouble and the Polish mark all collapsed at roughly the same time.
Having resorted to monetary inflation as the means of marginal finance, the government soon found it had become the principal source of its revenue. The German authorities then observed a dislocation between the increase in the quantity of money and the effect on its purchasing power, as described by Helfferich. It was taken as evidence against the quantity theory, as expounded by David Ricardo a century before, and upon which Peel’s Bank Charter Act of 1844 in England was based. Clearly, the dismissal of the quantity theory paved the way for more inflationary financing in 1920s Germany, in the manner of today’s monetary planning. It led to the observation that the money supply was insufficient for an economy faced with rapidly escalating prices for imported goods.
The disparity between increases in the money supply in Germany and the effect on the paper mark’s purchasing power was so great that the accuracy of the underlying numbers does not matter. But today, while we can presumably rely on monetary statistics being reasonably accurate, the statistics that reflect the effect on prices are not. Today’s suppression of increases in the general price level simply disqualifies any statistical analysis, and in that sense, Helfferich’s observation is a more honest appraisal than those of today’s monetary planners.
On the surface, his deduction appeared to have some merit. He goes on to say,
“The depreciation of the German mark in terms of foreign currencies was caused by the excessive burdens thrust on to Germany and by the policy of violence adopted by France; the increase of the prices of all imported goods was caused by the depreciation of the exchanges; then followed the general increase of internal prices and of wages, the increased need for means of circulation on the part of the public and of the State, greater demands on the Reichsbank by private business and the State and the increase of the paper mark issues. Contrary to the widely held conception, not inflation but the depreciation of the mark was the beginning of this chain of cause and effect; inflation is not the cause of the increase of prices and of the depreciation of the mark; but the depreciation of the mark is the cause of the increase of prices and of the paper mark issues. The decomposition of the German monetary system has been the primary and decisive cause of the financial collapse.”
The starting point in this logic is it is never the government’s fault but always the fault of external factors and markets. And doubtless, as the dollar declines in the foreign exchanges over the coming months and commodity prices rise, we shall continue to see similar arguments embedded in future FOMC statements.
The error common to both is to misunderstand the underlying subjectivity of money. Money takes its value from the marginal value placed upon it relative to owning goods. If money is widely regarded as sound, an economising man is happy to hold a reserve of it, only exchanging it for goods and services when they are needed. This is the most important quality of metallic money, to which people have always returned when government money fails.
A further benefit, which state currencies lack, is that gold and silver as money are accepted everywhere, having the same values in New York, London, and Mumbai. With the exception of cross-border trade, investment, and perhaps longer-term strategic considerations, government currencies are generally restricted to national boundaries. Paper currencies are therefore vulnerable to changes in demand in the foreign exchanges in a way gold and silver are not; if the foreigners don’t like your currency, they will reduce their exposure by selling it, irrespective of fundamental considerations.
In a currency collapse, the foreign exchanges are often the first to be blamed, as a press cutting from Germany towards the end of 1922 illustrates:
“Since the summer of 1921 the foreign exchange rate has lost all connection with the internal inflation. The increase of the floating debt, which represents the creation by the State of new purchasing-power, follows at some distance the depreciation of the mark. Furthermore, the level of internal prices is not determined by the paper inflation or credit inflation, but exclusively by the depreciation of the mark in terms of foreign currencies. To tell the truth, the astonishing thing is not the great quantity but the small quantity of money which circulates in Germany, a quantity extraordinarily small from a relative point of view; even more surprising is it that the floating debt has not increased much more rapidly.”
Blaming a falling currency on foreign influences is the oldest excuse in the fiat book, but generally, foreigners who do not have much attachment to a national currency are only the first to sell. Initially, domestic users merely notice that prices have generally risen and that their income and savings buy less. It becomes a cause for complaint rather than of reasoned assessment, which is the logic employed in the press cutting above. And despite the evidence that it is the currency losing purchasing power rather than prices rising, the purchasing power can fall substantially before a currency’s users abandon it altogether.
Given upcoming events, we can see a similar trend for today’s paper money, particularly when represented by the American dollar. The first covid wave was assumed to be a one-off, hitting the American economy but to be followed by a rapid return to normal — the V-shaped recovery. Everywhere the official story was the same, that following lockdowns the economy, wherever it was, would return to normality. But it drove the US budget deficit to over $3.3 trillion in the fiscal year just ending, up from a previously forecast trillion or so. The Federal deficit is already one hundred per cent of Federal tax revenues.
Now we face a second covid wave, which will require more money-printing. The US Government budget deficit in the next fiscal year will again exceed revenues by a substantial margin. Since last March, it has been in the position the German government faced in the early 1920s: monetary inflation has become the dominant source of government funding, ahead of tax revenue.
The slide in global cross-border trade, which is the consequence of the imposition of trade tariffs between America and China, comes at the end of a decade-long period of bank credit expansion, replicating the fragile position in America at the end of the roaring twenties. The stock market and economic collapses that followed had limited inflationary effects at the price level only due to a working gold standard; but even that could not withstand the political consequences of the depression, leading to a dollar devaluation in January 1934. This time, there is no check for the dollar, which is doubly afflicted by coronavirus lockdowns.
In Germany, the collapse of the paper mark ended by being stabilised at the rate of a trillion paper marks to one gold mark on 20 November 1923, the equivalent of 4.2 trillion to the US dollar. The paper mark was then replaced by a new unit, the rentenmark, which was simply given the value of the gold mark. This arrangement only became legal on 11 October 1924. The success of the stabilisation, despite an inflation of the rentenmark — the quantity increasing from 501 million on 30 November 1923 to 1,803 million by the following July — has confused economists ever since.
Students of the Austrian school, and particularly of the writings of Ludwig von Mises, should deduce that after the final flight out of money into goods, the emergence of a new money requires its users to accumulate a reserve of it. All that was required was a growing acceptance that the rentenmark would stick. The increase in cash and savings balances in the economy absorbed the increased issuance of the rentenmark, with the result that consumer prices remained broadly stable.
If the stabilisation arrangement had been introduced before foreigners, businesses and the wider public had discarded the paper mark entirely, it would have failed. Those who think a German-style inflationary collapse today can be avoided by an early currency reset with a different form of fiat should take note.
The collapse of the paper mark is not the sole example of how a government currency loses its utility. The advantage of comparing it with today is that a substantial cache of books, records and statistics exists on the subject, prompting economic historians to use it as a template for all the other hyperinflations of fiat money recorded since.
The economic history of John Law’s experiment in France is not so blessed in this regard. Exactly 300 years ago his Mississippi bubble deflated, taking his currency, the livre, down with it. But to understand its relevance to the situation today, we must first delve into the facts behind his scheme.
The death of Louis XIV in 1715 left France’s state finances (which were the royal finances) insolvent. The royal debts were three billion livres, annual income 145 million, and expenditure 142 million. That meant only three million livres were available to pay the 220 million interest on the debt, and consequently the debt traded at a discount of as much as 80% of face value.
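Laid out as simple arithmetic, the insolvency is stark. This is a sketch for illustration only; all figures are those given in the paragraph above, in millions of livres.

```python
# France's royal finances at the death of Louis XIV, in millions of livres
# (figures as stated in the text).
income      = 145
expenditure = 142
interest    = 220   # annual interest due on roughly 3 billion livres of debt

surplus = income - expenditure
print(surplus)                      # 3 -- all that remained to service the debt
print(f"{surplus / interest:.1%}")  # 1.4% of the interest bill actually coverable
```

With barely one livre in seventy of the interest payable, a market discount of up to 80% of face value on the royal debt follows naturally.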
Following Louis XIV’s death, the Duke of Orleans had been appointed Regent to the seven-year-old Louis XV, and so had to find a solution to the royal finances. The earlier attempt in 1713 had been the often tried and repeatedly failed expedient of recoining the currency, depreciating it by one-fifth. The result was as one might expect: the short-term gain in state revenue came at the expense of the French economy, by taxing it 20%. Furthermore, the Controller-General of Finances foolishly announced the intention of further debasements of the coinage with a view to raising funds. This bizarre plan was announced in advance as an attempt to somehow stimulate the economy, but the effect was to increase hoarding of the existing coinage instead.
At about this time, John Law presented himself at court and offered his considered solution to the Regent. He diagnosed France’s problem as there being insufficient money in circulation, restricted by it being only gold and silver. He recommended the addition of a paper currency, such as that in Britain and Holland, and its use to extend credit.
Banknotes did not previously exist in France, all payments being made in specie, and Law persuaded the Regent of the circulatory benefits of paper money. He requested the Regent’s permission to establish a bank which would manage the royal revenues and issue banknotes backed by them as well as notes secured on property. These notes could be used as a loan from the bank to the king at 3% interest instead of the 7½% currently being paid on billets d’etat.
On 5 May 1716 he gained permission to establish the Banque Générale as a private bank and to issue banknotes. Law succeeded in persuading the public to swap specie for his banknotes. He was so successful that after only eleven months, in April 1717, it was decreed that taxes and revenues of the state could be paid in banknotes, of which Law was the only issuer.
Law could now capitalise his bank. Besides his own money, this was done mostly with billets d’etat, in the books at their face value but obtained at a discount of 70% or so. He used public anticipation of future currency debasement to encourage the public to swap metallic money for his notes, which he guaranteed were repayable in coins that had the silver content at the time of the note issue. Law’s banknotes became an escape route for the general public from further debasement of silver coins.
The banknotes rose to a fifteen per cent nominal premium over coins within a year. The bank was exempt from taxes, and by decree foreigners were guaranteed their deposits in the case of war. The bank could open deposit accounts, loan money, arrange for transfers between accounts, discount bills and write letters of credit. Law’s banknotes could be used to settle taxes. There was no limitation placed on the total number of banknotes issued.
Money that had been hoarded for fear of further debasement was liberated by the premium on Law’s banknotes, and the improved circulation of money rapidly benefited the economy. Other private banks and moneylenders used Law’s banknotes as the basis of extending credit. This success meant his credibility with the Regent, the French establishment, and the commercial community was secured.
The use of his banknotes to settle taxes gave the bank the status of a modern note-issuing central bank. The expansion of circulating money stimulated trade, particularly given the banknotes’ convenience compared with using coin. It is worth noting that the earliest stages of monetary inflation usually produce the most beneficial effects, and this combined with Law’s apparent financial and economic expertise, particularly measured against the ineptitude of the Controller-General of Finances, gave the economy a much-needed boost.
It is worth noting that at this stage, there was no material inflation of the currency, banknotes being issued only against coins. However (and this appears to have generally escaped economic historians) it was clear that a loan business was facilitated on the back of Law’s paper money, which inflated the quantity of bank credit in the economy.
Law could now turn his attention to raising asset prices to pay down the royal debts, to enhance the public’s riches, and thereby his own wealth and that of his bank.
The Regent was understandably impressed by Banque Générale’s apparent success at issuing paper currency and rejuvenating the economy. The bank was being run on prudent lines, with banknotes being exchanged only for specie, and the quantity of what today would be called narrow money had not expanded materially beyond the release of hoarded specie. But Law had a problem: the note issue and the fact the bank had been capitalised on a mixture of partial subscriptions and billets d’etats at face value meant the bank had insufficient capital and profits to achieve its ultimate objective, which was to reduce the royal debts and the interest rates that applied to them.
Consequently, Law developed a plan to increase the bank’s assets as well as those under its indirect control. In August 1717, Law had requested of the Regent and was granted a trading and tax-raising monopoly over the French territory of Louisiana and the other French dependencies accessed by the Mississippi River, the existing trading lease having lapsed. A major attraction was supposed to be precious metals as well as the tobacco trade.
The Mississippi venture’s corporate title was Compagnie de la Louisiane ou d’Occident, but it has ever since been commonly referred to as the Mississippi venture. For nearly two years, Law kept the project on hold while he established his bank. The shares languished at a discount to their nominal price of 500 livres, and what was needed was a scheme of arrangement to beef up both the bank and the company.
As a first step, in the summer of 1719 he acquired three other companies to merge with the Mississippi venture. These had exclusive trading rights to China, the East Indies and Africa, which effectively gave Law’s Mississippi company a monopoly on all France’s foreign trade. To pay off these companies’ debts and to build the ships required for transport, Law proposed a share issue of 50,000 shares at 500 livres per share, 10% payable on application. By the time legal permissions were granted, the shares stood at 650 livres, making the new shares worth three times their subscription price in their partly paid form.
Law’s earlier success with his banknote issue, and the contribution made to improving the French economy, coupled with his ability to enhance the share price by issuing bank notes, were a guarantee that his scheme would be spectacularly profitable for anyone lucky enough to have a subscription accepted.
The bank was re-authorised as a public institution and renamed Banque Royale in December 1718. At the same time, the Regent authorised the further issue of up to a billion livres of notes, which was achieved by the end of 1719. While it had been the Banque Générale, notes had only been issued in return for specie to the extent of 60 million livres, but this new inflationary issue was entirely different. While it is impossible at this distance to track the course of this money forensically, we can be certain that it was used to manage the share price of the Mississippi venture, and it fuelled much of the public’s panic buying of shares that year.
But it was not only the printing of money to push the share price that fuelled the bubble. Law’s skills as a promoter took its inflation to a new level, with further issues of 50,000 shares approved in the summer of 1719 and executed as rights issues that autumn. Existing shareholders were offered the opportunity to subscribe for one share for every four old shares held, to be partly paid with an initial payment of 50 livres, the next payment deferred for over a month. These could be sold for an immediate profit, while providing a low-price entry point for new investors.
The expansion of the banknote issue without an offsetting acquisition of specie was used by Law to assemble and finance a total monopoly of France’s foreign trade. As well as this monetary expansion, we can be sure that private banks and moneylenders used it as a base to expand credit. We know this to be the case from court documents in London when Richard Cantillon in 1720 successfully sued English clients in the Court of Exchequer for £50,000 owed to him (about £18 million today), despite having already sold the Mississippi shares as soon as they were deposited as collateral.
It seems obvious to us that to give to one man both the monopoly of the note issue and monopolies on trade, and then for him to use the notes to create wealth out of thin air is extraordinarily dangerous. It seems equally obvious that such an arrangement was certain to collapse when the excitement died down and investors on balance sought to encash their profits.
It seems less obvious to us today that the principal elements of Law’s monopolies exist in modern government finances, which use paper money to inflate assets, providing their electorates with the illusion of wealth. The difference is not in the methods employed, but in the gradualness of today’s asset inflation, and in the claim by the state that it is acting in the public interest, rather than one individual making the same claim on the state’s behalf.
Meanwhile, the Mississippi venture’s share price had continued rising, and by the end of 1719 it stood at 10,000 livres. Increasing pressure from share sales by people seeking to take profits had to be discouraged. The announcement of a dividend of 200 livres per share was undoubtedly made with that in mind, to be paid, as in any Ponzi scheme, not out of earnings but out of capital subscriptions. The price finally peaked at 11,000 livres on 8 January 1720.
By late 1719, Law was finding it increasingly difficult to sustain the bubble. The banknote issues continued. In late February 1720, the Mississippi Company and the Banque Royale merged. Thereafter the shares began their precipitous fall, and by May Law had lost his position as Controller-General and was demoted. By the end of October that year the shares had fallen to 3,200 livres, and throughout the year a large portion of them had been subject to further calls that went unpaid.
The year 1719 saw monetary inflation take off, directly fuelling asset prices. The decline of the Mississippi share price the following year was not as sharp as might otherwise have been expected, but against that must be set the fall in the paper livre’s purchasing power, particularly in the later months. The exchange rate against English sterling fell from nine old pence to 2½ pence by September 1720, most of that fall occurring after April as the price effect of the previous year’s inflation worked its way through into the exchange rates.
In the last three months of 1720 there was no sterling price quoted for paper livres, indicating they had become worthless.
John Law’s ramping of a single financial asset by monetary inflation correlates with the Fed’s monetary policy today. The material differences are the suppression of interest rates, and therefore of the market costs of government funding, and the far wider range of financial assets being inflated on the back of government bonds. Maintaining financial asset prices is not only Fed policy; it is increasingly realised that it is a policy which cannot be allowed to fail.
To the extent that other central banks are suppressing yields on their government bonds, this policy extends beyond America. This time, the John Law strategy has gone truly global, with the consequence that the future of fiat currencies is tied to the perpetuation of current financial bubbles.
In this regard it is interesting to note that the most astute banker of John Law’s time, Richard Cantillon, never played Law’s game on the bull tack. He made his first fortune extending credit to others for the purchase of John Law’s stock, which he promptly sold once deposited as collateral. Subsequently, he sued those who refused to repay the loans, thereby getting two bites of the cherry. His second fortune came from shorting Law’s scheme in 1719, not by selling shares in the scheme, but by selling the currency for foreign exchange. In other words, he calculated that when the scheme failed, it would be the currency that collapsed more than the shares. He was right.
The two empirical models by which we can judge the collapse of a fiat currency offer food for thought in our current situation.
The policy of deliberately rigging financial markets replicates that of John Law’s scheme, suggesting the collapse of currencies will be tightly bound to the end of the government bond bubble. Today’s bubbles in financial assets are sustained by equally artificial means, even more transparent than Law’s market rigging — quantitative easing, suppressed and negative interest rates etc., to which we can add the manipulation of price inflation statistics.
The German experience in the early 1920s showed how it did not take as much monetary inflation as monetarists might think to collapse a currency. Karl Helfferich’s quote about the relationship between the 23 times increase in the money quantity while the number of paper marks to the dollar increased 344 times gives us an important perspective: it will not require a hyperinflation of the money supply to destroy paper currencies today.
A fundamental difference is that the greatest sinner, if not in scale then in likely effect, is the Fed in its puffery of the dollar, everyone else’s reserve currency. And unlike Germany a century ago and France three centuries ago, there is no foreign currency against which to measure the dollar’s decline, except perhaps in the short run, because all central banks follow similar inflationary policies with their fiat currencies.
In the past a suitable foreign currency was fully exchangeable into silver or gold, so the decline and collapse could be measured accordingly. Today it will be impossible for businesses to bypass a currency collapse by referencing prices to other currencies, since all are similarly fiat. Many businesses in Germany survived the paper mark collapse in this way, but their modern equivalents will not have that option.
The final collapse of a currency is always a flight out of government fiat currency into goods. That can be the only outcome from the continuation of current macroeconomic policies. But above all, it would be a mistake to think it cannot happen, or that it will be a long process giving us all plenty of time to plan. The final flight out of paper marks took approximately six months. Law’s scheme took slightly longer to destroy his livre. These should be our reference points.
via ZeroHedge News https://ift.tt/3cACvby Tyler Durden
In this month’s issue, we draw on decades of Reason journalism about policing and criminal justice to make practical suggestions about how to use the momentum of this summer’s tumultuous protests productively. Check out Damon Root on abolishing qualified immunity, Peter Suderman on busting the police unions, Jacob Sullum on ending the war on drugs, Sally Satel on rethinking crisis response, Zuri Davis on restricting asset forfeiture, C.J. Ciaramella on regulating use of force, Alec Ward on releasing body cam footage, Stephen Davies on defunding the police, and Nick Gillespie interviewing former Reasoner Radley Balko on police militarization.
“The practice of racial profiling grows from a trio of very tangible sources….The sources include the difficulty in policing victimless crimes in general and the resulting need for intrusive police techniques; the greater relevancy of this difficulty given the intensification of the drug war since the 1980s; and the additional incentive that asset forfeiture laws give police forces to seize money and property from suspects.”
Gene Callahan and William Anderson
“The Roots of Racial Profiling”
August/September 2001
On July 6, 2016, Philando Castile was fatally shot during a traffic stop in suburban Minnesota when a police officer freaked out over Castile’s legally carried concealed firearm. While activists and the general public reasonably focus on the senseless tragedy that day, that stop was at least the 46th time Castile had been pulled over in the previous 13 years. There is little reason to believe Castile was a particularly bad or dangerous driver. He was cited once for exceeding the posted speed limit and once for running a stop sign; three other stops were for more ambiguous moving violations, according to an NPR investigation published after his death. It seems, instead, that the local police used the myriad regulations at their disposal to repeatedly stop, investigate, fine, and sometimes arrest Castile for minor offenses. It is no exaggeration to say that Castile had been victimized by police many times before he was killed.
Police officers make millions of traffic and pedestrian stops in the United States every year. A very small number result in any police use of force, let alone lethal force. But officers understand that any on-duty interaction may become violent and are instructed to be prepared for that possibility at all times. To drive that message home during training, cadets watch graphic dashboard camera videos of traffic stops that result in shootouts and officer deaths. Thus, any involuntary contact between citizens and police officers carries an inherent danger for individuals on both sides of the encounter. Despite this risk, police departments incentivize their officers to make stops not only for traffic safety and crime deterrence but also for revenue generation and as a means of conducting investigations that would otherwise lack legal justification.
Undoubtedly, lawmakers have put too many crimes and civil violations on the books that can lead to police-initiated contact, a phenomenon broadly captured by the term overcriminalization. Most states and localities could purge many laws and regulations without any damage to public safety or security. But every day, police officers routinely use personal and institutional discretion to ignore countless violations that range from jaywalking to not using a turn signal to public consumption of drugs and alcohol. Thus, the determination of how often and under what circumstances to make traffic or pedestrian stops is ultimately one of policy, not one of law.
Although departments are prohibited from setting ticket or stop quotas for personnel, commanding officers set expectations for what a successful shift looks like on a typical day. For example, an officer assigned to general patrol, in which his role is to respond to 911 calls in his assigned zone, may not be expected to initiate many stops, particularly on shifts with a lot of calls. But if that officer is assigned to traffic duty on a road segment known for speeders and he reports only two vehicle stops and no tickets issued during an eight- or 10-hour shift, he will likely be questioned by his superior.
Beyond the individual incentives that an officer faces on any given day, police departments also can set an enforcement strategy in response to local conditions or political agendas. If a city or policing district sees a spike in gun violence, for example, political pressure will come down on the police brass to do something about it, which invariably trickles down to front-line officers. There are only so many police officers in a given department, and they can’t be everywhere at once, so the institution’s ability to reduce crime right away is naturally limited. But officers have a considerable amount of power at their disposal to produce numbers that show that they are “doing something” in response to a perceived crisis.
Aggressive policing techniques allow officers to confront individuals, question them, and perhaps search them for contraband such as drugs and guns. New York City’s stop-and-frisk program is the most notorious example of this style of policing. This program demonstrated to politicians that New York Police Department (NYPD) officers were in the streets discouraging crime with proactive tactics. Over the most active 10 years of the program, the NYPD recovered roughly 8,000 firearms by stopping, questioning, and frisking pedestrians. As in much of the rest of the country, violent crime in New York was trending downward during this period, so on its face, it may have seemed like an excellent anti-violence tactic.
What the NYPD didn’t highlight was that recovering those 8,000 guns required stopping roughly 4 million individuals, the vast majority of whom were black or Latino men—a firearm hit rate of 0.2 percent. Of course, officers also sometimes found drugs or people who had outstanding summonses for both petty and serious offenses. But even taking those cases into account, roughly 90 percent of the people the NYPD subjected to a stop and frisk were completely innocent of wrongdoing in the eyes of the law, according to the New York Civil Liberties Union.
Although New York’s program has been pared back as the result of a lawsuit and public pressure, the aggressive use of pedestrian and traffic stops to investigate crime in the general public remains commonplace for police departments around the country. Stops are a common tactic police use to respond to spates of violence and gun victimization. Sometimes the stops seem to lower crime, and sometimes they do not. But even when such programs seem to correlate with crime declines while they are operating, it bears noting that New York did not see a spike in crime after stop and frisk was scaled down there.
In 2015, the Los Angeles Police Department created its Metropolitan Division, which uses unmarked police vehicles to increase motor vehicle stops to search for drugs and guns in high-crime areas. According to the Los Angeles Times, nearly half of the motorists pulled over by Metro Division units are black, despite black people making up roughly 9 percent of the L.A. population. The Times dubbed this disparity “Stop-and-Frisk in a car.” Violent crime in those areas continued to increase until 2018.
In 2017, following an uptick in violent crime, the police department in Little Rock, Arkansas, assigned officers to special overtime patrols in marked cars with the express purpose of stopping vehicles to search for firearms. During a six-month period, the Arkansas Times reported, the Little Rock Police Department (LRPD) recovered 50 unspecified “weapons” as part of that initiative. But the LRPD needed over 6,000 vehicle stops to recover those weapons, pulling over 112 innocent motorists for every weapon recovered. When black community members complained about the special patrols, including reports that officers would sometimes draw their guns without provocation, the LRPD responded that the effort was part of a “community policing” strategy. Such tactics differ considerably from the ice cream socials and get-to-know-a-cop events that typify other cities’ community policing efforts.
Defenders of aggressive policing strategies like those deployed in Little Rock will point to a measurable decrease in violent crime once the special overtime patrols were initiated. While this decrease is almost certainly tied to the actions of LRPD, the explanation is not as straightforward as it may seem.
Modern professionalized policing has mostly been a reactive endeavor in the United States. Local governments tend to throw police officers at public problems—ranging from crime and disorder to mental health and other personal crises—and the cops, in turn, develop ad hoc strategies based on experience, hunches, and the “do something” incentives described above.
In the last couple of decades, a group of academics, including some current and former police officers, has developed a discipline known as “evidence-based policing.” They have designed research and field experiments to measure the effectiveness of police policies and strategies. By using rigorous scientific methods such as randomized controlled trials, these researchers are creating a growing body of literature on how police can make society safer.
Getting police officers and departments to embrace this new way of thinking about their jobs continues to be an uphill struggle. One consistent finding is that Drug Abuse Resistance Education (DARE) programs have no significant impact on teen drug use. Nevertheless, when I attended a conference put on by the Center for Evidence-Based Crime Policy (CEBCP) at George Mason University a few years back, one police officer in attendance was unironically wearing his department’s DARE polo shirt. Although the program isn’t as ubiquitous as it once was, the DARE America website boasts that its officers reach more than a million students every year, and its most recent annual report shows a 23 percent increase in revenue from 2017 to 2018.
The CEBCP maintains an online Evidence-Based Policing Matrix that collects and categorizes scientific research on police practices. The matrix characterizes studies by their methodological rigor and breaks them into categories such as “neighborhood,” “micro-place,” or “jurisdiction” to describe the size of the experiment. While DARE is an exceptionally useless police strategy, most policing practices aren’t as clearly effective or ineffective. But trends are emerging as the database continues to grow.
One finding that is now widely accepted in the evidence-based community is that a visible police presence can decrease criminal activity in areas experiencing elevated crime. While no evidence-based studies concerning Little Rock’s overtime patrol shifts or the L.A. Metro Division’s methods have been published, this observation seems to match up with their respective experiences. Los Angeles didn’t see a reduction in crime when it used unmarked vehicles to make its stops, but Little Rock, with its marked patrol cars, did.
Beyond increasing police visibility, though, the investigatory stops and searches may not have played any role in driving down violence. While some evidence-based studies suggest stop and frisk may reduce crime when carefully targeted in “hot spots,” the dramatic rollback of stop and frisk in New York indicates that widespread targeting of black and Latino men is not an effective crime control strategy.
For decades, aggressive police tactics have overwhelmingly targeted black men. Far more often than not, the individuals stopped have done nothing wrong. When they can, police point to guns taken off the street or crime rates that go down. But they too often do not seriously consider the social costs of stopping and harassing innocent people who perceive—often correctly—that their involuntary police encounter is in large part due to their race. Most innocent drivers aren’t shot and killed by an officer, as Castile was, but enough of the people who are stopped fear that could happen to them, and they rightfully resent it.
Data show that police don’t need to stop as many people as they do. Police departments should deploy their officers in a way that maximizes safety for everyone in the community instead of boosting officer activity for its own sake. Harassing people is a policy choice, and a poor one: There’s little evidence it works to reduce crime. But even if it did, in a free society, effectiveness must sometimes take a backseat to constitutional and civil liberties concerns. In this time of heightened sensitivity to police treatment of African Americans, reducing unnecessary police contact is an easy way for departments to demonstrate that black lives matter.
from Latest – Reason.com https://ift.tt/365vDC5
via IFTTT
Politico invited me to contribute to a symposium on President Trump’s reported decision to nominate Judge Amy Coney Barrett. It is titled “A New Roberts Court Begins for the Last Time.”
For the past fifteen years, the Supreme Court has been known as the Roberts Court. But in truth, each new justice forms a new court. Chief Justice Roberts has presided over numerous personnel changes. Justices O’Connor, Souter, Stevens, Scalia, Kennedy and Ginsburg left, and Justices Alito, Sotomayor, Kagan, Gorsuch and Kavanaugh have arrived. By election day, the Chief Justice will likely welcome Justice Amy Coney Barrett as the ninth member of the Court. And a new Roberts Court will begin.
The confirmation process for Justice Barrett will be excruciatingly painful. Yet, it will still be familiar—a process that we know, with a predictable outcome. The future of the court, on the other hand, is far more uncertain. In 2021, or perhaps 2025, Democrats will likely push to expand the court. Roberts may soon have to greet two or more new members, even though there were no departures. The chief may go through all the same formalities, welcoming #10 and #11 the same way he welcomed #9—but the Supreme Court will never be the same. We may be looking at the last new Roberts Court.
Think the Barrett hearings will be bad? Just you wait. Relish the moment. They will seem tame compared with what comes next.
from Latest – Reason.com https://ift.tt/3i82qbQ
via IFTTT
Pepe Escobar Exposes ‘Sinophobia Inc.’ – The West’s Information-Industrial Hybrid Warfare Complex
Tyler Durden
Fri, 09/25/2020 – 23:40
Authored by Pepe Escobar via The Saker blog (originally posted at The Asia Times),
It took one minute for President Trump to introduce a virus at the virtual 75th UN General Assembly, blasting “the nation which unleashed this plague onto the world”.
And then it all went downhill.
Even as Trump was essentially delivering a campaign speech and could not care less about the multilateral UN, at least the picture was clear enough for all the socially distant “international community” to see.
Here is President Xi’s full statement. And here is President Putin’s full statement. And here’s the geopolitical chessboard, once again; it’s the “indispensable nation” versus the Russia-China strategic partnership.
As he stressed the importance of the UN, Xi could not be more explicit that no nation has the right to control the destiny of others: “Even less should one be allowed to do whatever it likes and be the hegemon, bully, or boss of the world.”
The US ruling class obviously won’t take this act of defiance lying down. The full spectrum of Hybrid War techniques will continue to be relentlessly turbo-charged against China, coupled with rampant Sinophobia, even as it dawns on many Dr. Strangelove quarters that the only way to really “deter” China would be Hot War.
Alas, the Pentagon is overstretched – Syria, Iran, Venezuela, South China Sea. And every analyst knows about China’s cyber warfare capabilities, integrated aerial defense systems, and carrier-killer Dongfeng missiles.
For perspective, it’s always very instructive to compare military expenditure. Last year, China spent $261 billion while the US spent $732 billion (38% of the global total).
Rhetoric, at least for the moment, prevails. The key talking point, incessantly hammered, is always about China as an existential threat to the “free world”, even as the myriad declinations of what was once Obama’s “pivot to Asia” not so subtly accrue the manufacture of consent for a future war.
This report by the Qiao Collective neatly identifies the process:
“We call it Sinophobia, Inc. – an information industrial complex where Western state funding, billion dollar weapons manufacturers, and right-wing think tanks coalesce and operate in sync to flood the media with messages that China is public enemy number one. Armed with state funding and weapons industry sponsors, this handful of influential think tanks are setting the terms of the New Cold War on China. The same media ecosystem that greased the wheels of perpetual war towards disastrous intervention in the Middle East is now busy manufacturing consent for conflict with China.”
The demonization of China, infused with blatant racism and rabid anti-communism, is displayed across a full, multicolored palette: Hong Kong, Xinjiang (“concentration camps”), Tibet (“forced labor”), Taiwan, the “China virus,” and the Belt and Road’s “debt trap.”
The trade war runs in parallel – glaring evidence of how “socialism with Chinese characteristics” is beating Western capitalism at its own high-tech game. Thus the sanctioning of over 150 companies that manufacture chips for Huawei and ZTE, or the attempt to ruin TikTok’s business in the US (“But you can’t rob it and turn it into a US baby”, as Global Times editor-in-chief Hu Xijin tweeted).
Still, SMIC (Semiconductor Manufacturing International Corporation), China’s top chip company, which recently profited from a $7.5 billion IPO in Shanghai, sooner or later may jump ahead of US chip manufacturers.
On the military front, “maximum pressure” on China’s eastern rim proceeds unabated – from the revival of the Quad to a scramble to boost the Indo-Pacific strategy.
Think Tankland is essential in coordinating the whole process, via for instance the Center for Strategic & International Studies, with “corporation and trade association donors” featuring usual suspects such as Raytheon, Lockheed Martin, Boeing, General Dynamics and Northrop Grumman.
So here we have what Ray McGovern brilliantly describes as MICIMATT – the Military-Industrial-Congressional-Intelligence-Media-Academia-Think-Tank complex – as the comptrollers of Sinophobia Inc.
Assuming there would be a Dem victory in November, nothing will change. The next Pentagon head will probably be Michele Flournoy, former Undersecretary of Defense for Policy (2009-2012) and co-founder of the Center for a New American Security, which is big on both the “China challenge” and the “North Korean threat”.
Flournoy is all about boosting the “U.S. military’s edge” in Asia.
China’s top foreign policy principle is to advance a “community of shared future for mankind”. That is written in the constitution, and implies that Cold War 2.0 is an imposition from foreign actors.
China’s top three priorities post-Covid-19 are to finally eradicate poverty; solidify the vast domestic market; and be back in full force to trade/investment across the Global South.
China’s “existential threat” is also symbolized by the drive to implement a non-Western trade and investment system, including everything from the Asian Infrastructure Investment Bank (AIIB) and the Silk Road Fund to trade bypassing the US dollar.
A Harvard Kennedy School report at least tried to understand how Chinese “authoritarian resilience” appeals domestically. The report found out that the CCP actually benefitted from increased popular support from 2003 to 2016, reaching an astonishing 93%, essentially due to social welfare programs and the battle against corruption.
By contrast, when we have a MICIMATT investing in Perpetual War – or "Long War" (Pentagon terminology since 2001) – instead of health, education and infrastructure upgrading, what's left is a classic case of wag the dog. Sinophobia is the perfect vehicle for blaming the abysmal response to Covid-19, the extinction of small businesses and the looming New Great Depression on the Chinese "existential threat".
The whole process has nothing to do with “moral defeat” and complaining that “we risk losing the competition and endangering the world”.
The world is not “endangered” because at least vast swathes of the Global South are fully aware that the much-ballyhooed “rules-based international order” is nothing but a quite appealing euphemism for Pax Americana – or Exceptionalism. What was designed by Washington for post-WWII, the Cold War and the “unilateral moment” does not apply anymore.
As President Putin has made very clear over and over again, the US is no longer "agreement capable". As for the "rules-based international order", it is at best a euphemism for privately controlled financial capitalism on a global scale.
The Russia-China strategic partnership has made it very clear, over and over again, that against NATO and Quad expansion, its project hinges on Eurasia-wide trade, development and diplomatic integration.
Unlike the period from the 16th century to the last decades of the 20th, the initiative is now coming not from the West but from East Asia (that's the beauty of the "initiative" incorporated into the BRI acronym).
Enter continental corridors and axes of development traversing Southeast Asia, Central Asia, the Indian Ocean, Southwest Asia and Russia all the way to Europe, coupled with a Maritime Silk Road across the South Asian rimland.
For the very first time in its millennia-long history, China is able to match ultra-dynamic political and economic expansion both overland and across the seas. This reaches way beyond the short era of the Zheng He maritime expeditions during the Ming dynasty in the early 15th century.
No wonder the West, and especially the Hegemon, simply cannot comprehend the geopolitical enormity of it all. And that’s why we have so much Sinophobia, so many Hybrid War techniques deployed to snuff out the “threat”.
Eurasia, in the recent past, was either a Western colony, or a Soviet domain. Now, it stands on the verge of finally getting rid of Mackinder, Mahan and Spykman scenarios, as the heartland and the rimland progressively and inexorably integrate, on their own terms, all the way to the middle of the 21st century.
via ZeroHedge News https://ift.tt/3i4JgDR Tyler Durden
Politico invited me to submit to a symposium on President Trump's reported decision to nominate Judge Amy Coney Barrett. It is titled, "A New Roberts Court Begins for the Last Time."
For the past fifteen years, the Supreme Court has been known as the Roberts Court. But in truth, each new justice forms a new court. Chief Justice Roberts has presided over numerous personnel changes. Justices O’Connor, Souter, Stevens, Scalia, Kennedy and Ginsburg left, and Justices Alito, Sotomayor, Kagan, Gorsuch and Kavanaugh have arrived. By election day, the Chief Justice will likely welcome Justice Amy Coney Barrett as the ninth member of the Court. And a new Roberts Court will begin.
The confirmation process for Justice Barrett will be excruciatingly painful. Yet, it will still be familiar—a process that we know, with a predictable outcome. The future of the court, on the other hand, is far more uncertain. In 2021, or perhaps 2025, Democrats will likely push to expand the court. Roberts may soon have to greet two or more new members, even though there were no departures. The chief may go through all the same formalities, welcoming #10 and #11 the same way he welcomed #9—but the Supreme Court will never be the same. We may be looking at the last new Roberts Court.
Think the Barrett hearings will be bad? Just you wait. Relish the moment. It will seem tame by comparison with what comes next.
from Latest – Reason.com https://ift.tt/3i82qbQ
Astronauts Isolate As Mystery Air Leak Hunt Continues On International Space Station
Tyler Durden
Fri, 09/25/2020 – 23:20
NASA and the Russian Space Agency Roscosmos are searching for a small air leak on the International Space Station (ISS), according to Sputnik.
The crew of the ISS will move to the Russian side of the station on Friday, and through the weekend, to allow for a couple of days of air pressure tests. This will be the second time astronauts have isolated in an attempt to find the leak.
"Over the coming weekend, the ISS crew will self-isolate in the Russian segment of the station to search for an atmospheric leak at the station. The crew will regularly perform all planned operations, nothing threatens the crew's safety," a Roscosmos representative told Sputnik.
US astronaut Christopher Cassidy tweeted that the ISS crew would “try again with the module isolation this weekend. No harm or risk to us as the crew, but it is important to find the leak we are not wasting valuable air.” The crew took similar precautions in August, when they isolated on the Russian side of the station for four days.
Cassidy provided more color on the situation via a series of tweets. He said, “Moscow and Houston Mission Control Centers have been tracking a tiny air leak for several months.” For the last week, Cassidy has been examining all the window seals of the station using an ultrasonic leak detector.
NASA has said the leak was first detected in September 2019 but has worsened in recent months.
As the ISS crew works over the weekend to find the source of the leak, readers may recall that in 2018 the station experienced another air leak, initially thought to be the result of a micrometeorite. It was eventually concluded that the tiny hole creating that dangerous air leak was the result of "deliberate sabotage."
via ZeroHedge News https://ift.tt/2RZpFu9 Tyler Durden