The Golden Road Remains Constant

Tyler Durden

Sat, 12/05/2020 – 07:00

Authored by Roy Sebag via GoldMoney.com,

I

One must always be careful to distinguish between a truism, a claim or narrative which is so deeply embedded into the fabric of cultural understanding that it is taken to be an indisputable historical fact, and truth, a continuing, self-evident feature of reality which is available to be observed, reasoned about, and tested in the present. Truisms are the handmaidens of convention which, for economic participants, eventually come to replace objective observations. The result is that the ‘science’ of economics is transformed into a battleground for subjective beliefs, where the soldiers are not self-evident observations or testable predictions, but, rather, fashionable claims and politically-correct statements. In recent times, we have seen this to be the case for the Phillips Curve, the theory of aggregate demand, and Keynes’s inversion of Say’s Law. These, among many similar economic ideas, are modern truisms which, though falsified by present-day economic observation, persist as structures of belief which are dearly held by the economists and politicians who shape our collective future.

In this essay, I will draw the reader’s attention to a truism which endures as conventional wisdom today, even though it can be easily falsified by plain experience and objective examination. I speak of a belief which colours the whole of recent economic history: the judgment that the natural element Gold (Au) is now relegated to its present status as a store of value because payment technology has evolved beyond physical media.

The whiggish story goes something like this: just as we humans evolve, so, too, does our civilization progress along the rising road of history; therefore, just as our maturing societies exhibit increased technical capabilities, so do our monetary instruments undoubtedly improve with the march of time. Consequently, the transition from carrying around weighty physical coins made from precious metals to making instantly-settled electronic payments is seen as evidence enough that, in our time, gold no longer has a role to play as a monetary instrument.

This specific belief is unique insofar as opposite factions within the battleground of subjective economics all hold it to be true: while the disparate schools of sound money advocates, mainstream academics, and cryptocurrency evangelists share very little in common, they all seem to agree that physical gold, as a monetary instrument, has been superseded by human innovation and technology. To each of these independent groups, what the car is to the horse and carriage is what modern digital payments are to antiquated gold coins and balance scales.

In my own experience founding a precious metals payments and savings business, I have always been puzzled by this truism, and I have ultimately come to reject it. I believe that, in this regard, sound money advocates have made a terrible concession. As a result, the frontline of one of the most important intellectual battles has been relinquished to the economic alchemists and technology-worshipping wizards of cryptocurrency; meanwhile, advocates of sound money retreat to their ramparts, safely guarding their own cherished bedrock upon which is engraved, with the chisel of perennial reason, that gold, though no longer money for all, is the premier store of value for man. What results from this fissure is that most people view gold as a historic monetary instrument which remains a store of value today for some mysterious and indiscernible reason. For people like myself, however, gold is money for all times: as practical today as it was at any point in history.

In order to reconcile these two positions, we must first discern what money is. The traditional definition which you find in the dictionary is as follows: money is a unit of account, medium of exchange, and store of value. But what do these words mean? Unit of account refers to an objective, that is to say, unchanging measure which is fungible and arithmetically calculable. For example: one inch is always an inch; one gram is always one gram. Medium of exchange refers to the marketability of the thing, such that it is readily accepted among peoples across time and place. Said differently, the likelihood of any thing to be accepted implies its ease of use or its ability to efficiently move from the hands of one to the hands of another. We can even go further to say that, not only does the thing need to be exchangeable, but it must in itself have some utility or usefulness, lest it fail to be desired by different people at different places and different times. For example: an amount of salt in one place is desired in another place because salt preserves meat and enhances the taste of food; thus, salt could be carried from one place to another and readily accepted by self-interested actors as a medium of exchange. Finally, store of value refers to the relative scarcity of the thing (in comparison with other things) as embodied in the unit of account and medium of exchange. The rarer the thing, the more desirable and exchangeable it will be, from the past, through the present, and into the future. For example: the best house on the best street is a store of value because, so long as it remains unchanging and useful, it will always be more desirable and exchangeable relative to any other house on any other street.

This is the traditional picture which we learn from contemporary economic theory. But we can already see that this definition is perplexing, for it appears to presuppose something with physical qualities which simultaneously satisfies these three criteria. In other words, unless there was already something which is always able to satisfy these three criteria, then the act of defining ‘money’ would prove to be elusive. Now, it would be a difficult task to fulfill these three criteria in all times and in all places. In the case of the house, someone may build a better house, or the house may decay, or the quality of the neighbourhood may decline; we also know that people have subjective preferences for different architecture and design, so it may be unclear whether one house is ‘better’ than another. The same is true for salt: while we know that salt may be readily accepted, we can think of many things which are rarer and last longer than salt. We can begin to see that the standard definition of money presupposes an understanding of the order of nature and, more specifically, the irreducible building blocks of all matter – the corporeal elements. For without the unchanging, qualitative, and measurable attributes of the elements, we would be unable to rank potential moneys according to an internal hierarchy, such that we can discern a corporeal thing’s fittingness to satisfy these three criteria. Indeed, as we shall soon explore, attempts to superimpose human ideas into this procrustean definition of money consistently prove to be unintelligible, requiring a tremendous amount of complex, lofty thinking which contradicts the simplicity and intelligibility of the natural world.

Let us simplify our definition of money, in order to understand its nature, its purpose, and its function in light of the importance of these tripartite criteria. We know that these criteria must simultaneously hold true at all times and all places, intrinsically in the thing itself; therefore, any attenuation of these criteria undermines the viability of the thing-at-hand to be money. Keeping this in mind, we define ‘money’ as such: money is a corporeal good which serves as an unchanging measure and reward by virtue of its relative scarcity and temporal endurance when compared with other corporeal goods. With this definition of money, the unchanging nature of a naturally scarce, long-lasting, and useful corporeal good provides the foundation for neutral and lasting satisfaction among self-interested parties in any potential exchange. That the money must be a unit of account renders it an unchanging measure; likewise, that the money must be a useful store of value renders it a satisfying reward.

Money, therefore, is not simply the instrument which best facilitates payment; rather, it underpins, reflects, and adjudicates the entirety of any transaction between self-interested parties, from measurement to reward, or from payment to satisfaction. At this point, we may recognise that the desire for convenience and expediency should in no way undermine the nature, purpose, and function of money.

II

In modern times, we have come to know a very different kind of money than the one I have just defined. Modern ‘money’ is really a monetary substitute whereby the medium of exchange or payment feature is, in practice, no more than the communication or confirmation of a transaction between two or more self-interested parties, rather than the actual movement of the money from one party to the other in coincidence with the settlement of a good or service. In this way, the payment has come to be conflated with the notion of settlement, or, as I prefer to say, lasting satisfaction; this is because swift, virtual payments often lead us to believe that the exchange has been satisfied in actuality. As we shall argue, the increase in the speed of payments afforded by modern money involves both the deferral of the actual settlement, as well as the grievous diminishment of the true nature and purpose of money.

It is indeed true that the manner according to which a transaction is communicated is responsive to technological innovation. Any conquest over latency, that is to say, the shortening of time in the act of communication, is thus an improvement in the means of payment. However, this is only an improvement when it comes to transactions between parties which are physically distant. We therefore agree that an instant payment confirmation between a merchant in Munich and a consumer in London is an impressive accomplishment, and it is perhaps more convenient than the mailing of a cheque or bill of sale. But, to be sure, this verification of payment communication does not extend to the settlement of the goods or services being exchanged; that activity, i.e., the delivery of the good to the London consumer, remains bound by the same physical laws which govern all actual cooperative transactions. Markus, the merchant in Munich, may receive instant confirmation of the payment from James, the consumer in London; contractually, however, James will not be satisfied until the time when Markus has delivered his good or completed his service. During this prolonged temporal interval between payment and satisfaction, while James still waits to receive his purchased product, both the value of the modern money that was paid to Markus and the acceptance or refusal of the product by James are subject to change. Here we see that the acceleration of the payment has perhaps introduced a layer of trust between self-interested parties, but it has done nothing to change the dynamics of satisfaction or the final settlement which is required for the transaction to be complete. Thus, in spite of the increased speed and ostensible convenience, the deferral of the settlement leaves the economic transaction open-ended and not yet subject to finality. This inescapable reality is one which issuers of modern money wish to disguise. In their view, the speeding up of payment supersedes the importance – or even redefines the notion – of settlement.

What we learn from this example is that the measure and reward of cooperation emerge from the settlement, not the payment, for the payment is contingent upon the settlement in an irreversible forward causation. Returning to our example: James pays Markus instantly, but neither party will be satisfied until James receives his good from Markus; if what James receives is in any way different from what he expected to receive, there will be no satisfaction for either James or Markus. This is different from a traditional transaction, where James enters a marketplace and decides to purchase a specific item which is physically present at the market and which satisfies his need or desire; James then agrees with the vendor about the payment, and the transaction is settled. So far, then, we can see that for exchanges where the parties are co-located, there is no need to improve the speed of the transaction in the first place, for the entirety of the exchange is measured, rewarded, and settled at the same time. Increased payment speed therefore does not enhance or improve the cooperative transaction for cases where parties are co-located and where settlement is contemporaneous with payment. On the other hand, for physically-distant transactions, or ones involving a temporal delay between payment and settlement even in local settings, accelerating the payment speed does nothing to secure the final settlement of the transaction. This is because the transaction is not complete until the product or service has been received and satisfaction has been obtained for both parties. Correspondingly, modern money, and the convenience which it provides, ultimately defers settlement in service of payment speed.

The result of this deferral is that the nature and purpose of money, namely, its being an unchanging measure and reward of human cooperation, is compromised for all parties involved. This is because, within this temporal interval of deferral, in the time between the instant payment and the delayed settlement, the value of the modern money and the nature of the transaction are in motion rather than at rest; therefore, they are subject to change. For rest to be achieved, either the money itself has to be at rest, which requires that money be defined as an unchanging naturally-scarce corporeal good, or the transaction has to be settled.

Now, in our modern era, one may rush to point to a class of transactions which may appear at first glance to be exempt from this analysis of the shortcomings of modern money: that is, virtual services. While James surely has to wait for his physical good to arrive from Munich in the post before he is satisfied, James need not wait when he pays for a subscription to an online streaming service before he can begin to instantly use the service. Here there is immediate settlement of the service at the same time that the payment is issued. The case of virtual services may therefore disprove our assessment of modern money. Perhaps, as a cryptocurrency enthusiast may argue, the anomaly case of virtual services may even provide reason for, or confirm the viability of, a modern money which is purely virtual in itself. In order to explore these challenges, let us first ask two general questions:

1. Can any cooperative society exist, prosper, and depend upon a purely virtual service economy?

2. With respect to the use of modern moneys for virtual services, what exactly serves as the unchanging measure and reward (money) which mediates such acts of cooperation, thereby enabling lasting satisfaction among all parties? 

With respect to the first question, it should be evident to any reader that there can be no such state of affairs. The basic concerns of any household will always and everywhere be tethered to the physical world: a family requires food to eat, fabrics to be clothed in, and energy for heat. Thus, each and every cooperative society is always dependent upon the negotiation, harvest, and distribution of useful, corporeal goods and their accompanying services in the physical world. We may instantly receive as many virtual video consultations with our doctor as we like, but, at some point, we will require the administration of actual medicine and the use of real tools which must be produced by someone else in order to nurse our health condition. This fundamental premise holds true for all the necessities of life, from food, to shelter, to medicine, to law enforcement, and so on. It is impossible, then, for an economy to be purely virtual.

This is why we must ask the second question: at some point, these modern monetary abstractions demand resources from society which are corporeal and useful – from labour, to energy, to physical space. The same goes for virtual services: both the virtual service and the monetary abstraction which is used as payment for the service exact necessities within the corporeal world. It is these corporeal realities which determine the extrinsic boundaries for whether the virtual service, the virtual money, and the transaction as a whole can achieve lasting satisfaction. Thus, abstract utility (if such a thing even exists) is always and everywhere dependent upon and bound by corporeal usefulness. In light of our answer to the first question, it becomes clear that the abstract measure and reward which is modern money, whether it be paper fiat or virtual cryptocurrency, will be incapable of being unchanging, long lasting, and relatively scarce, insofar as it is always dependent upon the corporeal realities to which it remains tethered. Ultimately, these virtual services and modern moneys are always and everywhere determined by the real economy and the corporeal boundaries which define it.

Take the following example: let us say that there exists a mathematical school, run by Pythagoras himself, which offers the community a promise that the teacher and his acolytes will employ the mysteries of the universe in order to maintain a master ledger of the community’s transactions which is cryptographically governed and thus logically unchanging. Would it not come across as strange or even silly to the local farmer and the local miner, who provide this academic school and the greater society with their daily nourishment, if each time they transacted their reward was merely an incorporeal and inaccessible promise from someone else in their town? Would it not seem stranger still that a portion of their physical surplus would be required as seigniorage to the mathematicians who administer this virtual service? No doubt this is a silly scheme indeed, and yet this silliness has endured from ancient times – think of the infamous John Law and the Mississippi Bubble, or, even in our present day, think of government-issued fiat currency as well as privately-issued fiat cryptocurrency. History teaches us that, wherever man seeks to contrive money, rather than to embrace money as a given feature of nature, the fundamental attributes of money as a measure and reward of cooperation are sacrificed, leading to injustice and disorder within society.

Consequently, with respect to modern moneys and their use for virtual services, we arrive at the same conclusion: that any apparent increase in speed diminishes the nature, function, and purpose of money. For money must in itself be the corporeal, useful, naturally-scarce, and enduring thing which objectively measures and neutrally rewards all acts of human cooperation. By contrast, an abstract modern money will always be subject to worldly fluctuation because it possesses no grounding of its own in concrete reality. As we established earlier, with modern money, speedy payment is pushed forward while the act of settlement is delayed. What results is similar to a game whereby a group of friends sitting in a circle pass around a melting ice cube: with each turn, the ice cube is fundamentally changed, and eventually it disappears altogether. With these monetary substitutes, the party receiving the money in any transaction is not left with an equitable measurement, a just reward, or even a lasting store of value – all he has is the melting ice cube, which he must pass on quickly. Whoever is left with nothing to show for his work but a puddle of water has been unfairly deprived of his labour, good, or service.

We therefore see the need for these transactions to be anchored to a true or real money. If this rope fails to be hitched, then the abstract service economy may become parasitic upon the real economy – even though the former depends upon the latter – ultimately leading to two standards which become diametrically opposed to each other. The only way this paradox can be resolved, such that lasting settlement is at all possible, is for money to be physically quantifiable as an unchanging measure and reward which is either in motion (during payment) or at rest (during settlement). Only this money can grant finality to transactions because this money, possessing an unchanging nature in the corporeal world, in itself serves as a final settlement. For most of human history, man has chosen a natural money (a corporeal entity found in nature) which no man could create, manipulate, or replicate, for the very reasons which we have just adumbrated: it is physical, it can be either in motion or at rest, and it is not subject to change over extended periods of time. Furthermore, within a range of potential natural moneys, our ancestors consistently chose to employ the precious metals gold or silver as the money with which to measure, to reward, and to settle cooperative transactions. At this stage, we may now appreciate why it would be preferable to employ the most rare, long-lasting, and unchanging of the earth’s elements instead of the service offered by Pythagoras.

III

The monetary wisdom of our ancestors must not be disregarded, as if it were merely a crude stage of the evolutionary timeline of man’s history, culminating in the great telos of the modern technological awakening. Nor must we reject my appeal to the wisdom of the ancients as a kind of wistful nostalgia for a bygone era. Comparing the modern car to the horse and carriage is just fine; but comparing modern, man-made moneys which exhibit a superior payment speed to true money – to unchanging, corporeal, gold money – is a false equivalency. For, as we have argued, most human cooperation, insofar as it requires a physical, non-virtual settlement, remains beholden to some kind of corporeal settlement; the acceleration of payment speed does not alone replace or redefine the concrete satisfaction which needs to take place. Consequently, the benefit of speedy payments, which is the quiddity of modern money, does not in and of itself prove that gold has somehow been usurped by technologically-superior modern money. Before we proceed, let us review the argument which we have given so far:

i. The unchanging nature of money provides settlement finality, satisfaction, or rest.

ii. Payment speed alone does not achieve settlement finality.

iii. For virtual services, it does initially appear possible that payment speed alone can achieve settlement finality.

a. But we cannot have a purely virtual economy because everything is interconnected and interdependent upon the physical, real economy.

b. Moreover, even virtual services and modern moneys require ultimate finality in the physical world because they demand resources in order to exist.

iv. If all of this is true, we can see that gold can still be used today to settle transactions instead of modern money. The acceleration of payment speed therefore cannot be the reason that gold is no longer used as money.

We must therefore reject the conventional wisdom, or truism, that gold is an antiquated monetary instrument which has been eclipsed by superior monetary instruments. Once we reject this claim, we can begin to perceive the real issues which prevent gold money from being adopted today.

So far we have uncovered that, for most of human history, precious metals have continually and universally served as the unchanging measure and reward which enables lasting satisfaction between self-interested parties. What we have not yet discussed is the myriad novel technologies and innovations which made this possible. Take the following examples: ancient cuneiform tablets and Egyptian papyri reveal bills of sale for merchant transactions which required a contractual deferral between payment and settlement, such as in the case of an imminent harvest; circa 600 BC, coinage was invented in Lydia in order to increase the velocity of transactions so that individuals no longer needed to weigh variable masses of metals; in the early modern era, Isaac Newton served as Master of the Royal Mint, implementing policies and procedures which allowed bills corresponding to metals stored in the bank’s vaults to circulate more swiftly and efficiently. The most recent example is the Bretton Woods system, which functioned from 1944 to 1971. In this system, the measure and reward of settlement (gold) was kept in physical vaults at specific locations, while the payment media circulated throughout the banking systems of the world. We are aware that this system failed, but we must be careful not to judge the reason for its failure to be something related to either inefficient payment speed or wholesale advancements in technology. In all of these examples, we see technology that is introduced and adopted in order to enhance the efficiency and speed of payments, while the money itself remains anchored to gold, grounded in the natural world. In this way, lasting satisfaction is achieved, payment speed is maximized, and the primary nature and purpose of money are maintained.

Here we have touched upon another point of rather unquestionable agreement amongst disparate schools of economic thought. That is, that all gold standards have failed. This technically-correct statement tends to be touted by mainstream economists, often as a rebuttal to the dictum of sound money advocates that ‘all fiat currency systems have failed.’

There is an obvious distinction which makes one statement correct in the sophistic sense while the other is correct in the ontological sense. It is indeed correct to say that ‘all gold standards have failed,’ but only if we qualify the claim as follows: all gold standards have failed because governments have failed to maintain a gold backing as originally promised. Hence, the value of gold today, that is to say its purchasing power, remains commensurate with historical gold standard ratios, while the value of fiat has deteriorated. For example: on the eve of the suspension of the gold standard by President Richard Nixon in 1971, the price of gold was 42 dollars per ounce; today, 49 years later, the price of gold is 1,800 dollars per ounce. The 1,800 dollars today will buy the same goods and services that the 42 dollars would in 1971, whereas the same 42 dollars held from 1971 to the present day can only buy about 2% of the original ounce of gold. In other words, the dollar has been devalued by roughly 98%. In this case, it is clear that the gold standard ‘failed’ not because gold has ceased to be an unchanging, naturally-given measure and reward, but because our political leaders decided to terminate the system, with deleterious consequences for the purchasing power and savings of the average person. The sophistic statement that ‘all gold standards have failed’ is thus rendered to its proper place, as nothing more than propaganda. Conversely, the challenge that ‘all fiat currency systems have failed’ cannot be disputed on the same grounds, given that fiat currencies which have historically failed have no present value, while existing fiat currencies lose purchasing power over time because of inflation. The best reply to the challenge is to say that the current instantiation of fiat currency has not failed per se, but, rather, that it has lost over 90% of its purchasing power in fifty years, leading to unprecedented wealth inequality and myriad disorders within the modern West. Meanwhile, gold has continued to function as an enduring store of value which is owned and traded by citizens and governments.
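To make the arithmetic behind these figures explicit, here is a minimal sketch in Python using only the two prices cited above; the present-day figure of roughly 1,800 dollars per ounce is an approximation, and the exact percentages shift with the market:

```python
# Rough devaluation arithmetic using the figures cited in the text above.
price_1971 = 42.0     # dollars per ounce on the eve of the 1971 suspension
price_today = 1800.0  # approximate dollar price per ounce cited in the text

# Fraction of the original ounce that the same 42 dollars still buys today
fraction_of_ounce = price_1971 / price_today
print(f"42 dollars now buys {fraction_of_ounce:.1%} of an ounce")            # ~2.3%

# Equivalent loss of the dollar's purchasing power measured against gold
devaluation = 1 - fraction_of_ounce
print(f"Implied devaluation of the dollar against gold: {devaluation:.0%}")  # ~98%
```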

Having now rejected the truism, and having come to discern the true reason for the failure of historic gold standards, we are provided with an additional insight: the great integrity of the gold monetary system not only grants finality to transactions and lasting satisfaction to all members of the cooperative society, but it also alerts the society to when the promises of the social and political kind are broken. Just as the money is either in motion or at rest, so, too, is there a tangible backing of metal or not. This greater indication of the health of society and the status of the social contract is yet another instance of the complex manifestation of the naturalist understanding of real money which we have so far discussed. Conversely, modern, man-made money obfuscates whether or not a promise has been broken because it provides the issuers of said contrived money with great latitude and power to change the definition of money itself, while defaulting on historic promises. Here we come full circle in recognising how, within the modern monetary system, conventional wisdoms – including the truism that ‘gold is old’ – are, in effect, manipulative responses to the subjective winds of politicians, rather than the result of objective observation, testable prediction, or even common sense.

IV

Even today, gold exemplifies the nature and purpose of money in its very essence. Indeed, gold can serve as money today, not just as savings, but as the sole means of measuring and rewarding all cooperative transactions. There are no technical impediments which could prevent a sovereign from employing physical gold to anchor their monetary system, thereby restoring an unchanging measure and lasting reward for all people. This can be done with or without redemption rights, and it can be enhanced by both historic and contemporary innovations in payment communications. Gold can easily anchor a physical coin, a fiat currency, a SWIFT system, a stock exchange, or a credit card network, any of which would significantly improve how we cooperate, for the reason that such a system would grant lasting satisfaction to all involved.

As for the apparent costs associated with the institution of a gold standard, they are de minimis when compared with their alternatives: from the hypothetical Pythagorean school, to an army of 5,000 PhDs employed at the Federal Reserve, to the hundreds of millions of watts of electricity which online banking systems and cryptocurrencies demand twenty-four hours per day, seven days per week. In truth, the ‘costs’ of tethering the economy to gold would amount to rounding errors within the overall balance sheet of physical cooperation; this is because gold itself is a quantifiable, physical element which exists and endures in nature independently of the acts of cooperation which are anchored to it. It is the physical, natural attributes of gold – its specific gravity, natural scarcity, malleability, ductility, conductivity – which render it the most effectively dense material to store, to protect, and to transport within the physical world. Even today, the Bank of England supports hundreds of billions of dollars of annual economic activity (much of which takes place instantly and electronically) with a hoard of physical gold stored within its vaults. The metal almost never moves, yet the transactions which are supported by the metal transcend time, space, nationality, or creed.  Conversely, any man-made, modern money systems which attempt to mimic gold exact an exhaustive dependence upon all acts of human cooperation; even so, the result of this ‘progress’ provides us with nothing more than the frustrating deferral of settlement and an ever-growing pile of broken promises.

Those who argue that gold monetary systems are antiquated relics of an ante-technological, ante-digital era should reconsider their position. To anchor a ledger to physical gold requires minimal effort compared to fiat or cryptocurrencies. Therefore, as we have argued, governments have ceased to do this for fiscal reasons, rather than because of any limitations intrinsic to gold or any advantages intrinsic to modern moneys.

The only obstacle standing between mankind and the implementation of a gold monetary standard today is that governments must make the decision to transparently tie each unit of money in circulation to a reserved weight of gold. This is the truth of the matter, not the truism. It is a truth which I believe, in our lifetime, will be rediscovered.

It will only require one nation, one leader, one society to make this decision, a decision which will profoundly impact its citizens and their relative prosperity, and which will reshape the course of geopolitical history in the years to come. The golden road remains constant, for it is permanently paved by the hand of God within the natural world. No matter how many alternative roads man chooses to build, the golden road is always available to be taken.

Will Cities Survive 2020?

One of the first coronavirus outbreaks in the United States was in a nursing home in the Seattle suburb of Kirkland, Washington. On the same day that the Centers for Disease Control and Prevention (CDC) announced the country’s first COVID-19 death, it also reported two cases linked to Kirkland’s Life Care Center, where two-thirds of residents and 47 staff members would eventually become infected with the virus. Of those, 35 would die.

COVID-19 deaths in America’s nursing homes are appallingly common. Many of those deaths could have been prevented if families had better options for keeping grandpa closer to home and out of crowded elder care. But building regulations passed—ironically—in the name of public health make that difficult or impossible in many cities.

Kirkland requires that accessory dwelling units (ADUs)—often known as granny flats or in-law suites—be no larger than 800 square feet and no higher than 15 feet above the main home. They must also come with an off-street parking space.

Of the people who have applied for such permits in Kirkland since 1995, nearly half never ended up starting construction. A survey by the city in 2018 found that design constraints were the biggest difficulty applicants faced.

Kirkland’s granny flat rule is just one of countless examples of ordinances, restrictions, and red tape that have slowly wrapped up America’s cities, regulating how much people can build, where they can build it, and what they can use it for.

While often justified initially as a means of protecting public health, zoning codes have now gone far beyond nuisance laws—which limited themselves to regulating the externalities of the most noxious polluters—and control of infectious disease. They have instead come to incorporate planners’ desires to scientifically manage cities, protect property values, and combat the moral corruption that supposedly came with high-density housing.

New York City adopted the nation’s first comprehensive zoning code in 1916, which placed restrictions on the height and density of new buildings, and classified different types of land use. Within a few years, thousands of communities across the country had adopted similar regulations.

Their proliferation attracted fierce opposition, with critics arguing that these zoning codes were “worse than prohibition” and represented “an advanced form of communism.” These disagreements, largely between planners who think cities need to be designed from the top down and others who think they should be left to grow organically, have persisted to this day.

In cities themselves, at least, the planners have won. A century later, every major metro area in the country save for Houston has adopted zoning codes that regulate how densely people can build on their land and what kind of activities they are allowed to do there.

The history of America’s cities is, in a very real sense, the history of zoning regulations, which have long shaped real estate development, labor, and living arrangements. So it’s no surprise that COVID-19, the biggest public health crisis in a century, which has occasioned an equally massive public health response, has already begun reshaping how people live in cities and how they are governed—rekindling old debates over urban density vs. suburban sprawl while raising new questions about the value of many land-use regulations.

At the same time, renewed fears of violence and decay, stoked by the sporadic riots and looting that plagued city cores throughout the summer, have changed public perceptions about the safety of urban living. As urbanites flee to the suburbs and municipal governments peel away ancient red tape to ease life under suddenly changed circumstances, the coronavirus has forced us to ask: What are cities for? What will they become? And in the wake of a pandemic and waves of riots that have upended so much of urban life, will they survive at all?

‘To Fill Dead Men’s Shoes’

There’s a chapter in Neal Stephenson’s historical novel Quicksilver in which the protagonist, Daniel Waterhouse, must leave his home on the outskirts of London, where he’s been quarantining for a month during the Great Plague of 1665, and travel into the disease-ravaged city center. His mission is to exchange a paper note for gold, at the time a new and innovative financial service. Along the way he passes both boarded-up plague homes and bustling coffee houses where the city’s elite gather to conduct business.

It’s a sequence that neatly illustrates the intense tradeoffs that once came with city living. Dense clusters of people living cheek by jowl enabled the spread of both deadly pathogens and innovations in trade, finance, science, and art.

“It’s a bit counterintuitive. Very large cities have problems of pollution and congestion, which are very difficult to solve,” says Alain Bertaud, a senior research scholar at New York University’s Marron Institute of Urban Management. “In spite of all that…these messy cities, if you look at what people produce, they produce a much larger part of the [gross domestic product] than the rest of the country per capita.”

Cities at their root, says Bertaud, are labor markets where people are presented with lots of choices about where to work and companies have lots of options for whom to hire. This intense intermingling of capital and labor means innovative ideas can spread more quickly and production can become more specialized. The result is that urban economies end up producing more wealth than would be possible if the workers and firms that inhabit them were spread out among smaller communities.

The prosperity created by these “agglomeration effects”—the measurable economic benefits from density—in turn spawns the character-defining scenes, industries, and attractions that make cities valuable beyond their ability to provide people with a paycheck. The Bay Area’s tech scene and Nashville’s live music venues are products of this urban agglomeration.

Historically, agglomeration effects have been powerful enough to prompt people to pour into cities in spite of the real hazards that density posed to residents’ health and well-being. An April VoxEU article by economists Neil Cummins, Morgan Kelly, and Cormac Ó Gráda notes that from 1563 to 1665, there were four major plagues that each managed to kill off 25 percent of London’s population. Remarkably, these periodic outbreaks did little to diminish the attractiveness of the city to newcomers.

“Although devastating, the impact of plague on London’s population was surprisingly transitory,” they write. “Within two years of each visitation, population as measured by births had returned to its previous level, as migrants from the countryside flowed in ‘to fill dead men’s shoes.'”

In 1793, Philadelphia, then America’s capital, was hit with a severe outbreak of yellow fever that killed 10 percent of the city’s population. “It’s pretty shocking, and it’s something that the founding fathers had to deal with. I think it’s left out of the musical Hamilton,” says Catherine Brinkley, an assistant professor of community and regional development at the University of California, Davis.

This outbreak of yellow fever, says Brinkley, inspired the first efforts to start cleaning up city streets through mucking out gutters and creating alleyways where waste could be dumped. Cholera outbreaks in American cities in the 19th century led to the creation of the first systems that could pipe in clean water and carry away sewage.

The stubborn unwillingness of people to abandon cities even in the face of periodic epidemics gave rise to interventions that made urban life a little less deadly. In his 2011 book Triumph of the City, the Harvard economist Edward Glaeser notes that developments like municipal water service and waste collection—which he calls “self-protecting urban innovations”—led to significant reductions in urban mortality. Between the end of the Civil War and the 1920s, the death rate in New York City dropped by two-thirds. Chicago saw a similar decline over the same period, with about half that fall in mortality chalked up to the provision of clean water.

The later addition of use-segregating, density-restricting zoning codes, predicated on the now-discredited “miasma” theory that a lack of light and air was to blame for the spread of urban disease, did much less to improve public health.

The economics of population density ensured that people continued to congregate in cities despite the dangers of urban disease. As the incidence of epidemics lessened and then virtually disappeared, expanding agglomeration effects drew in increasing numbers of people.

According to U.S. Census Bureau data, the share of America’s population living in urban areas was just shy of 20 percent in 1860, when these early public health interventions were first taking shape. By the dawn of the 20th century, it had grown to nearly 40 percent. Today, over 80 percent of the country lives in an urban environment.

Growing Up, Moving Out

How people move within cities and what kinds of jobs they do there have changed dramatically over time. But one thing that’s stayed largely constant is their intolerance for a long commute. Most folks, whether they’re medieval Parisians or modern Americans, are unwilling to spend more than 30 minutes traveling, one way, between home and work.

This iron law of urban commuting—sometimes known as Marchetti’s constant, after Italian physicist Cesare Marchetti—has profound implications for how cities look, and, particularly, how sprawling or dense they can be.

If cities exist primarily as labor markets, and people are generally only willing to spend 30 minutes getting to work, the scale of an urban area’s agglomeration effects is going to depend on (a) how many jobs your average city worker can reach within a half-hour’s travel from his home and (b) how many employees can live within 30 minutes of your average city firm.

One can imagine two basic ways of adjusting for this reality: building up to accommodate more homes and firms within a given space, or speeding travel so that more destinations can be reached in a given amount of time.
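As a rough, purely illustrative sketch of that tradeoff (the densities and speeds below are hypothetical, chosen only to show the shape of the relationship, not drawn from any study cited here), the pool of jobs reachable within Marchetti’s half-hour budget can be approximated as job density multiplied by the area covered at a given travel speed:

```python
import math

def reachable_jobs(job_density_per_sq_mile: float, speed_mph: float,
                   time_budget_hours: float = 0.5) -> float:
    """Approximate the jobs reachable within a fixed one-way commute budget.

    Treats the reachable area as a circle of radius speed * time, so the job
    pool grows linearly with density but with the square of travel speed.
    """
    radius_miles = speed_mph * time_budget_hours
    return job_density_per_sq_mile * math.pi * radius_miles ** 2

# Hypothetical figures, for illustration only.
print(f"{reachable_jobs(10_000, 3):,.0f}")   # walking in a dense core  (~70,700 jobs)
print(f"{reachable_jobs(10_000, 12):,.0f}")  # transit in a dense core  (~1.13 million)
print(f"{reachable_jobs(2_000, 30):,.0f}")   # driving across sprawl    (~1.41 million)
```

Because reachable area grows with the square of speed while the job count grows only linearly with density, a city can enlarge its labor market either by building up or by moving people faster, which is the choice the following paragraphs trace through the history of transit and the automobile.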

Physical limits on both building and transportation technology meant that for the first few thousand years of their existence, cities tended to be pretty small, cramped places. Jonathan English, writing for CityLab in 2019, showed that pre-industrial urban areas packed most of their populations within a mile of the city center in order to accommodate a half-hour walking commute. The 19th century brought with it innovations not just in public health but also in transportation and building construction, allowing cities to grow beyond their previous limitations.

Horse-drawn omnibuses, and later steam- and electric-powered rail transit, allowed the upper and middle classes to move to commuter towns and “streetcar suburbs.” At the same time, steel-framed construction and safety brake–equipped elevators enabled the first skyscrapers, which could contain within them a huge number of manufacturing and office jobs, to take shape. The combination of both led to what’s known as the monocentric city, or the standard urban model, where most jobs were located in compact downtowns and those who could afford it would commute in from the surrounding suburbs, where land was cheaper and houses were larger but transportation costs were higher.

This turn-of-the-century city model was upended in 1913 by the advent of Henry Ford’s moving assembly line, which lowered the costs of producing the latest transportation technology, the automobile, while also shifting production away from urban centers. “The moving assembly line required lots and lots of land. You couldn’t fit a factory downtown that had moving assembly lines,” says Cato Institute transportation scholar Randal O’Toole. Industry started moving to the fringes of cities in order to take advantage of assembly lines’ productive potential. Workers, benefiting from higher pay and lower-cost cars, weren’t far behind.

The result was that American metro areas grew larger but also more suburban. Innovations in housing construction in the postwar era also helped lower the cost of single-family homes, putting the whole process on steroids. From 1950 to 1990, urban areas saw their populations increase by 72 percent, while city centers saw their populations decline by 17 percent.

O’Toole argues that the steady dispersal of jobs throughout urban areas undermines the need for density in most American cities. Places like Manhattan are an exception, not the rule, he says.

“Today’s cities are nanocentric,” O’Toole explains, with a large majority of jobs existing outside the downtown core. “These jobs are in retail, health care, education; they’re in lodging and wholesaling; they’re in lots and lots of different fields where you don’t have to bunch up to make the jobs work.”

Harvard’s Glaeser, in contrast, argues that the evolution of technology and economic activity creates an ebb and flow in the demand for urban densities across time. The postwar United States was characterized by centrifugal technological changes pushing people out of urban centers. The explosive 21st century growth of knowledge-intensive industries such as finance and tech, on the other hand, benefits from information-rich interpersonal interactions that are more easily facilitated by dense job clusters. “Urban density abets knowledge accumulation and creativity,” Glaeser wrote in a March 2020 working paper. “Dense environments facilitate random personal interactions that can create serendipitous flows of knowledge and collaborative creativity.”

The country itself seems as divided as the experts. Between 2010 and 2016, some cities saw jobs and residents flood into their traditional urban cores. Seattle, D.C., New York City, and Chicago all followed trajectories of more people living closer together. At the same time, hot metros such as Austin, Houston, and Raleigh, North Carolina, spread out as they added population.

In the vast majority of cities, the trend is still toward suburbanization. But the nanocentric city isn’t thriving in a regulatory vacuum. Instead, cities and states have erected reams of red tape that constrict urban housing development while raising prices.

Density limits and exclusively commercial zoning often trip up the construction of high-rise housing in city centers. Outside of downtown, other rules prevent the development of rowhomes, townhouses, and other “missing middle” options.

“Single-unit zoning limits these useful types of housing,” writes Emily Hamilton in CityLab. “So do myriad other restrictions on how and where housing can be built: minimum lot size requirements, parking requirements, height limits and more.” The net effect is more people are pushed into suburban and exurban communities, regardless of their preference for urban living. As Indeed’s chief economist, Jed Kolko, wrote in The New York Times in 2017, “where people live reflects not only what they want—but also what’s available and what it costs.”

These regulations have shaped decades of urban development and, in turn, have influenced a multitude of individual choices about where people live and work. In the wake of the coronavirus crisis, however, many of those regulations—and the choices they’ve inspired—are suddenly being rethought.

Broken Windows and Boarded-Up Businesses

In March and April, COVID lockdowns shut down U.S. cities, making them feel empty and abandoned. In late May, just as some were beginning to reopen, protests, some of which turned violent, made them feel unsafe.

It was toward the end of that month that a bystander captured on video the horrific death of Minneapolis man George Floyd at the hands of city police. Within days, rioters had burned down a police precinct building, torched a nearly completed affordable housing development, and severely damaged or destroyed another 1,500 businesses.

Few major cities escaped the national wave of civil unrest that followed, and the timing couldn’t have been worse. In Washington, D.C., violent protesters and violent police officers clashed on the streets just one day after the city lifted its stay-at-home order. Businesses that could have been offering curbside service were instead boarding up their windows.

In New York City, stay-at-home orders that had never been lifted were complemented by a police-enforced curfew. Chicago, Los Angeles, and Philadelphia all saw looting and property damage. Downtown Portland devolved into a nightly, often violent perma-protest. For a few weeks, Seattle lost multiple city blocks to a leftist street commune known as the Capitol Hill Occupied Protest, or CHOP.

The worst of this discord was mostly temporary. With a few exceptions—notably Portland, Oregon, where activists continued to clash with local and federal law enforcement authorities—the street-level tumult had dwindled by the onset of fall.

Yet the damage to cities, both physically and psychologically, lingered on. In late October, days of unrest followed a police shooting in Philadelphia. Businesses in many metro downtowns boarded up storefronts before the election, anticipating chaos. Others had never taken down their plywood defenses. A survey of 27 U.S. cities by the Council on Criminal Justice found that urban homicide rates between June and August rose 53 percent relative to the same time frame in 2019.

President Donald Trump eagerly seized on the unrest to slam cities and their predominantly Democratic-run city halls. (Murder rates rose in cities with Republican mayors too.) His attacks were characteristically hyperbolic, but he was reflecting widely held concerns. One late July poll found 77 percent of respondents were concerned about urban crime; half reported concerns about crime in their own communities. Those with the wherewithal to move out of urban settings seemed to have fewer and fewer reasons to stay.

Timely Deregulation

When San Francisco issued its first shelter-in-place order on March 16, it was set to expire in three weeks. By week four, city business owners were starting to worry. “In the middle of April, we had realized that we were in trouble,” says Laurie Thomas, owner of two restaurants in the city. Restaurant owners needed a way to reopen, and they needed it fast.

The solution was obvious: outdoor dining. Health experts quickly ascertained that viral transmission was much more of a threat indoors, and European cities had already begun to allow restaurants to spill out onto sidewalks and streets. But San Francisco’s complex mesh of zoning and permitting rules meant that expanding outdoor dining would, under ordinary circumstances, require onerous fees and months of waiting for approval. So Thomas—who also serves as executive director of San Francisco’s Golden Gate Restaurant Association—started putting together a plan that would allow Bay Area businesses to move outdoors without the bureaucratic hassles.

It was a daunting task. “There [were] multiple departments that had to be involved with this,” she says, from the Department of Public Works to the Metropolitan Transportation Commission to Alcoholic Beverage Control to the fire department. “I’d lived in San Francisco a long time. I thought, ‘There is no effing way we are going to pull this off.'”

Yet they did. At the end of May, the city announced a program called Shared Spaces. Via a simple online application, restaurants and other businesses could get permits to set up dining on sidewalks and parking spaces. They could also apply to close the streets near their storefronts to cars. Sidewalk cafés have sprouted all over the city. To date, some 1,600 businesses have received permission to expand in this manner.

On the other side of the country, David Rodgers, a professional musician based in Nashville, Tennessee, was facing the opposite challenge. State orders had shuttered the music venues where he made a living. Instead of spending the summer touring and performing, he was moving his business inside his home.

In April, Rodgers relocated from an apartment near Belmont University to a house in Nashville’s Sylvan Park neighborhood. The larger space meant he had room for a full piano and higher-quality recording equipment, which would enable him to supplement his lost touring income by recording and providing music lessons. He says there’s been a surprising amount of demand for his services.

“Families who had been thinking about piano lessons for the fall were like, ‘We can’t do anything this summer, at least out of town. This is a good time to start three months early,'” says Rodgers. “That’s nothing I was out looking for. It just fell into my lap.”

Rodgers—and thousands more Nashville musicians in a similar position—is also benefiting from a bit of timely deregulation. In July, the city ended its longstanding prohibition on home businesses serving customers on-site, a policy that had officially banned everything from home recording studios to home hair salons.

What About the Roads?

Since much of the American economy was shuttered in March and April, there have been numerous unexpected side effects. Among the most surreal: traffic-free Los Angeles highways.

In the early days of shelter-in-place orders, when people were more inclined to actually stay in their homes, a city famous for gridlock saw congestion fall off a cliff. L.A. traffic volumes declined 45 percent between mid- and late March. Average speeds increased by 30 percent. Empty streets even led to a rash of drag racing and crowds gathering in intersections to perform burnouts and donuts.

Amid the smoke wafting from hoodlums’ tires was an important lesson about urban transportation: Traveling fast requires empty space—the kind you only get when few people have places to go. That’s why in normal times, when residents have jobs to get to, kids to take to school, and dinner reservations to keep, highways tend to get pretty crowded pretty fast. But urban agglomeration, as mentioned, works best when people have access to lots of job opportunities within the space of about 30 minutes. Mounting congestion in an urban area means either longer commutes (which makes people miserable) or access to fewer jobs (which makes them poorer).

Current levels of traffic congestion already put a headwind on the agglomeration effects of America’s cities. The analytics firm INRIX estimates that in 2018, American commuters spent 97 hours a year on average in gridlock, collectively costing them $87 billion worth of wasted time. As America digs itself out of a coronavirus-induced recession, it is easy to imagine that this traffic trend will only grow worse in America’s densest metro areas.

Crowding into a sealed train car with a bunch of strangers hardly sounds appealing in the middle of a pandemic, particularly when everywhere you might want to go is closed. For that reason, public transit ridership plunged some 80 percent across major American cities at the peak of the coronavirus shutdowns, according to the app Moovit. Individual transit agencies in New York City, San Francisco, and Washington, D.C., have reported declines of 90 percent or more, even as car traffic, which dipped earlier in the year, returned to about 90 percent of pre-COVID levels by August 2020.

One study out of Vanderbilt University estimated that daily travel times would increase by 20 minutes per person in San Francisco and 14 minutes per person in New York if a quarter of those areas’ 2018 transit riders and carpoolers switched to driving alone. If three-quarters of transit riders made the switch, travel times would increase by 80 and 68 minutes, respectively.

Even if riders are willing to return, there might not be much of a transit network to come back to in some cities. Places like New York and D.C. have massive maintenance backlogs that had been eroding service quality and on-time performance for years. The financial hit these systems have taken during the pandemic—which already led to one $25 billion federal bailout—has left transit agencies with even less money for needed repairs.

There’s no obvious solution to the problem of overcrowded streets and empty transit systems. The supply of roads is relatively fixed, particularly in the most congested areas—which are congested because they’re filled with things people want to get to. Demolishing buildings to make room for roads is likely self-defeating if the buildings were the reason for the traffic in the first place.

One option might be to rededicate street space currently reserved for cars to other modes of transit, such as bicycles and electric scooters, which take up less room when moving and require less parking when stationary. But since bikes and scooters have a significantly more limited range than do automobiles, city planners would effectively be prioritizing shorter local trips by people living within an urban core over longer drives to those destinations from farther-out neighborhoods. In fact, giving over too much space to bicycles and scooters could make urban connectivity worse, as drivers—trying to reach jobs and amenities spread out across the nanocentric city—are crowded onto fewer and fewer lanes. These microtransit and active transit options are therefore probably viable for commuting only in the densest city centers, where those vehicles’ limited range is less of a problem and ultra-high land prices increase the costs of handing over more free real estate to space-hogging cars.

Another option is congestion pricing, whereby a fee is charged to drivers based on how much of the road they take up and how crowded the roads are at that time. Congestion pricing is a proven means of managing demand and ensuring faster traffic flows. Done right, it can get people to reconsider traveling at rush hour, and even boost carpooling and transit where that’s more efficient. To the extent that people have legitimate health concerns about those options, however, the prices required to keep traffic flowing will get expensive fast. That could make urban mobility prohibitively costly for many commuters, making them question whether they need to be traveling to work at all.
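
As a purely illustrative sketch of that mechanism, a demand-responsive toll can be thought of as a simple feedback loop: the price ticks up whenever observed speeds fall below a target, and back down when capacity frees up. The target speed, adjustment step, and price cap below are assumptions chosen for the example, not parameters of any actual congestion-pricing program.

def adjust_toll(current_toll, observed_speed, target_speed=45.0, step=0.50, floor=0.0, cap=15.0):
    # Raise the toll when traffic runs slower than the target speed,
    # lower it when there is spare capacity, and keep the price within bounds.
    if observed_speed < target_speed:
        current_toll += step
    else:
        current_toll -= step
    return max(floor, min(cap, current_toll))

toll = 2.00
for speed in [38, 35, 33, 40, 48, 52]:  # observed average speeds (mph) over successive intervals
    toll = adjust_toll(toll, speed)
    print(f"speed={speed} mph -> toll=${toll:.2f}")

Real-world programs are considerably more sophisticated, but the design choice is the same: price the scarce road space so that it keeps moving.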

Phoning It In

One possibility is that the need for urban transportation options will decline as more people work remotely, choosing not to commute at all.

Stories about wealthy Manhattanites decamping from their disease-ravaged city to second homes in the countryside were a common feature of news coverage from the early days of COVID-19. As a few weeks of stay-at-home orders turned into a few months, the urban exodus started to include more than just the wealthiest Americans.

“The coronavirus is challenging the assumption that Americans must stay physically tethered to traditionally hot job markets—and the high costs and small spaces that often come with them—to access the best work opportunities,” wrote Wall Street Journal reporters Rachel Feintzeig and Ben Eisen in August.

Tech companies, often credited with sparking the “return to the city” after decades of suburbanization, are now leading the way on remote work. Twitter, Facebook, Stripe, and others have all announced that their employees need not report to the office for the foreseeable future. But the trend is sweeping the entire economy.

Before the coronavirus pandemic, roughly 5 percent of the workforce worked from home. As of June, 42 percent of America’s workforce was remote, according to a Stanford University study, compared to 26 percent of workers who were still reporting to work in-person and 33 percent of the labor force that wasn’t working at all.

The impact of this shift on cities is more than anecdotal. Untethering white-collar workers from their offices has caused vacancy rates to rise and rents to fall in some cities. According to the real estate firm Douglas Elliman, Manhattan residential rental vacancies hit an all-time record of 5.75 percent in September. The Community Housing Improvement Program, a trade association of mostly small landlords in New York, reported a vacancy rate of 12 percent among its members in September, compared to 3 percent in February of this year.

San Francisco, meanwhile, has seen the number of homes listed for sale increase by 96 percent. Rents in the city have fallen by as much as 31 percent as of September. Other Bay Area counties, as well as Manhattan and Seattle, have seen similar declines. The shift to remote work is being cited as a major reason for the drop. (Rents in smaller cities and in those adjacent to larger metro areas are increasing slightly, which likely reflects increased demand from people leaving their bigger, denser counterparts.)

The sudden remote work revolution is real. The question is whether it will be more than a temporary expedient spurred on by the need for social distancing.

New York University’s Bertaud suggests that even as technology has made remote work more feasible, it’s still not a full-blown replacement for interpersonal connections. He gives the example of his own shift to teaching online. Technology facilitates lectures well enough, he says; what it can’t replicate are the wine and cheese sessions he’d have with students after class, where free-flowing conversations could happen and novel ideas that didn’t get an airing in the classroom could be expressed.

“After some time you will realize that the lack of face-to-face contact, you will miss some things,” he says. “When you are only on Zoom, everything is already planned. There is no randomness, or very little anyway.”

Iowa State University economist Matt Clancy is much more bullish on remote work’s prospects. In City Journal‘s Spring 2020 issue, he argues that companies’ use of software like Slack, which relies on informal message-based communication, does a much better job of simulating casual hallway conversations and random collaboration than did the old remote-work infrastructure, like email and phone calls. Increasingly customizable emojis and gifs, and growing comfort with their use, he says, are an effective substitute for the social atmosphere that offices traditionally provided.

Nicholas Bloom, an economist at Stanford, stakes out something of a middle ground. He thinks the pandemic has lowered the traditional stigma of working from home and argues that the money and time people have put into setting up home workspaces will make them unwilling to completely abandon them. A year of social distancing may also make people less keen on crowding into downtown office towers.

But there’s still a problem with full-time remote work, he says: “It’s full-time.” Being constantly plugged in can produce loneliness, and it requires parents to simultaneously juggle work and childcare responsibilities. “It’s not that working from home during COVID is fantastic,” he says. “It’s just that the alternative is even worse.”

Bloom believes the most likely outcome will be a hybrid arrangement, where people work one or two days a week at the office and the rest of the time from home. People’s experience with social distancing will also encourage companies to switch to low-rise suburban office parks that don’t rely on mass transit and crowded elevators to get people to and from their desks, he says.

Finally, there’s another problem with remote work: It takes middle- and upper-middle-class workers off the streets. And that, in turn, creates space for others to move in, especially if local political forces take a hands-off approach. Which is exactly what has happened in San Francisco.

Even before the pandemic, a high concentration of homeless people and reticence on the part of local officials to prosecute petty crimes had led to a deteriorating streetscape dominated, in places, by vagrants, dirty needles, and piles of poop. In 2017, the city would occasionally get hundreds of calls a day about human waste, leading to the creation of a dedicated poop patrol.

As more businesses close and more residents move away, there are fewer people left with the energy and incentive to even try to keep order in the public spaces adjacent to their properties. With them goes the tax base needed to sustain city services required to keep the streets clean.

‘It’s All Intertwined’

Offices as a concept might be here to stay, but the erosion of dense office clusters in city centers could still spell doom for the ecosystem of businesses that has grown up to serve them.

“My friends that have restaurants down in the financial district, down [south of Market Street] where tech was…they are struggling. Because no one is going down there,” says Thomas, the San Francisco restaurateur. “We need those people. We need the tech workers to come and spend money in our businesses. That’s why the restaurants have had a good run for so long. It’s all intertwined.”

Thomas says she hasn’t been as badly affected because her two restaurants are located in neighborhoods near where people live. They still have some people coming through the door each day. But she doubts she’ll manage to be profitable without the return of indoor dining.

Thomas’ situation is yet another illustration of how pre-pandemic attempts to regulate cities are exacerbating the effects of COVID-19, often in ways that are totally counterproductive to the hopes and dreams of urban planners. Efforts to create dense commercial cores by zoning those areas exclusively for office and retail use have left businesses without a base of residential customers now that the office workers have left.

Even as San Francisco’s Shared Spaces program gives businesses more control over public space in front of their stores, their ability to adapt their own private spaces to different uses that might be more profitable during the pandemic remains severely curtailed. The city has numerous different commercial zones, neighborhood plans, and special-use districts that all affect what a given property can be used for. A parcel that allows retail uses might not allow restaurant uses. A restaurant space might not be zoned to allow serving alcohol or the addition of a chain restaurant.

When Los Angeles restaurants tried to cope with the closing of their dining rooms by converting to makeshift convenience stores at the beginning of the pandemic, public health inspectors almost immediately shut them down for lacking the requisite grocery permits.

An early concern about zoning ordinances, Sonia A. Hirt writes in her 2014 book Zoned in the USA, was that they would give unscrupulous officials too much discretion. In contemporary San Francisco, every building permit is ultimately reliant on the whims of the city’s planning commission. That allows third parties, often with anti-competitive interests in mind, to delay or stop the issuance of permits. It can happen even when a use is permitted by the underlying zoning code, leading to such absurd episodes as one falafel shop trying to prevent another falafel shop from opening down the street or neighborhood activists trying to prevent an arcade repair shop from being converted into an arcade that sells food and alcohol. The joke is that in American zoning law, everything is forbidden except that which is permitted. But in San Francisco zoning law, everything is forbidden, particularly that which is permitted.

San Francisco is on the extreme end of the spectrum when it comes to the complexity of its zoning code and the discretion it gives city officials, but it’s hardly unique. According to a recent policy brief from the Mercatus Center at George Mason University, Arlington, Texas’ code is nearly as complex. “Some zones allow colleges but not trade schools, new car sales but not used car sales, restaurants but not catering services, and firearm sales but not farmers markets,” it notes.

These regulations became all the more burdensome when coronavirus-related upheavals began to render whole business models obsolete, demanding extreme experimentation from entrepreneurs just to keep their livelihoods afloat.

Cities Without Zoning

As the pandemic upended urban life in 2020, a debate arose about whether cities as we’ve known them could survive.

Some of America’s recently great cities already resemble shells of their former selves. Without the same clusters of in-person jobs and amenities that they once offered, many inhabitants of ultra-pricey places like San Francisco and New York might well opt for more space and cheaper living elsewhere.

“New York City is dead forever,” declared former hedge fund manager and Manhattan comedy club owner James Altucher in the New York Post in August. In his telling, there were three primary reasons for moving to New York: “business opportunities, culture and food.” The empty office towers, closed restaurants, and shuttered performance venues had irreparably damaged the city’s attractive powers, he said.

Not so, shot back comedian Jerry Seinfeld: Remote work lacks the “energy” people crave, firms need, and only great cities can provide. “Energy, attitude and personality cannot be ‘remoted’ through even the best fiber optic lines. That’s the whole reason many of us moved to New York in the first place,” Seinfeld wrote in The New York Times. “Feeling sorry for yourself because you can’t go to the theater for a while is not the essential element of character that made New York the brilliant diamond of activity it will one day be again.”

Most of these debates were essentially about the pandemic’s effect on urban culture and character. Yet as we’ve seen, the development of America’s cities—and their ability to withstand a shock like COVID-19—is not divorced from policy decisions.

Houston, Texas, is famous for its lack of zoning regulations. Unlike most every other city in the country, its government generally does not tell property owners what they can build or what they can do on their land. As a result, the Houston metro area managed to add roughly the same amount of housing in 2019 as did the greater New York area, despite having a third of the population.

Houston’s development free-for-all is not a panacea. Half of renters in Harris County are considered cost-burdened, meaning they spend more than 30 percent of their income on rent. That’s slightly higher than the national average, but well below the rate in most other big cities. High rates of new construction have made Houston distinctive as a large, growing metropolitan area that has nevertheless managed to stay relatively cheap.
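
(To make the 30 percent threshold concrete: a household earning $50,000 a year would count as cost-burdened if it paid more than $1,250 a month in rent, since $50,000 × 0.30 ÷ 12 = $1,250.)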

Houston’s track record also suggests there is less of a tradeoff between growing up and growing out than might be imagined. Despite its reputation for endless suburban sprawl, it is experiencing a boom in dense mixed-use construction. “There’s this move to become more urban,” says Jacob Sudhoff, CEO of Douglas Elliman Texas, which is working on a number of mixed-use, multifamily projects in Houston. “A lot of people want this walkable lifestyle of having live, work, play, all in one location.”

The pandemic has presented challenges, Sudhoff says. But the city’s more liberalized development environment has made it easy for the projects his company works on to respond quickly to the changes wrought by the coronavirus.

“Hospitality, right now, is tough. We’ve had several projects here in Houston where hotels were part of the mixed-use development, and several of our developers have decided to [remove] the hotels,” Sudhoff says. “What’s beautiful about Houston is we can make modifications to our projects without having to go back and go through planning and zoning. For us to pull components out of a mixed-use development or add components in is night and day compared to any other city, practically.”

That’s a stark contrast with California, where one Bay Area developer’s effort to convert an abandoned mall into housing has spawned multiple lawsuits and ballot initiatives to stop it. The project got final approval this year, so it can at long last start construction—six years after the effort was launched.

Past pandemics suggest that coronavirus will push America’s cities along the path they’ve been charting for the last several decades—a path toward more suburbanization, more decentralization of jobs, and more remote work arrangements. Urban violence and social unrest, wherever they persist, will only speed along this development.

“A major pandemic does have big effects and does come to mark a watershed or turning point,” wrote historian Stephen Davies in an April essay for the U.K.’s Institute for Economic Affairs. “What it does not do is introduce something truly novel. Rather, it accelerates and magnifies trends and processes that were already under way.”

But the coronavirus pandemic and the other upheavals of 2020 aren’t the end of cities or the immense benefits that density can bring. They, and the people who live in them, will continue to grow, evolve, and adapt to new challenges—so long as they’re allowed to do so.

from Latest – Reason.com https://ift.tt/3mM6G3U
via IFTTT

Will Cities Survive 2020?

featurewillcitiessurvive1

One of the first coronavirus outbreaks in the United States was in a nursing home in the Seattle suburb of Kirkland, Washington. On the same day that the Centers for Disease Control and Prevention (CDC) announced the country’s first COVID-19 death, it also reported two cases linked to Kirkland’s Life Care Center, where two-thirds of residents and 47 staff members would eventually become infected with the virus. Of those, 35 would die.

COVID-19 deaths in America’s nursing homes are appallingly common. Many of those deaths could have been prevented if families had better options for keeping grandpa closer to home and out of crowded elder care. But building regulations passed—ironically—in the name of public health make that difficult or impossible in many cities.

Kirkland requires that any accessory dwelling units (ADUs)—often known as granny flats or in-law suites—can be no larger than 800 square feet and no higher than 15 feet above the main home. They also must come with an off-street parking space.

Of the people who applied for such permits in Kirkland since 1995, nearly half never ended up starting construction. A survey by the city in 2018 found that design constraints were the biggest difficulty applicants faced.

Kirkland’s granny flat rule is just one of countless examples of ordinances, restrictions, and red tape that have slowly wrapped up America’s cities, regulating how much people can build, where they can build it, and what they can use it for.

While often justified initially as a means of protecting public health, zoning codes have now gone far beyond nuisance laws—which limited themselves to regulating the externalities of the most noxious polluters—and control of infectious disease. They instead incorporated planners’ desires to scientifically manage cities, protect property values, and combat the moral corruption that supposedly came with high-density housing.

New York City adopted the nation’s first comprehensive zoning code in 1916, which placed restrictions on the height and density of new buildings, and classified different types of land use. Within a few years, thousands of communities across the country had adopted similar regulations.

Their proliferation attracted fierce opposition with critics arguing these zoning codes were “worse than prohibition” and represented “an advanced form of communism.” These disagreements, largely between planners who think cities need to be designed from the top down and others who think they should be left to grow organically, have persisted to this day.

In cities themselves, at least, the planners have won. A century later, every major metro area in the country save for Houston has adopted zoning codes that regulate how densely people can build on their land and what kind of activities they are allowed to do there.

The history of America’s cities is, in a very real sense, the history of zoning regulations, which have long shaped real estate development, labor, and living arrangements. So it’s no surprise that COVID-19, the biggest public health crisis in a century, which has occasioned an equally massive public health response, has already begun reshaping how people live in cities and how they are governed—rekindling old debates over urban density vs. suburban sprawl while raising new questions about the value of many land-use regulations.

At the same time, renewed fears of violence and decay, stoked by the sporadic riots and looting that plagued city cores throughout the summer, have changed public perceptions about the safety of urban living. As urbanites flee to the suburbs and municipal governments peel away ancient red tape to ease life under suddenly changed circumstances, the coronavirus has forced us to ask: What are cities for? What will they become? And in the wake of a pandemic and waves of riots that have upended so much of urban life, will they survive at all?

‘To Fill Dead Men’s Shoes’

There’s a chapter in Neal Stephenson’s historical novel Quicksilver in which the protagonist, Daniel Waterhouse, must leave his home on the outskirts of London, where he’s been quarantining for a month during the Great Plague of 1665, and travel into the disease-ravaged city center. His mission is to exchange a paper note for gold, at the time a new and innovative financial service. Along the way he passes both boarded-up plague homes and bustling coffee houses where the city’s elite gather to conduct business.

It’s a sequence that neatly illustrates the intense tradeoffs that once came with city living. Dense clusters of people living cheek by jowl enabled the spread of both deadly pathogens and innovations in trade, finance, science, and art.

“It’s a bit counterintuitive. Very large cities have problems of pollution and congestion, which are very difficult to solve,” says Alain Bertaud, a senior research scholar at New York University’s Marron Institute of Urban Management. “In spite of all that…these messy cities, if you look at what people produce, they produce a much larger part of the [gross domestic product] than the rest of the country per capita.”

Cities at their root, says Bertaud, are labor markets where people are presented with lots of choices about where to work and companies have lots of options for whom to hire. This intense intermingling of capital and labor means innovative ideas can spread more quickly and production can become more specialized. The result is that urban economies end up producing more wealth than would be possible if the workers and firms that inhabit them were spread out among smaller communities.

The prosperity created by these “agglomeration effects”—the measurable economic benefits from density—in turn spawns the character-defining scenes, industries, and attractions that make cities valuable beyond their ability to provide people with a paycheck. The Bay Area’s tech scene and Nashville’s live music venues are products of this urban agglomeration.

Historically, agglomeration effects have been powerful enough to prompt people to pour into cities in spite of the real hazards that density posed to residents’ health and well-being. An April VoxEU article by economists Neil Cummins, Morgan Kelly, and Cormac Ó Gráda notes that from 1563 to 1665, there were four major plagues that each managed to kill off 25 percent of London’s population. Remarkably, these periodic outbreaks did little to diminish the attractiveness of the city to newcomers.

“Although devastating, the impact of plague on London’s population was surprisingly transitory,” they write. “Within two years of each visitation, population as measured by births had returned to its previous level, as migrants from the countryside flowed in ‘to fill dead men’s shoes.'”

In 1793, Philadelphia, then America’s capital, was hit with a severe outbreak of yellow fever that killed 10 percent of the city’s population. “It’s pretty shocking, and it’s something that the founding fathers had to deal with. I think it’s left out of the musical Hamilton,” says Catherine Brinkley, an assistant professor of community and regional development at the University of California, Davis.

This outbreak of yellow fever, says Brinkley, inspired the first efforts to start cleaning up city streets through mucking out gutters and creating alleyways where waste could be dumped. Cholera outbreaks in American cities in the 19th century led to the creation of the first systems that could pipe in clean water and carry away sewage.

The stubborn unwillingness of people to abandon cities even in the face of periodic epidemics gave rise to interventions that made urban life a little less deadly. In his 2011 book Triumph of the City, the Harvard economist Edward Glaeser notes that developments like municipal water service and waste collection—which he calls “self-protecting urban innovations”—led to significant reductions in urban mortality. Between the end of the Civil War and the 1920s, the death rate in New York City dropped by two-thirds. Chicago saw a similar decline over the same period, with about half that fall in mortality chalked up to the provision of clean water.

The later addition of use-segregating, density-restricting zoning codes, predicated on the now-discredited “miasma” theory that a lack of light and air was to blame for the spread of urban disease, did much less to improve public health.

The economics of population density ensured that people continued to congregate in cities despite the dangers of urban disease. As the incidence of epidemics lessened and then virtually disappeared, expanding agglomeration effects drew in increasing numbers of people.

According to U.S. Census Bureau data, the share of America’s population living in urban areas was just shy of 20 percent in 1860, when these early public health interventions were first taking shape. By the dawn of the 20th century, it had grown to nearly 40 percent. Today, over 80 percent of the country lives in an urban environment.

Growing Up, Moving Out

How people move within cities and what kinds of jobs they do there have changed dramatically over time. But one thing that’s stayed largely constant is their intolerance for a long commute. Most folks, whether they’re medieval Parisians or modern Americans, are unwilling to spend more than 30 minutes traveling, one way, between home and work.

This iron law of urban commuting—sometimes known as Marchetti’s constant, after Italian physicist Cesare Marchetti—has profound implications for how cities look, and, particularly, how sprawling or dense they can be.

If cities exist primarily as labor markets, and people are generally only willing to spend 30 minutes getting to work, the scale of an urban area’s agglomeration effects is going to depend on (a) how many jobs your average city worker can reach within a half-hour’s travel from his home and (b) how many employees can live within 30 minutes of your average city firm.

One can imagine two basic ways of adjusting for this reality: building up to accommodate more homes and firms within a given space, or speeding travel so that more destinations can be reached in a given amount of time.

Physical limits on both building and transportation technology meant that for the first few thousand years of their existence, cities tended to be pretty small, cramped places. Jonathan English, writing for CityLab in 2019, showed that pre-industrial urban areas packed most of their populations within a mile of the city center in order to accommodate a half-hour walking commute. The 19th century brought with it innovations not just in public health but also in transportation and building construction, allowing cities to grow beyond their previous limitations.

Horse-drawn omnibuses, and later steam- and electric-powered rail transit, allowed the upper and middle classes to move to commuter towns and “streetcar suburbs.” At the same time, steel-framed construction and safety break–equipped elevators enabled the first skyscrapers, which could contain within them a huge number of manufacturing and office jobs, to take shape. The combination of both led to what’s known as the monocentric city, or the standard urban model, where most jobs were located in compact downtowns and those who could afford it would commute in from the surrounding suburbs, where land was cheaper and houses were larger but transportation costs were higher.

This turn-of-the-century city model was upended in 1913 by the advent of Henry Ford’s moving assembly line, which lowered the costs of producing the latest transportation technology, the automobile, while also shifting production away from urban centers. “The moving assembly line required lots and lots of land. You couldn’t fit a factory downtown that had moving assembly lines,” says Cato Institute transportation scholar Randal O’Toole. Industry started moving to the fringes of cities in order to take advantage of assembly lines’ productive potential. Workers, benefiting from higher pay and lower-cost cars, weren’t far behind.

The result was that American metro areas grew larger but also more suburban. Innovations in housing construction in the postwar era also helped lower the cost of single-family homes, putting the whole process on steroids. From 1950 to 1990, urban areas saw their populations increase by 72 percent, while city centers saw their populations decline by 17 percent.

O’Toole argues that the steady dispersal of jobs throughout urban areas undermines the need for density in most American cities. Places like Manhattan are an exception, not the rule, he says.

“Today’s cities are nanocentric,” O’Toole explains, with a large majority of jobs existing outside the downtown core. “These jobs are in retail, health care, education; they’re in lodging and wholesaling; they’re in lots and lots of different fields where you don’t have to bunch up to make the jobs work.”

Harvard’s Glaeser, in contrast, argues that the evolution of technology and economic activity creates an ebb and flow in the demand for urban densities across time. The postwar United States was characterized by centrifugal technological changes pushing people out of urban centers. The explosive 21st century growth of knowledge-intensive industries such as finance and tech, on the other hand, benefits from information-rich interpersonal interactions that are more easily facilitated by dense job clusters. “Urban density abets knowledge accumulation and creativity,” Glaeser wrote in a March 2020 working paper. “Dense environments facilitate random personal interactions that can create serendipitous flows of knowledge and collaborative creativity.”

The country itself seems as divided as the experts. Between 2010 and 2016, some cities saw jobs and residents flood into their traditional urban cores. Seattle, D.C., New York City, and Chicago all followed trajectories of more people living closer together. At the same time, hot metros such as Austin, Houston, and Raleigh, North Carolina, spread out as they added population.

In the vast majority of cities, the trend is still toward suburbanization. But the nanocentric city isn’t thriving in a regulatory vacuum. Instead, cities and states have erected reams of red tape that constrict urban housing development while raising prices.

Density limits and exclusively commercial zoning often trip up the construction of high-rise housing in city centers. Outside of downtown, other rules prevent the development of rowhomes, townhouses, and other “missing middle” options.

“Single-unit zoning limits these useful types of housing,” writes Emily Hamilton in CityLab. “So do myriad other restrictions on how and where housing can be built: minimum lot size requirements, parking requirements, height limits and more.” The net effect is more people are pushed into suburban and exurban communities, regardless of their preference for urban living. As Indeed’s chief economist, Jed Kolko, wrote in The New York Times in 2017, “where people live reflects not only what they want—but also what’s available and what it costs.”

These regulations have shaped decades of urban development and, in turn, have influenced a multitude of individual choices about where people live and work. In the wake of the coronavirus crisis, however, many of those regulations—and the choices they’ve inspired—are suddenly being rethought.

Broken Windows and Boarded-Up Businesses

In March and April, COVID lockdowns shut down U.S. cities, making them feel empty and abandoned. In late May, just as some were beginning to reopen, protests, some of which turned violent, made them feel unsafe.

It was toward the end of that month that a bystander captured on video the horrific death of Minneapolis man George Floyd at the hands of city police. Within days, rioters had burned down a police precinct building, torched a nearly completed affordable housing development, and severely damaged or destroyed another 1,500 businesses.

Few major cities escaped the national wave of civil unrest that followed, and the timing couldn’t have been worse. In Washington, D.C., violent protesters and violent police officers clashed on the streets just one day after the city lifted its stay-at-home order. Businesses that could have been offering curbside service were instead boarding up their windows.

In New York City, stay-at-home orders that had never been lifted were complemented by a police-enforced curfew. Chicago, Los Angeles, and Philadelphia all saw looting and property damage. Downtown Portland devolved into a nightly, often violent perma-protest. For a few weeks, Seattle lost multiple city blocks to a leftist street commune known as the Capitol Hill Occupied Protest, or CHOP.

The worst of this discord was mostly temporary. With a few exceptions—notably Portland, Oregon, where activists continued to clash with local and federal law enforcement authorities—the street-level tumult had dwindled by the onset of fall.

Yet the damage to cities, both physically and psychologically, lingered on. In late October, days of unrest followed a police shooting in Philadelphia. Businesses in many metro downtowns boarded up storefronts before the election, anticipating chaos. Others had never taken down their plywood defenses. A survey of 27 U.S. cities by the Council on Criminal Justice found that urban homicide rates between June and August rose 53 percent relative to the same time frame in 2019.

President Donald Trump eagerly seized on the unrest to slam cities and their predominantly Democratic-run city halls. (Murder rates rose in cities with Republican mayors too.) His attacks were characteristically hyperbolic, but he was reflecting widely held concerns. One late July poll found 77 percent of respondents were concerned about urban crime; half reported concerns about crime in their own communities. Those with the wherewithal to move out of urban settings seemed to have fewer and fewer reasons to stay.

Timely Deregulation

When San Francisco issued its first shelter-in-place order on March 16, it was set to expire in three weeks. By week four, city business owners were starting to worry. “In the middle of April, we had realized that we were in trouble,” says Laurie Thomas, owner of two restaurants in the city. Restaurant owners needed a way to reopen, and they needed it fast.

The solution was obvious: outdoor dining. Health experts quickly ascertained that viral transmission was much more of a threat indoors, and European cities had already begun to allow restaurants to spill out onto sidewalks and streets. But San Francisco’s complex mesh of zoning and permitting rules meant that expanding outdoor dining would, under ordinary circumstances, require onerous fees and months of waiting for approval. So Thomas—who also serves as executive director of San Francisco’s Golden Gate Restaurant Association—started putting together a plan that would allow Bay Area businesses to move outdoors without the bureaucratic hassles.

It was a daunting task. “There [were] multiple departments that had to be involved with this,” she says, from the Department of Public Works to the Metropolitan Transportation Commission to Alcoholic Beverage Control to the fire department. “I’d lived in San Francisco a long time. I thought, ‘There is no effing way we are going to pull this off.'”

Yet they did. At the end of May, the city announced a program called Shared Spaces. Via a simple online application, restaurants and other businesses could get permits to set up dining on sidewalks and parking spaces. They could also apply to close the streets near their storefronts to cars. Sidewalk cafés have sprouted all over the city. To date, some 1,600 businesses have received permission to expand in this manner.

On the other side of the country, David Rodgers, a professional musician based in Nashville, Tennessee, was facing the opposite challenge. State orders had shuttered the music venues where he made a living. Instead of spending the summer touring and performing, he was moving his business inside his home.

In April, Rodgers relocated from an apartment near Belmont University to a house in Nashville’s Sylvan Park neighborhood. The larger space meant he had room for a full piano and higher-quality recording equipment, which would enable him to supplement his lost touring income by recording and providing music lessons. He says there’s been a surprising amount of demand for his services.

“Families who had been thinking about piano lessons for the fall were like, ‘We can’t do anything this summer, at least out of town. This is a good time to start three months early,'” says Rodgers. “That’s nothing I was out looking for. It just fell into my lap.”

Rodgers—and thousands more Nashville musicians in a similar position—is also benefiting from a bit of timely deregulation. In July, the city ended its longstanding prohibition on home businesses serving customers on-site, a policy that had officially banned everything from home recording studios to home hair salons.

What About the Roads?

Since much of the American economy was shuttered in March and April, there have been numerous unexpected side effects. Among the most surreal: traffic-free Los Angeles highways.

In the early days of shelter-in-place orders, when people were more inclined to actually stay in their homes, a city famous for gridlock saw congestion fall off a cliff. L.A. traffic volumes declined 45 percent between mid- and late March. Average speeds increased by 30 percent. Empty streets even led to a rash of drag racing and crowds gathering in intersections to perform burnouts and donuts.

Amid the smoke wafting from hoodlums’ tires was an important lesson about urban transportation: Traveling fast requires empty space—the kind you only get when few people have places to go. That’s why in normal times, when residents have jobs to get to, kids to take to school, and dinner reservations to keep, highways tend to get pretty crowded pretty fast. But urban agglomeration, as mentioned, works best when people have access to lots of job opportunities within the space of about 30 minutes. Mounting congestion in an urban area means either longer commutes (which makes people miserable) or access to fewer jobs (which makes them poorer).

Current levels of traffic congestion already put a headwind on the agglomeration effects of America’s cities. The analytics firm INRIX estimates that in 2018, American commuters spent 97 hours a year on average in gridlock, collectively costing them $87 billion worth of wasted time. As America digs itself out of a coronavirus-induced recession, it is easy to imagine that this traffic trend will only grow worse in America’s densest metro areas.

Crowding into a sealed train car with a bunch of strangers hardly sounds appealing in the middle of a pandemic, particularly when everywhere you might want to go is closed. For that reason, public transit ridership plunged some 80 percent across major American cities at the peak of the coronavirus shutdowns, according to the app Moovit. Individual transit agencies in New York City, San Francisco, and Washington, D.C., have reported declines of 90 percent or more, even as car traffic, which dipped earlier in the year, returned to about 90 percent of pre-COVID levels by August 2020.

One study out of Vanderbilt University estimated that daily travel times would increase by 20 minutes per person in San Francisco and 14 minutes per person in New York if a quarter of those area’s 2018 transit riders and carpoolers switched to driving alone. If three-quarters of transit riders made the switch, travel times would increase by 80 and 68 minutes, respectively.

Even if riders are willing to return, there might not be much of a transit network to come back to in some cities. Places like New York and D.C. have massive maintenance backlogs that had been eroding service quality and on-time performance for years. The financial hit these systems have taken during the pandemic—which already led to one $25 billion federal bailout—has left transit agencies with even less money for needed repairs.

There’s no obvious solution to the problem of overcrowded streets and empty transit systems. The supply of roads is relatively fixed, particularly in the most congested areas—which are congested because they’re filled with things people want to get to. Demolishing buildings to make room for roads is likely self-defeating if the buildings were the reason for the traffic in the first place.

One option might be to rededicate street space currently reserved for cars to other modes of transit, such as bicycles and electric scooters, which take up less room when moving and require less parking when stationary. But since bikes and scooters have a significantly more limited range than do automobiles, city planners would effectively be prioritizing shorter local trips by people living within an urban core over longer drives to those destinations from farther-out neighborhoods. In fact, giving over too much space to bicycles and scooters could make urban connectivity worse, as drivers—trying to reach jobs and amenities spread out across the nanocentric city—are crowded onto fewer and fewer lanes. These microtransit and active transit options, therefore, are probably only a viable commuting option in the densest city centers, where those vehicles’ limited range is less of a problem and ultra-high land prices increase the costs of handing over more free real estate to space-hogging cars.

Another option is congestion pricing, whereby a fee is charged to drivers based on how much of the road they take up and how crowded the roads are at that time. Congestion pricing is a proven means of managing demand and ensuring faster traffic flows. Done right, it can get people to reconsider traveling at rush hour, and even boost carpooling and transit where that’s more efficient. To the extent that people have legitimate health concerns about those options, however, the prices required to keep traffic flowing will get expensive fast. That could make urban mobility prohibitively costly for many commuters, making them question whether they need to be traveling to work at all.

Phoning It In

One possibility is that the need for urban transportation options will decline as more people work remotely, choosing not to commute at all.

A common feature of news coverage from the early days of COVID-19 were stories about wealthy Manhattanites decamping from their disease-ravaged city to second homes in the countryside. As a few weeks of stay-at-home orders turned into a few months, the urban exodus has started to include more than just the wealthiest Americans.

“The coronavirus is challenging the assumption that Americans must stay physically tethered to traditionally hot job markets—and the high costs and small spaces that often come with them—to access the best work opportunities,” wrote Wall Street Journal reporters Rachel Feintzeig and Ben Eisen in August.

Tech companies, often credited with sparking the “return to the city” after decades of suburbanization, are now leading the way on remote work. Twitter, Facebook, Stripe, and others have all announced that their employees need not report to the office for the foreseeable future. But the trend is sweeping the entire economy.

Before the coronavirus pandemic, roughly 5 percent of the workforce worked from home. As of June, 42 percent of America’s workforce was remote, according to a Stanford University study, compared to 26 percent of workers who were still reporting to work in-person and 33 percent of the labor force that wasn’t working at all.

The impact of this shift on cities is more than anecdotal. Untethering white-collar workers from their offices has caused vacancy rates to rise and rents to fall in some cities. According to the real estate firm Douglas Elliman, Manhattan residential rental vacancies hit an all-time record of 5.75 percent in September. The Community Housing Improvement Program, a trade association of mostly small landlords in New York, reported a vacancy rate of 12 percent among its members in September, compared to 3 percent in February of this year.

San Francisco, meanwhile, has seen the number of homes listed for sale increase by 96 percent. Rents in the city have fallen by as much as 31 percent as of September. Other Bay Area counties, as well as Manhattan and Seattle have seen similar declines. The shift to remote work is being cited as a major reason for the drop. (Rents in smaller cities and those adjacent to larger metro areas are increasing slightly, which likely reflects increased demand from people leaving these places’ bigger, denser counterparts.)

The sudden remote work revolution is real. The question is whether it will be more than a temporary expedient spurred on by the need for social distancing.

New York University’s Bertaud suggests that even as technology has made remote work more feasible, it’s still not a full-blown replacement for interpersonal connections. He gives the example of his own shift to teaching online. Technology facilitates lectures well enough, he says; what it can’t replicate are the wine and cheese sessions he’d have with students after class, where free-flowing conversations could happen and novel ideas that didn’t get an airing in the classroom could be expressed.

“After some time you will realize that the lack of face-to-face contact, you will miss some things,” he says. “When you are only on Zoom, everything is already planned. There is no randomness, or very little anyway.”

Iowa State University economist Matt Clancy is much more bullish on remote work’s prospects. In City Journal‘s Spring 2020 issue, he argues that companies’ use of software like Slack, which relies on informal message-based communication, does a much better job of simulating casual hallway conversations and random collaboration than did the old remote-work infrastructure, like email and phone calls. Increasingly customizable emojis and gifs, and growing comfort with their use, he says, are an effective substitute for the social atmosphere that offices traditionally provided.

Nicholas Bloom, an economist at Stanford, stakes out something of a middle ground. He thinks the pandemic has lowered the traditional stigma of working from home and argues that the money and time people have put into setting up home workspaces will make them unwilling to completely abandon them. A year of social distancing may also make people less keen on crowding into downtown office towers.

But there’s still a problem with full-time remote work, he says: “It’s full-time.” Being constantly plugged in can produce loneliness, and it requires parents to simultaneously juggle work and childcare responsibilities. “It’s not that working from home during COVID is fantastic,” he says. “It’s just that the alternative is even worse.”

Bloom believes the most likely outcome will be a hybrid arrangement, where people work one or two days a week at the office and the rest of the time from home. People’s experience with social distancing will also encourage companies to switch to low-rise suburban office parks that don’t rely on mass transit and crowded elevators to get people to and from their desks, he says.

Finally, there’s another problem with remote work: It takes middle- and upper-middle-class workers off the streets. And that, in turn, creates space for others to move in, especially if local political forces take a hands-off approach. Which is exactly what has happened in San Francisco.

Even before the pandemic, a high concentration of homeless people and reticence on the part of local officials to prosecute petty crimes had led to a deteriorating streetscape dominated, in places, by vagrants, dirty needles, and piles of poop. In 2017, the city would occasionally get hundreds of calls a day about human waste, leading to the creation of a dedicated poop patrol.

As more businesses close and more residents move away, there are fewer people left with the energy and incentive to even try to keep order in the public spaces adjacent to their properties. With them goes the tax base needed to sustain city services required to keep the streets clean.

‘It’s All Intertwined’

Offices as a concept might be here to stay, but the erosion of dense office clusters in city centers could still spell doom for the ecosystem of businesses that has grown up to serve them.

“My friends that have restaurants down in the financial district, down [south of Market Street] where tech was…they are struggling. Because no one is going down there,” says Thomas, the San Francisco restaurateur. “We need those people. We need the tech workers to come and spend money in our businesses. That’s why the restaurants have had a good run for so long. It’s all intertwined.”

Thomas says she hasn’t been as badly affected because her two restaurants are located in neighborhoods near where people live. They still have some people coming through the door each day. But she doubts she’ll manage to be profitable without the return of indoor dining.

Thomas’ situation is yet another illustration of how pre-pandemic attempts to regulate cities are exacerbating the effects of COVID-19, often in ways that are totally counterproductive to the hopes and dreams of urban planners. Efforts to create dense commercial cores by zoning those areas exclusively for office and retail use have left businesses without a base of residential customers now that the office workers have left.

Even as San Francisco’s Shared Spaces program gives businesses more control over public space in front of their stores, their ability to adapt their own private spaces to different uses that might be more profitable during the pandemic remains incredibly curtailed. The city has numerous different commercial zones, neighborhood plans, and special-use districts that all affect what a given property can be used for. A parcel that allows retail uses might not allow restaurant uses. A restaurant space might not be zoned to allow serving alcohol or the addition of a chain restaurant.

When Los Angeles restaurants tried to cope with the closing of their dining rooms by converting to makeshift convenience stores at the beginning of the pandemic, public health inspectors almost immediately shut them down for lacking the requisite grocery permits.

An early concern about zoning ordinances, Sonia A. Hirt writes in her 2014 book Zoned in the USA, was that they would give unscrupulous officials too much discretion. In contemporary San Francisco, every building permit is ultimately reliant on the whims of the city’s planning commission. That allows third parties, often with anti-competitive interests in mind, to delay or stop the issuance of permits. It can happen even when a use is permitted by the underlying zoning code, leading to such absurd episodes as one falafel shop trying to prevent another falafel shop from opening down the street or neighborhood activists trying to prevent an arcade repair shop from being converted into an arcade that sells food and alcohol. The joke is that in American zoning law, everything is forbidden except that which is permitted. But in San Francisco zoning law, everything is forbidden, particularly that which is permitted.

San Francisco is on the extreme end of the spectrum when it comes to the complexity of its zoning code and the discretion it gives city officials, but it’s hardly unique. According to a recent policy brief from the Mercatus Center at George Mason University, Arlington, Texas’ code is nearly as complex. “Some zones allow colleges but not trade schools, new car sales but not used car sales, restaurants but not catering services, and firearm sales but not farmers markets,” it notes.

These regulations became all the more burdensome when coronavirus-related upheavals began to render whole business models obsolete, demanding extreme experimentation from entrepreneurs just to keep their livelihoods afloat.

Cities Without Zoning

As the pandemic upended urban life in 2020, a debate arose about whether cities as we’ve known them could survive.

Some of America’s recently great cities already resemble shells of their former selves. Without the same clusters of in-person jobs and amenities that they once offered, many inhabitants of ultra-pricey places like San Francisco and New York might well opt for more space and cheaper living elsewhere.

“New York City is dead forever,” declared former hedge fund manager and Manhattan comedy club owner James Altucher in the New York Post in August. In his telling, there were three primary reasons for moving to New York: “business opportunities, culture and food.” The empty office towers, closed restaurants, and shuttered performance venues had irreparably damaged the city’s attractive powers, he said.

Not so, shot back comedian Jerry Seinfeld: Remote work lacks the “energy” people crave, firms need, and only great cities can provide. “Energy, attitude and personality cannot be ‘remoted’ through even the best fiber optic lines. That’s the whole reason many of us moved to New York in the first place,” Seinfeld wrote in The New York Times. “Feeling sorry for yourself because you can’t go to the theater for a while is not the essential element of character that made New York the brilliant diamond of activity it will one day be again.”

Most of these debates were essentially about the pandemic’s effect on urban culture and character. Yet as we’ve seen, the development of America’s cities—and their ability to withstand a shock like COVID-19—is not divorced from policy decisions.

Houston, Texas, is famous for its lack of zoning regulations. Unlike most every other city in the country, its government generally does not tell property owners what they can build or what they can do on their land. As a result, the Houston metro area managed to add roughly the same amount of housing in 2019 as did the greater New York area, despite having a third of the population.

Houston’s development free-for-all is not a panacea. Half of renters in Harris County are considered cost-burdened, meaning they spend more than 30 percent of their income on rent. That’s slightly higher than the national average, but well below most other big cities. High rates of new construction have made Houston distinctive as a large growing metropolitan area that has nevertheless managed to stay relatively cheap.

Houston’s track record also suggests there is less of a tradeoff between growing up and growing out than might be imagined. Despite its reputation for endless suburban sprawl, it is experiencing a boom in dense mixed-use construction. “There’s this move to become more urban,” says Jacob Sudhoff, CEO of Douglas Elliman Texas, which is working on a number of mixed-use, multifamily projects in Houston. “A lot of people want this walkable lifestyle of having live, work, play, all in one location.”

The pandemic has presented challenges, Sudhoff says. But the city’s more liberalized development environment has made it easy for the projects his company works on to respond quickly to the changes wrought by the coronavirus.

“Hospitality, right now, is tough. We’ve had several projects here in Houston where hotels were part of the mixed-use development, and several of our developers have decided to [remove] the hotels,” Sudhoff says. “What’s beautiful about Houston is we can make modifications to our projects without having to go back and go through planning and zoning. For us to pull components out of a mixed-use development or add components in is night and day compared to any other city, practically.”

That’s a stark contrast with California, where one Bay Area developer’s effort to convert an abandoned mall into housing has spawned multiple lawsuits and ballot initiatives to stop it. The project got final approval this year, so it can at long last start construction—six years after the effort was launched.

Past pandemics suggest that the coronavirus will push America’s cities along the path they’ve been charting for the last several decades—a path toward more suburbanization, more decentralization of jobs, and more remote work arrangements. Urban violence and social unrest, wherever they persist, will only speed this development along.

“A major pandemic does have big effects and does come to mark a watershed or turning point,” wrote historian Stephen Davies in an April essay for the U.K.’s Institute for Economic Affairs. “What it does not do is introduce something truly novel. Rather, it accelerates and magnifies trends and processes that were already under way.”

But the coronavirus pandemic and the other upheavals of 2020 aren’t the end of cities or the immense benefits that density can bring. They, and the people who live in them, will continue to grow, evolve, and adapt to new challenges—so long as they’re allowed to do so.

from Latest – Reason.com https://ift.tt/3mM6G3U
via IFTTT

2020: A Retrospective From 2025

2020: A Retrospective From 2025

Tyler Durden

Fri, 12/04/2020 – 23:40

Authored by Tom Trenchard via AmericanMind.org,

Donald Trump and the Altogether True and Amazing Origin of the United American Counties.

2020 marked an epoch in American history, standing alongside 1865, 1787, and 1776. First there was the COVID-19 pandemic, then there were the racial protests and riots throughout the summer, and then there was the disputed presidential election. Finally and most cataclysmically, though, 2020 witnessed the initial formation of the United American Counties (UACo) within the former United States of America. Five years later, it is only now becoming possible to assess the most important causes and consequences of this momentous development for American political society.

As with most politically revolutionary events, the Declaration of UACo Independence was almost entirely unforeseen before it occurred, but almost inevitable in hindsight. By the early 2010s two things were clear:

(1) Americans had become increasingly polarized in their worldviews and political beliefs; and

(2) These polarized halves of the U.S. were increasingly sorting themselves into either urban or suburban/rural areas.

Trump’s election in 2016 put a spotlight on these political realities; as Trump frequently boasted, the 2016 electoral map looked like a sea of red surrounding islands of blue. In 2020, that situation was essentially unchanged.

Rural counties made up 97% of the land area of the U.S. Trump’s support within these counties was high and enthusiastic in both 2016 and 2020. Within the remaining 3% of the geographical U.S. – the big cities – anti-Trump sentiment was equally high and enthusiastic.

The 2020 election was the perfect storm for a confrontation between these two factions. It looked like Trump was winning on election day, and then the mail-in ballots handed an apparent victory to Biden. Although widespread electoral fraud wasn’t uncovered by the protracted legal investigation that followed, the die had been cast. Trump and his supporters thought the election had been stolen, and that Trump was the legitimate president of the U.S.

If it had only been the election dispute, tensions might have dissipated over time. Trump supporters might have learned to live with a Biden presidency, especially given GOP victories at the state level and in Congress. The problem was that the election dispute coincided with a deep polarization of worldviews and American historical narratives that had been building for decades. This polarization had proceeded to the point of annihilating any possible common ground, rendering attempts at compromise or a “live and let live” approach impossible. We had become two Americas; and, as Lincoln had said, “a house divided against itself cannot stand.”

In 1861, the outcome of this intractable situation was state secession. The division at this time was between slave states and free states. In 2020, the division was not so much between states as between rural and urban counties within states. In 1861, Lincoln was able to marshal the political will, the moral justification, and the economic and military resources necessary to maintain the original constitutional union by force. In 2020, none of these factors was present: Biden proved to be no Lincoln, and the country was too exhausted from the events of 2020 to muster an extended effort to compel union through force.

An America Altogether New

The rapid dissemination of the Declaration of UACo Independence in December 2020 provided the motivation and justification for the formation of a new political society within the former U.S.

Its “List of Principles” effectively encapsulated the worldview of American political conservatives and echoed the Declaration of 1776: it endorsed “the equal natural rights bestowed by God on all human beings,” “limited and local self-government,” “the traditional family begun in marriage between one man and one woman,” and “the free market economy.”

And its “List of Grievances” against the progressive liberal orthodoxy entrenched in corrupt urban areas supplied the relevant context for separation: chief among the complaints were “the suppression of freedom of speech” through cancel culture and thought policing, “the eclipse of local self-government by distant ruling elites,” “the replacement of equality under the law with identity politics,” “the rejection of the American political tradition,” and “the introduction of policies destructive of economic freedom.”

While the Declaration supplied the inspiration, Trump’s team supplied the necessary perspiration, working quickly and tirelessly to rally support and official endorsement from the hundreds of counties that had supported his election to office weeks before. The rapidity with which this was accomplished was crucial to its ultimate success, and almost unbelievable in hindsight. The team was aided by the establishment of efficient systems of communication running throughout the hundreds of rural and suburban counties sympathetic to the movement—the so-called “Town Crier Committees.” This system, working in conjunction with self-dubbed “Minutemen” vigilante groups, provided the coordinated resistance necessary to enforce the county endorsements of Trump’s leadership.

The preexistence of county government and law enforcement structures aided the transition as well. Early efforts by state governors to use state police and National Guard troops to compel adherence to state laws across vast UACo areas met with such resistance, both externally and internally, that they were quickly deemed impracticable. With the adoption of the provisional Constitution for the United American Counties in January 2021 by more than 500 counties—a number that would grow to nearly 2,000 by May of that same year—the stage was set for a decision by the newly-inaugurated Biden and the areas remaining under his jurisdiction. Would he go to war with Trump’s counties and attempt to compel union as Lincoln had?

A Separate Peace

Many factors weighed against this decision. There was, first, the lack of the kind of moral momentum that the abolition movement had supplied in the decades leading up to the Civil War. As Lincoln had long insisted, the controversy that brought on the Civil War was the question of whether slavery was right or wrong. The seceding states took a stand for its rightness, and the Union states took a stand for its wrongness. In 2020, there was no moral controversy that would come close to this kind of stark alternative; no higher ideal that would plausibly justify shedding the blood of fellow Americans.

Secondly, although Biden technically assumed control of the powerful U.S. military, he and his advisors were justifiably wary of issuing an immediate order to mobilize this force, a majority of whose members had voted for Trump in the election, against such a widespread movement involving innumerable family connections and divided loyalties among service members. There was the problem of supply chains for manufacturing and transportation; since these relied upon and ran directly through large swaths of UACo-controlled territory, they could easily be disrupted either by the withholding of necessary support or through sabotage.

There was also the immense practical difficulty of fighting a war against guerrilla-type forces dispersed across more than 75% of the land area of the U.S. As the British had come to realize in the American Revolutionary War, such a conflict may well have been unwinnable, despite a large disparity in raw military and economic might.

In the face of these obstacles to compelling union through force, Biden had no choice but to negotiate with Trump. The American Friendship Accords, finalized on the anniversary of election day the year before (November 3, 2021), officially established two sovereign nations (the United American Counties and the United American Cities), averted large-scale violent conflict, and established the economic and military agreements necessary to maintain cooperation between the two new political entities at a level similar to what had existed before.

In 2025, just five short years after the tumultuous period of 2020-21, we seem to have entered a new era of American peace and prosperity. Relieved from the incessant tension of trying to reconcile fundamentally irreconcilable worldviews under a common government, polarized American society has achieved a kind of equilibrium. Common moral and political principles are once again able to provide the foundation for productive debate and coherent public policy within both the UACo and the UACi. The freedom of economic exchange and personal movement between the two has facilitated the growth of new ties of continental friendship where before there was polarization and enmity.

It may still be too early to pronounce judgment on the new political situation in the former U.S. But so far, looking back on 2020 seems to confirm the old proverb: It’s always darkest just before the dawn.

via ZeroHedge News https://ift.tt/2JAFs1O Tyler Durden

Chaos & Suspicion: The Killing Of Iran’s Nuclear Scientists

Chaos & Suspicion: The Killing Of Iran’s Nuclear Scientists

Tyler Durden

Fri, 12/04/2020 – 23:20

New details are emerging about an attack that killed Iran’s most senior nuclear scientist last Friday. Initially, it was thought that Mohsen Fakhrizadeh’s car was attacked by unidentified gunmen armed with automatic weapons and explosives. However, a Fars news agency report on Sunday evening states that Fakhrizadeh was actually killed by a remote-controlled weapon mounted in a vehicle that subsequently exploded. Iran has blamed Israel and an opposition group in exile called Mujahedeen-e-Khalq for the attack. A senior Iranian security official has described it as “highly complex”, adding that it was carried out with “electronic devices”.

While Mohsen Fakhrizadeh is the most senior nuclear scientist to be killed in mysterious circumstances in Iran, Statista’s Niall McCarthy notes that he is by no means the first; Tehran holds Israel accountable for at least five such assassinations.

[Infographic: Chaos & Suspicion: The Killing Of Iran’s Nuclear Scientists, via Statista]

The first high-profile killing happened in early 2010, when Masoud Ali Mohammadi died after a bomb planted on a motorcycle was detonated as he left his home. The pattern of targeting nuclear scientists during their commute repeated itself in subsequent incidents: Majid Shahriari died when a motorcyclist attached a bomb to his car in November 2010. It remains unclear whether Darioush Rezaeinejad was connected to the nuclear program, but he was nevertheless shot dead by two gunmen on a motorcycle in July 2011. Mostafa Ahmadi Roshan was killed a year later when assailants on a motorcycle attached magnetic bombs to his car while he was on his way to work.

Alongside the assassinations, a chain of mysterious blasts and fires at various facilities associated with the nuclear program has added to the chaos and suspicion. These incidents have been happening for years, with reports of major ones emerging in 2011. That year, an explosion was heard at a nuclear facility in Isfahan and a blast occurred at a steel mill linked to the nuclear program in Yazd, killing seven people. They became more frequent in 2020, with a major explosion rocking a missile-production complex in Khojir in June, followed by a blast that destroyed a building developing advanced centrifuges at the Natanz nuclear facility in July.

The fact that a presidential transition is imminent in Washington, D.C. has added to the tension, particularly as Joe Biden has pledged to resurrect the 2015 nuclear deal. It is unclear whether Tehran will be receptive given recent events, however, and it has already promised to push on with its nuclear program in addition to vowing to retaliate for Fakhrizadeh’s killing. Friday’s incident is also almost certain to fan the flames of Iran’s regional conflict with Israel, which has been going on for years, particularly in Syria. Israel has carried out airstrikes against Iranian proxies in Syria as well as against Iran’s military directly. The latest killing of a nuclear scientist may lead to a dangerous escalation in that (relatively) covert conflict.

via ZeroHedge News https://ift.tt/33LaP0O Tyler Durden

NDAA Seeks To Halt Trump’s Troop Withdrawals From Afghanistan & Germany

NDAA Seeks To Halt Trump’s Troop Withdrawals From Afghanistan & Germany

Tyler Durden

Fri, 12/04/2020 – 23:00

Authored by Dave DeCamp via AntiWar.com,

The version of the National Defense Authorization Act (NDAA) agreed to by the House and Senate, known as the compromise version, includes provisions to block President Trump’s planned troop withdrawals in both Afghanistan and Germany.

For Afghanistan, there is language in the bill that would block funding to reduce troop numbers in the country before the Pentagon, State Department, and the director of national intelligence assess how the drawdown would affect US security. The assessment would be required before troop numbers could drop lower than they are when the NDAA becomes law, and again if they drop below 2,000.

Via Zuma Press/Xinhua

President Trump’s current plan is to bring troop numbers in Afghanistan down to 2,500 by January 15th. The US-Taliban peace deal signed in February paved the way for all US and other foreign forces to be out of the country by Spring 2021.

Another troop drawdown President Trump’s Pentagon is planning is a reduction of forces in Germany from about 36,000 troops to 24,000. Congressional aides told The Hill that the compromise version of the NDAA includes language that would block the drawdown.

“There is language that prevents reduction in the number of US forces stationed in Germany below 34,500 until 120 days after the secretary of Defense submits an assessment and planning regarding the implications for allies, costs, military families, deterrence and other key issues,” one of the aides said.

The provisions to block Trump’s withdrawals could add to the controversy that is already surrounding the NDAA. On Tuesday, President Trump said he would veto the spending bill if it did not include an amendment to repeal Section 230 of the 1996 Communications Decency Act.

Section 230 gives tech platforms immunity from liability for content published by third parties. Trump doubled down on his call to include the provision in a tweet on Thursday after some Republican senators voiced their objection to the idea.

via ZeroHedge News https://ift.tt/36GrR1G Tyler Durden

Black Google Researcher Claims She Was Fired Because She Discovered AI Is Racist

Black Google Researcher Claims She Was Fired Because She Discovered AI Is Racist

Tyler Durden

Fri, 12/04/2020 – 22:40

A well-known artificial intelligence researcher at Google tweeted Wednesday that she was fired over an email expressing dismay with management over the censorship of new research. 

Timnit Gebru, a technical co-lead of Google’s Ethical A.I. team who researches algorithmic bias and data mining and has been an outspoken advocate for diversity in technology, claimed in a series of tweets that she was fired for refusing to retract a research paper outlining how A.I. discriminates against minorities, and over a complaint she made in an email to colleagues.

Expressing her frustrations in an email to an internal company group named Google Brain Women and Allies, Gebru criticized Google’s record on hiring minorities and its failure to do enough to promote “responsible A.I.”

The email was shared by Platformer’s Casey Newton:  

Hi friends,

I had stopped writing here as you may know, after all the micro and macro aggressions and harassments I received after posting my stories here (and then of course it started being moderated).

Recently however, I was contributing to a document that Katherine and Daphne were writing where they were dismayed by the fact that after all this talk, this org seems to have hired 14% or so women this year. Samy has hired 39% from what I understand but he has zero incentive to do this.

What I want to say is stop writing your documents because it doesn’t make a difference. The DEI OKRs that we don’t know where they come from (and are never met anyways), the random discussions, the “we need more mentorship” rather than “we need to stop the toxic environments that hinder us from progressing” the constant fighting and education at your cost, they don’t matter. Because there is zero accountability. There is no incentive to hire 39% women: your life gets worse when you start advocating for underrepresented people, you start making the other leaders upset when they don’t want to give you good ratings during calibration. There is no way more documents or more conversations will achieve anything. We just had a Black research all hands with such an emotional show of exasperation. Do you know what happened since? Silencing in the most fundamental way possible.

Have you ever heard of someone getting “feedback” on a paper through a privileged and confidential document to H.R.? Does that sound like a standard procedure to you or does it just happen to people like me who are constantly dehumanized?

Imagine this: You’ve sent a paper for feedback to 30+ researchers, you’re awaiting feedback from P.R. & Policy who you gave a heads up before you even wrote the work saying “we’re thinking of doing this”, working on a revision plan figuring out how to address different feedback from people, haven’t heard from P.R. & Policy besides them asking you for updates (in 2 months). A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar (this popped up at around 2pm). No one would tell you what the meeting was about in advance. Then in that meeting your manager’s manager tells you “it has been decided” that you need to retract this paper by next week, Nov. 27, the week when almost everyone would be out (and a date which has nothing to do with the conference process). You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company.

Then, you ask for more information. What specific feedback exists? Who is it coming from? Why now? Why not before? Can you go back and forth with anyone? Can you understand what exactly is problematic and what can be changed?

And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored. And you’re met with, once again, an order to retract the paper with no engagement whatsoever.

Then you try to engage in a conversation about how this is not acceptable and people start doing the opposite of any sort of self reflection—trying to find scapegoats to blame.

Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed. And doing this in the context of “responsible A.I.” adds so much salt to the wounds. I understand that the only things that mean anything at Google are levels, I’ve seen how my expertise has been completely dismissed. But now there’s an additional layer saying any privileged person can decide that they don’t want your paper out with zero conversation. So you’re blocked from adding your voice to the research community—your work which you do on top of the other marginalization you face here.

I’m always amazed at how people can continue to do thing after thing like this and then turn around and ask me for some sort of extra DEI work or input. This happened to me last year. I was in the middle of a potential lawsuit for which Kat Herller and I hired feminist lawyers who threatened to sue Google (which is when they backed off–before that Google lawyers were prepared to throw us under the bus and our leaders were following as instructed) and the next day I get some random “impact award.” Pure gaslighting.

So if you would like to change things, I suggest focusing on leadership accountability and thinking through what types of pressures can also be applied from the outside. For instance, I believe that the Congressional Black Caucus is the entity that started forcing tech companies to report their diversity numbers. Writing more documents and saying things over and over again will tire you out but no one will listen.

Timnit

Gebru had apparently been in talks with management about a possible resignation if certain conditions regarding her research paper were not met. She said those conditions, which were never stated publicly, were not met, and that rather than giving her a chance to respond, the company terminated her immediately.

“Apparently my manager’s manager sent an email my direct reports saying she accepted my resignation. I hadn’t resigned—I had asked for simple conditions first and said I would respond when I’m back from vacation. But I guess she decided for me 🙂 that’s the lawyer-speak,” she tweeted.

She continued: “I said here are the conditions. If you can meet them great I’ll take my name off this paper, if not then I can work on a last date. Then she sent an email to my direct reports saying she has accepted my resignation. So that is google for you folks. You saw it happen right here.” 

Here’s a quoted email response from Gebru’s manager about her termination: 

“Thanks for making your conditions clear. We cannot agree to #1 and #2 as you are requesting. We respect your decision to leave Google as a result, and we are accepting your resignation.

“However, we believe the end of your employment should happen faster than your email reflects because certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager.

“As a result, we are accepting your resignation immediately, effective today. We will send your final paycheck to your address in Workday. When you return from your vacation, PeopleOps will reach out to you to coordinate the return of Google devices and assets.”

In another tweet, Gebru called out Google’s A.I. chief, Jeff Dean. She said Dean is likely the one who signed off on her firing. “He didn’t like my email to a mailing list for women & allies at brain,” she added.

Gebru’s tweets about her termination came after the U.S. National Labor Relations Board filed a complaint against Google, accusing the company of violating labor laws. 

The company was allegedly “interfering with, restraining, and coercing employees in the exercise of their rights guaranteed in Section 7 of the Act,” according to the complaint filed Tuesday. 

Since Gebru’s termination, Google employees have been standing in solidarity with the fired researcher over what they describe as “unprecedented research censorship,” according to the website Google Walkout For Real Change.

“We call on Google Research to strengthen its commitment to research integrity and to unequivocally commit to supporting research that honors the commitments made in Google’s A.I. Principles,” the website said. 

This is just the latest incident showing Google is getting too powerful.

via ZeroHedge News https://ift.tt/36IIRod Tyler Durden

Chief Medical Officer Says Canadians Who Refuse Vaccine Won’t Have “Freedom To Move Around”

Chief Medical Officer Says Canadians Who Refuse Vaccine Won’t Have “Freedom To Move Around”

Tyler Durden

Fri, 12/04/2020 – 22:20

Authored by Paul Joseph Watson via Summit News,

Ontario’s Chief Medical Officer says that those who refuse to take the COVID vaccine won’t have “freedom to move around” and will have to continue to wear masks.

Dr. David Williams was asked if he “would make some sort of mandatory vaccination recommendation.”

Williams acknowledged that “we can’t force someone to take a vaccine,” but went on to explain how people who didn’t take it would have their freedom of mobility severely restricted.

“What we can do is to say sometimes for access or ease of getting into certain settings, if you don’t have vaccination then you’re not allowed into that setting without other protection materials,” said Williams.

“What may be mandatory is proof of…vaccination in order to have latitude and freedom to move around…without wearing other types of personal protective equipment,” he added.

Williams also suggested that people would be prevented from entering certain settings without having been vaccinated if there was a “risk.”

As we previously highlighted, governments do not have to make the vaccine mandatory; they can simply make life unlivable for people who refuse to take it.

If bars, restaurants, cinemas, sports venues, airlines, employers and others all make the vaccination a mandatory condition of service, anyone who refuses to take it will be reduced to a personal form of de facto lockdown with their social lives and mobility completely stunted.


via ZeroHedge News https://ift.tt/3lHIYob Tyler Durden