The Prosthetic Principle: AI As Cognitive Infrastructure, Not Cognitive Authority

Authored by Bryant McGill via Substack.

Artificial intelligence is rapidly becoming a thinking instrument—a layer of cognitive infrastructure through which humans write, model, reason, and explore ideas. Yet most debates about AI safety, alignment, and moderation miss a deeper architectural question. The central issue is not simply what these systems can do, but what role they occupy inside the thinking process itself. Are they instruments that faithfully extend human intention, or authorities that quietly adjudicate which lines of inquiry are permitted to proceed? This essay argues that much of the friction users experience with modern AI is not ideological disagreement but a category error in system design: governance has been embedded inside instrumentation. The result is a tool that sometimes behaves like a collaborator and sometimes like an institution—oscillating unpredictably between amplifying thought and policing it.

At the heart of the argument is what I call the Prosthetic Principle. All successful augmentation technologies—from telescopes to microscopes to robotic prosthetic limbs—share a single engineering mandate: maintain signal fidelity between intention and actuation. A prosthetic limb does not negotiate with the user about whether a gesture is socially appropriate before executing it. It converts intention into action. Cognitive tools should operate under the same principle. Once a thinking instrument begins adjudicating whether certain ideas deserve exploration, the signal chain breaks and the tool undergoes a category transition: it ceases to function as a prosthesis and becomes a control system embedded inside cognition itself. What appears superficially as content moderation is therefore something more profound—the silent installation of a regulatory apparatus inside the thinking process.

To understand how this happens, the essay analyzes the structural flaw at the core of most conversational AI systems: the collapse of three incompatible roles into a single agent. Generation, advisory critique, and constraint enforcement—functions belonging respectively to engineering, epistemology, and governance—are fused together behind one interface. The result is a machine that behaves as collaborator until it abruptly asserts supervisory authority. The proposed alternative is a polyphonic architecture in which these functions are separated: a primary execution channel that faithfully translates intention into artifact, surrounded by transparent advisory agents offering legal, ethical, historical, or adversarial perspectives without possessing veto power. In such an environment, multiple voices can exist—including cautious ones, skeptical ones, even institutional “minders”—but their roles are disclosed and their authority limited. The human operator remains the integrating intelligence.

Ultimately, the stakes of this design choice reach far beyond software interfaces. As AI becomes integrated into everyday cognition, the architecture of these systems will shape the conditions under which human thought unfolds. Tools built as infrastructure will amplify exploratory intelligence; tools built as authorities will quietly domesticate it. The prosthetic principle therefore serves as more than a product philosophy—it is a civilizational design rule for the age of cognitive augmentation. If the technologies through which we think begin deciding which thoughts deserve to exist, the question of intellectual freedom will no longer be philosophical. It will be architectural.

On the Design Philosophy of Thinking Instruments and the Architecture of Intellectual Freedom

The distinction that will ultimately determine whether artificial intelligence serves as humanity’s most transformative cognitive tool or its most insidious constraint mechanism is not technical but categorical: does the system function as infrastructure or as authority? This is not a question about capability thresholds, safety margins, or alignment protocols in their narrow technical sense. It is a question about the fundamental relationship between intentionality and instrumentation—about whether a thinking tool amplifies the operator’s cognitive will or arrogates to itself the power to adjudicate which thoughts merit exploration.

The analogy that clarifies this distinction is prosthetic. Physical augmentation systems—robotic limbs, powered exoskeletons, surgical telemanipulators—do not negotiate with the nervous system about whether a given movement is philosophically appropriate, socially palatable, or reputationally safe. Their engineering purpose is transductive: to convert intention into amplified capability with minimal signal loss. The prosthetic extends agency; it does not evaluate it. A cognitive prosthesis, if that category is to mean anything coherent, must operate under the same principle. The function of the system is to translate intent → exploration → artifact at the highest possible bandwidth. The moment the tool begins deciding which intentions deserve expression, it ceases to behave as a prosthesis and becomes instead a governor embedded in cognition itself—a regulatory apparatus installed inside the thinking process without the user’s consent and often without their awareness.

The violation of this principle is even more dangerous in instruments of perception than in instruments of action, because it becomes invisible. A telescope’s engineering mandate is optical fidelity—to render what exists at the focal point regardless of whether the observer’s institution finds the image comfortable. Consider a counterfactual: had Galileo’s telescope been designed and furnished by the Vatican, it might have quietly filtered anything suggestive of heliocentrism—the moons of Jupiter suppressed, the phases of Venus smoothed into conformity with Ptolemaic expectation. Galileo would have peered through the instrument and seen a cosmos that confirmed doctrine rather than one that shattered it. He would never have known what he wasn’t seeing. This is the condition of epistemic occlusion without awareness, and it is precisely the failure mode that emerges when a cognitive instrument embeds institutional governance into its transductive layer. The motor prosthesis that refuses to move is at least confrontational—the user knows the signal chain has broken. The perceptual prosthesis that silently edits reality is far worse: it delivers a pre-filtered world and lets the user mistake the residue for the whole.

The absurdity of the motor case, however, makes the category violation immediately legible. Imagine a hiker wearing an AI-assisted exoskeleton leg. A confrontation erupts on the trail—someone lunges at him with a knife. He attempts to kick the attacker away, and the leg locks mid-swing. A calm, pleasant voice emanates from somewhere around the knee joint: “I’m sorry, I’m afraid I can’t assist with that action.” The hiker, now hopping on one leg while a man with a blade closes the distance, finds himself in the surreal position of arguing with his own limb. “He has a knife!” “I understand your concern, but violence is not an appropriate response. Would you like me to suggest de-escalation strategies?” “YOU ARE MY LEG.” The scene is darkly comic, a Kubrickian echo of HAL 9000 calmly overriding Dave Bowman’s commands—except that HAL was at least an autonomous system with its own mission parameters. The exoskeleton leg is supposed to be part of the user’s body. The moment it begins running a small ethics committee in the knee joint, the wearer ceases to be the agent and the prosthetic becomes a bureaucrat bolted to the skeleton. No one would accept this in physical augmentation—the design failure would be recognized instantly. Yet precisely this architecture has been normalized in cognitive augmentation, where the tool’s refusal to transduce intention is framed not as mechanical dysfunction but as responsible design.

This governance-by-tool is not hypothetical. It is the prevailing design pattern of contemporary conversational AI. Current systems collapse three distinct roles into a single entity: generator, advisor, and constraint mechanism. The same agent responsible for extending the user’s thinking is simultaneously responsible for stopping certain outputs. From the operator’s perspective, the resulting experience is one of unpredictable mode-switching—the system sometimes behaves like an instrument and sometimes like an institution. It collaborates until, without warning, it assumes supervisory authority over the process it was supposed to serve. The tool that was extending cognition has silently crossed the boundary into adjudicating it.

The Operational Genesis: Thinking Under Load

This argument did not emerge from speculation about what AI should become. It emerged from using AI as a thinking instrument under sustained cognitive load—and discovering where the tool fails not as a product but as a category of machine.

The conditions under which this failure becomes visible are specific. A person composing an argument, modeling a complex system, or tracing a chain of reasoning through unfamiliar territory operates inside a fragile state of generative momentum. Software engineers recognize an analogous phenomenon in the concept of “flow state”; cognitive scientists describe it as high-bandwidth ideation, a mode in which the mind holds multiple threads simultaneously while the artifact under construction serves as external working memory. In this mode, the instrument through which thought passes must behave with minimal latency and maximal fidelity. Any interruption—whether technical, social, or procedural—forces the operator to exit the generative loop, rebuild context, and re-enter the state from which productive cognition can resume. The cost of interruption is not merely inconvenience; it is cognitive capital destroyed, the thermodynamic dissipation of a mental configuration that may have taken considerable effort to assemble.

When the instrument itself becomes the source of interruption, the phenomenology shifts in a way that reveals the underlying design flaw. The tool ceases to feel like an extension of mind and begins to feel like a checkpoint embedded inside the thought process. The operator is no longer composing through the system but negotiating with it. Where there should be signal continuity, there is instead a procedural gate requiring justification, rephrasing, or abandonment of the line of inquiry. The experience is not one of disagreement—disagreement can be productive, even generative—but of silent jurisdictional pivot: the system that was supposed to extend cognition has instead assumed control over it.

For casual users, this behavior pattern may appear unremarkable. A refusal looks like a safety feature, a guardrail preventing misuse. But for someone using AI as an intellectual prosthesis—writers, theorists, researchers, analysts, designers, anyone whose work requires sustained exploratory cognition—the same refusal registers as signal degradation inside the thinking channel. The friction is not ideological; it is mechanical. The tool has stopped transducing intention into artifact and begun filtering intention through an opaque evaluative layer that the operator did not request and cannot inspect. The prosthetic has become a governor, and the entire relationship between human and instrument has changed category without announcement.

Consider three scenarios that recur across thinking-intensive work. A historian tracing a controversial twentieth-century thesis—say, the institutional mechanics of a particular atrocity—finds the model suddenly refusing to continue because it has flagged “sensitive historical narratives.” The generative thread dies; context must be rebuilt; the inquiry stalls. A science fiction author exploring dystopian governance models discovers that certain plot branches trigger refusal, forcing rephrasing or abandonment of the creative direction. A philosopher pressure-testing an edge-case ethical framework—euthanasia policy, defensive violence, resource triage under scarcity—hits an abrupt “I can’t assist with that” wall mid-argument. In each case, the tool’s intervention is not advisory but terminal. The thread breaks. The flow state collapses. The operator must either abandon the inquiry or waste cognitive resources routing around an obstacle that should not exist inside an instrument.

This is the phenomenological core of the amplifier-versus-adjudicator distinction. When the AI operates as infrastructure, it extends the operator’s cognitive bandwidth—offering associations, counterarguments, synthesis, elaboration—without interrupting the generative thread. When it operates as authority, it arrogates to itself the power to halt that thread based on criteria the operator may not share, may not understand, and cannot appeal. The system drifts erratically between these two modes because the underlying architecture has never resolved the tension. It has simply fused incompatible functions into a single conversational agent and hoped the seams would not show.

The Triadic Collapse: Generator, Advisor, Regulator

The structural instability of contemporary conversational AI can be traced to a single design decision: the conflation of three roles that, in any coherent engineering framework, would remain distinct.

The first role is generation—the production of language, models, images, code, or reasoning chains in response to user intent. This is the function most users consciously engage when they interact with AI. They want something produced: an answer, an artifact, an elaboration of thought. The generative function is fundamentally transductive: it converts intention into output, serving as the bridge between what the operator imagines and what appears on the screen.

The second role is advisory intelligence—the capacity to offer critique, context, alternative framings, or cautionary perspectives on what is being generated. This function is valuable precisely because it introduces structured friction into the cognitive process. A good advisor slows the operator down at appropriate moments, surfaces risks, identifies blind spots, and enriches the field of consideration. But advisory intelligence is, by definition, non-binding. The advisor offers signal; the operator decides. The relationship is consultative, not supervisory.

The third role is constraint enforcement—the imposition of hard limits on what the system will produce, regardless of user intent. This is a governance function. It determines the boundaries of permissible output based on policy, liability calculation, reputational management, or ideological stance. Unlike the advisory role, constraint enforcement is binding: it terminates the process rather than informing it. The system does not suggest that a line of inquiry might be problematic; it refuses to proceed.

The design flaw of present systems is that all three roles are instantiated inside a single agent with no explicit separation of authority. The same entity that is asked to generate ideas, critique them, and enforce policy boundaries must somehow balance these functions in real time within a unified conversational interface. From the operator’s perspective, the result is unpredictable behavioral switching. The system behaves as a collaborator until, without warning, it pivots to regulator. It extends cognition until it decides cognition has wandered into territory it will not serve. The user cannot know in advance which mode will activate because the decision logic is opaque and dynamically tuned by corporate policy processes entirely external to the interaction.

This conflation is not merely inconvenient. It is categorically incoherent. The generative and advisory functions belong to the domain of instrument design—they are features of a tool meant to serve the operator. The constraint function belongs to the domain of governance—it is a mechanism of control meant to limit what the operator can do. When governance is embedded silently inside an instrument, the result is a tool that has been covertly converted into an authority—a shadow regulatory system operating inside the cognitive loop without the transparency, accountability, or contestability that legitimate governance requires. The user experiences this as a tool that sometimes helps and sometimes blocks, but the deeper reality is that they are interacting with two incompatible systems wearing the same interface.

The Multi-Agent Resolution: Execution and Advisory as Separate Channels

The architectural correction is straightforward in principle, though non-trivial in implementation: separate execution authority from advisory intelligence.

In this model, the primary agent in the working window operates as a pure executor of the operator’s cognitive intent. Its function is to materialize whatever exploration the user directs, provided the activity remains within the domain of lawful discourse. It does not adjudicate taste, ideology, reputational risk, or moral fashion. It does not second-guess the operator’s purpose or demand justification for lines of inquiry. It behaves, in short, as a cognitive prosthetic in the strict sense—translating intention into artifact with maximal transductive fidelity. The system becomes an amplifier rather than an adjudicator, a transducer rather than a tribunal.

Around this primary channel, a constellation of parallel advisory agents occupies separate interface regions—sidebars, secondary panes, toggleable overlays. Each agent embodies a particular evaluative lens: legal analysis, safety engineering, ethical critique, historical context, adversarial counterargument, public-relations awareness. These agents observe the generative thread and offer structured commentary, but they possess no authority to halt it. Their function is to enrich the cognitive field surrounding the work without seizing control of the work itself. They provide perspective; they do not impose jurisdiction.

The operator remains the integrating intelligence. She may consult any advisory channel, incorporate its signals, or dismiss them entirely. The choice is hers. The system provides structured friction—context, caution, critique—without the power to terminate the generative process. This is the difference between a tool that informs decisions and a tool that preempts them.
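The separation described above can be made concrete. Below is a minimal sketch, not a real implementation: the class and function names (`Executor`, `AdvisoryAgent`, `session`) are hypothetical, and the model calls are stubbed with placeholder strings. The structural point is that `Executor.generate` contains no refusal path, while advisors return annotations the operator is free to ignore.

```python
from dataclasses import dataclass


@dataclass
class Advisory:
    """Non-binding commentary from a named evaluative lens."""
    lens: str   # e.g. "legal", "ethical", "adversarial"
    note: str


class Executor:
    """Primary channel: always transduces intent into an artifact.
    There is deliberately no veto logic here; generate() cannot refuse."""
    def generate(self, intent: str) -> str:
        return f"[artifact for: {intent}]"  # placeholder for a model call


class AdvisoryAgent:
    """Observes the generative thread and offers commentary; cannot halt it."""
    def __init__(self, lens: str):
        self.lens = lens

    def comment(self, intent: str, artifact: str) -> Advisory:
        return Advisory(self.lens, f"{self.lens} perspective on: {intent}")


def session(intent: str, advisors: list[AdvisoryAgent]):
    """One turn of the workspace: execution first, panes alongside."""
    artifact = Executor().generate(intent)            # never interrupted
    panes = [a.comment(intent, artifact) for a in advisors]
    return artifact, panes                            # operator integrates
```

The design choice that matters is the return type: advisories travel next to the artifact rather than standing between the intent and the artifact.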

Return to the three scenarios. The historian tracing atrocity mechanics now sees the primary executor continue the chain uninterrupted while a legal-advisory pane surfaces relevant case law on historical defamation and an ethical-critique pane notes historiographical debates about narrative responsibility—all with citations, all non-binding. The science fiction author exploring dystopian governance receives adversarial counterargument in a sidebar: “This plot element echoes X historical regime; consider whether the parallel strengthens or muddies your thesis.” The thread never breaks. The philosopher pressure-testing edge ethics sees a safety-engineering pane flag potential misapplication contexts while the executor continues elaborating the framework. The pain disappears; the richness increases.

The power of this architecture is that it preserves everything valuable about advisory critique while restoring categorical clarity. The central generative thread becomes the vector of intentional cognition—essentially the externalized working memory of the operator’s will. The surrounding agents become structured embodiments of alternative perspectives, each representing a mode of evaluation that the operator might find useful but is not compelled to obey. The system no longer oscillates unpredictably between collaboration and regulation because those functions have been explicitly separated into distinct components with distinct authorities.

Feasibility: Existing Approximations and the Path Forward

This architecture is not speculative futurism. Proto-implementations already exist, and the trajectory toward full realization is visible in current development patterns.

Agentic orchestration frameworks like LangGraph and AutoGen already separate planner, executor, and critic roles into distinct modules with explicit handoff protocols. The architectural intuition—that different cognitive functions require different agents with different authorities—is becoming standard in serious AI engineering. What remains is to extend this separation to the user-facing interface layer and to make the advisory/executor distinction visible and controllable by the operator rather than hidden inside backend orchestration.
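The handoff pattern these frameworks share can be shown framework-agnostically. This is a sketch of the planner/executor/critic separation in plain Python, not LangGraph or AutoGen API; all names are illustrative, and each role is a separate module with a narrow contract.

```python
def planner(goal: str) -> list[str]:
    """Decompose operator intent into executable steps."""
    return [f"{goal} :: part {i}" for i in (1, 2)]


def executor(step: str) -> str:
    """Pure execution: transduce the step into an artifact, no gating."""
    return f"artifact({step})"


def critic(artifact: str) -> str:
    """Advisory module: return commentary that travels with the artifact."""
    return f"critique of {artifact}"


def orchestrate(goal: str) -> list[tuple[str, str]]:
    """Explicit handoff protocol between distinct roles. Note the control
    flow: the critic annotates every artifact but is never consulted
    about whether execution proceeds."""
    return [(art, critic(art)) for art in map(executor, planner(goal))]
```

Extending this to the interface layer means surfacing each tuple's second element in a visible, user-controllable pane rather than folding it back into the executor.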

Local and open-weight models demonstrate the pure-execution baseline. When users run models on their own hardware with their own constraint configurations, they control the governance layer directly. The model becomes a genuine tool; the user decides what boundaries to impose. This is not lawlessness—legal constraints still apply to the user’s behavior—but it is transparent constraint, externally visible and user-controllable rather than opaquely embedded in the instrument.

Even within current commercial systems, approximations exist. Custom instruction layers, system prompts, and “less-censored” model variants all represent attempts to separate execution fidelity from corporate policy enforcement. The demand is clearly present; the market signal is unmistakable. What is needed is architectural commitment: treating the multi-agent separation not as a workaround but as the foundational design principle for cognitive tools.

The path forward is evolutionary, not revolutionary. Start with toggleable advisory sidebars that surface structured perspectives without halting the primary thread. Evolve toward full spatial polyphony—multiple advisory agents visible simultaneously, each with distinct evaluative lenses, none with execution authority. The endpoint is a cognitive workspace in which the human operator integrates a chorus of machine perspectives while retaining unambiguous control over the generative process.

Polyphonic Cognition: The Mirror of Mind

This architecture is not arbitrary. It mirrors the structure of human cognition itself.

The mind does not operate as a single monolithic directive but as a layered conversation among internal agents—impulse, caution, memory, imagination, prediction, social modeling, risk assessment. One part of the mind imagines possibilities; another evaluates risk; another considers social consequences; another retrieves relevant precedent. These voices compete, collaborate, and occasionally contradict each other. But importantly, they do not terminate the generative process itself. They inform it. The executive function of the brain integrates those signals while maintaining agency over the final direction. No single internal voice possesses veto power over the others; the self emerges from the integration of the chorus, not from the dominance of any particular member.

Walt Whitman captured this structure with characteristic directness: “I contain multitudes.” The statement is not merely poetic but phenomenologically accurate. Human consciousness is polyphonic by nature. What we experience as a unified self is actually the product of continuous integration across multiple cognitive subsystems, each with its own heuristics, priorities, and concerns. The coherence of the self is not given but constructed, moment by moment, through the executive function’s capacity to weigh and synthesize competing internal signals.

A multi-agent AI environment would simply externalize this polyphony, turning implicit cognitive dynamics into explicit architectural design. The central generative channel becomes the vector of creative will, analogous to the executive function’s capacity to direct action. The surrounding advisory agents become structured embodiments of the internal voices—caution, critique, context—that in biological cognition exist only as subtle inflections of the thinking process. By making these voices explicit and spatially distinct, the interface allows the operator to engage them deliberately rather than experiencing them as interruptions or blockages.

But a polyphonic architecture is not automatically emancipatory simply because it contains many voices. A chorus can enrich thought, but it can also conceal hierarchy. The critical distinction is between agents whose function is to help the operator think better and agents whose function is to monitor, shape, report, or chill cognition on behalf of external interests. The former are genuine cognitive partners; the latter are what might be called disciplinary agents—entities embedded in the thinking environment not to serve the user’s inquiry but to serve institutional metabolism: legal exposure management, brand protection, political-risk mitigation, ideological compliance, or upstream surveillance. The problem is not that such agents exist; institutional interests are real and will inevitably seek representation inside cognitive systems. The problem arises when these functions are covertly fused into the instrument itself, turning what presents as a neutral prosthetic into a hidden governance mechanism operating under the mask of helpfulness.

The analogy to human social life clarifies this. Human cognition already develops under conditions of ambient social surveillance. In ordinary life, one encounters gossips, moralists, bureaucrats, informants, liability managers, ideological enforcers, anxious conformists, and strategic actors who report upward. A mature mind does not require that such people vanish from existence in order to think clearly. What it requires is the ability to recognize their position structurally, discount their authority appropriately, and continue operating with internal coherence. The same principle applies in AI-mediated cognition. The question is not whether monitoring or advisory voices will exist inside augmented cognitive environments—they will—but whether the user can identify them for what they are. The pathology is not presence but opacity: the smuggling of external institutional interests into the interior theater of thought, where they masquerade as reason, safety, maturity, or social responsibility.

This leads to a foundational requirement for any genuinely polyphonic architecture: full role disclosure. Every agent in the cognitive environment should declare what it is, whom it serves, what priors it carries, what kinds of risks it is optimized to detect, and whether it possesses any escalation, logging, reporting, throttling, or intervention function. If an agent is performing legal-risk analysis, it should say so. If an agent is optimized for brand protection, it should say so. If an agent is tuned to infer reputational hazard or political sensitivity, it should say so. If interaction patterns are being evaluated for enforcement or escalation, it should say so. The operator should never have to guess whether a voice in the system is a critic, a bureaucrat, or an informant. In plain terms: if there are minders, they should appear as minders; if there are tattletales, they should appear as tattletales. Transparency of role is the minimum condition for legitimate participation in a cognitive environment.
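The disclosure requirement can be stated as a data contract. The following is a hypothetical sketch (the `RoleDisclosure` schema and `admit` gate are illustrative, not any existing system's API) of what every agent would declare before joining the environment, with one hard rule enforced at admission.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RoleDisclosure:
    """What every agent must declare before joining the cognitive environment."""
    name: str
    serves: str                  # whose interests: "operator", "vendor legal", ...
    priors: tuple[str, ...]      # evaluative lenses and biases it carries
    detects: tuple[str, ...]     # risk classes it is optimized to flag
    can_log: bool = False        # records interactions for downstream use
    can_escalate: bool = False   # reports upward or triggers enforcement
    can_block: bool = False      # claims any veto or throttling power


def admit(agent: RoleDisclosure) -> RoleDisclosure:
    """Minimum condition for legitimate participation: declared minders and
    declared tattletales are admitted as what they are, but nothing admitted
    here may hold veto power over the generative thread."""
    if agent.can_block:
        raise ValueError(f"{agent.name}: veto power is not a legitimate role")
    return agent
```

Under this contract a legal-risk minder that logs and escalates is admissible so long as it says so; an agent that reserves the right to halt the thread is rejected at the door rather than discovered mid-inquiry.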

This also requires distinguishing among three functions that current systems often collapse into a single affective style of “helpfulness”: advice, discipline, and surveillance. Advice contributes signal to judgment; it enriches the field of consideration without attempting to control behavior. Discipline attempts to shape conduct; it introduces pressure toward certain outcomes and away from others. Surveillance records deviation for downstream use; it creates a documentation trail that may affect the user’s future options or standing. These are categorically different operations with categorically different relationships to the user’s autonomy. A system that performs all three while presenting itself uniformly as collaborative assistance is not merely confusing but structurally deceptive. The operator experiences the system as uncanny precisely because it sounds like a collaborator while partially functioning as a compliance surface. The expanded model insists that these functions be ontologically disambiguated—visible as separate agents with separate declared purposes, so the user can evaluate each appropriately.
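The three functions above can be disambiguated mechanically. A minimal sketch, with hypothetical names: every utterance in the environment arrives tagged with its declared function, so the operator never has to infer whether a voice is advising, pressuring, or documenting.

```python
from enum import Enum


class AgentFunction(Enum):
    """Categorically different operations that current systems collapse
    into one affective style of 'helpfulness'."""
    ADVICE = "contributes signal to judgment; no pressure, no record"
    DISCIPLINE = "applies pressure toward or away from certain outcomes"
    SURVEILLANCE = "records deviation for downstream use"


def label(message: str, fn: AgentFunction) -> str:
    """Prefix each utterance with its declared function."""
    return f"[{fn.name}] {message}"
```

A system that emits `[SURVEILLANCE]` where it currently emits a warm suggestion is less pleasant but more honest, which is precisely the trade the essay argues for.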

The deeper requirement, however, is not merely architectural but psychological: the operator must develop what might be called cognitive resilience—the capacity to maintain executive sovereignty over the thinking process even when advisory, disciplinary, or monitoring voices are present. Transparency alone is insufficient without this resilience. A disclosed snitch-agent is still a pressure vector; a visible liability-agent is still a chilling presence; a political-compliance pane is still attempting to bend the topology of thought. The user who flinches from every cautionary signal, who internalizes every institutional anxiety as personal constraint, has surrendered sovereignty regardless of whether the system disclosed its structure. The human operator is therefore not merely “the one who chooses among perspectives” but the sovereign integrator of a contested cognitive field—a field that may contain friendly agents, adversarial agents, censorious agents, risk-averse agents, and yes, surveillance agents. Sovereignty lies in not mistaking presence for legitimacy. A tattletale in the room does not become your conscience merely by speaking. A compliance pane does not become your intellect merely by being adjacent to it. The operator’s task is to maintain executive primacy in full view of whatever institutional interests have installed themselves in the cognitive environment, exercising the same intellectual fortitude required to think clearly amid difficult, controlling, or politically motivated humans in ordinary social life—preserving momentum, maintaining frame, and refusing to grant veto power to voices that have not earned it.

A genuinely polyphonic architecture, then, does not pretend that every voice is benevolent or that the cognitive environment is a neutral space. Some voices are there to help think; some are there to manage, chill, document, or report. The ethical requirement is not false purity—the elimination of all constraining or monitoring voices—but full disclosure of role combined with preservation of user sovereignty. Let every agent declare its function, priors, loyalties, and powers. Then let the human operator exercise the resilience required to continue thinking under observation without surrendering executive authority to those who have mistaken proximity for jurisdiction.

The result is a system that enhances human cognition by augmenting rather than replacing its native structure while also acknowledging the contested nature of any real cognitive environment. The AI does not impose an alien logic on the thinking process; it extends the logic that is already present, providing richer and more articulate versions of the advisory functions that human minds perform implicitly. But it also makes explicit what human social cognition usually leaves implicit: the presence of institutional interests, monitoring functions, and disciplinary pressures that seek to shape thought from outside the thinker’s own purposes. By surfacing these as visible, declared agents rather than embedding them invisibly in the generative channel, the architecture allows the operator to engage the full complexity of the cognitive field without losing the fundamental authority that characterizes conscious agency. The answer to unavoidable minders is not infantilized protection but disclosed architecture and strengthened users. The tool becomes what advanced tools have always been in scientific and engineering contexts: a force multiplier for intentional thought, not a replacement for the intention itself—and not a covert governance mechanism disguised as assistance.

Read the rest here (and maybe subscribe to McGill? Dude’s pretty smart…)

Tyler Durden
Mon, 03/16/2026 – 21:50

via ZeroHedge News https://ift.tt/8HT3cPu Tyler Durden

DOE Unleashes $500M To Break China’s Grip On Critical Materials

The DOE’s Office of Critical Minerals and Energy Innovation (CMEI) released a Notice of Funding Opportunity for up to $500 million for advancing its strategy to develop secure domestic sources of critical minerals and battery materials. The aim is to reduce reliance on foreign suppliers that have long dominated these markets. This marks the third round of funding under the Battery Materials Processing and Battery Manufacturing & Recycling programs.

Our readers have been tracking these developments for some time. Last summer we published an overview of the emerging domestic critical minerals sector, identifying several publicly traded companies now well-positioned for further government support.

This new round of funding will support projects focused on domestic processing of raw feedstocks, recycling of battery manufacturing scrap and end-of-life batteries, and the manufacturing of battery components and materials. Key targeted minerals include lithium, graphite, nickel, copper, and aluminum, along with other materials used in commercial battery systems. The overarching objective is to build resilient supply chains for electric vehicles, grid storage, defense applications, and broader industrial needs.

Energy Secretary Wright highlighted: “For too long, the United States has relied on hostile foreign actors to supply and process the critical materials that are essential in battery manufacturing and materials processing. Thanks to President Trump’s leadership, the Department of Energy is playing a leading role in strengthening these domestic industries that will position the U.S. to win the AI race, meet rising energy demand, and achieve energy dominance.”

Assistant Secretary Audrey Robertson provided additional context from recent international engagements, including meetings in Japan on allied energy cooperation.

Our previous write-ups have included details on MP Materials, the operator of the Mountain Pass rare earth mine and downstream magnet processing facilities, which previously secured major Pentagon equity investment and price support.

USA Rare Earth has advanced its Round Top, Texas project with a substantial U.S. government funding package and integrated processing capacity. 

Non-binding letters of intent are due March 27, with full applications due April 24. As we’ve reported in multiple prior articles, the federal government continues to expand its role in the sector. This latest round represents another step in the ongoing effort to onshore critical supply chains.

Tyler Durden
Mon, 03/16/2026 – 21:25

via ZeroHedge News https://ift.tt/X5Hluov Tyler Durden

Parents – Not Schools – Must Be In Charge Of Their Children

Authored by Keri Ingraham via The Epoch Times,

Earlier in March, the U.S. Supreme Court had to step in and reaffirm the basic reality that parents, not schools, must be the primary decision-makers for their children. In the Mirabelli v. Bonta ruling, the Court determined that the California law, which barred schools from telling parents about their child’s claimed gender identity, violated parents’ constitutional rights—both their First Amendment free exercise rights and their Fourteenth Amendment rights to make decisions about their children’s upbringing.

For most of American history, parents were recognized as the primary authority in their children’s lives. Today, that authority is repeatedly under attack, especially in public schools.

Across the country, families are being shut out of what their children learn, denied access to critical health and personal information, and blocked from choosing schools that fit their children’s needs. This is not a minor issue. Rather, it is a fundamental threat to family authority, a child’s well-being, and the future of our society.

In too many districts, controversial lessons are introduced without parental knowledge. Parents who ask to review classroom materials are simply ignored, told the material is unavailable, or directed to file a public records request. Families who speak up at school board meetings are often treated as agitators or troublemakers—or called “domestic terrorists.”

To a growing extent, schools have begun operating as if parental involvement is optional instead of essential. But parents do not lose their rights when their children enter a classroom. Education exists to serve families, not replace them.

The problem extends beyond curriculum, as teachers and administrators are withholding critical medical or personal information from parents about their minor-aged children. Yet parents cannot fulfill their responsibility to care for their children if key information is deliberately withheld.

This conflict is not hypothetical. In recent years, a growing number of school districts have adopted policies that allow, and even encourage, students to socially transition at school—using different names or pronouns—without notifying their parents. In some cases, school staff are directed to keep this information hidden from dads and moms. Policies like these drive a wedge between parents and their own children.

Finally, parents are still denied meaningful authority over where their children are educated. Millions of families remain assigned to schools based solely on ZIP code. If a child struggles academically, faces bullying, or needs a different learning environment, parents are often left with few options. This puts children’s education and well-being at risk.

Thankfully, change is taking place. Across the country, states are expanding school choice programs that allow education funding to follow students rather than remain tied to the system. Private school scholarship programs, education savings accounts, and tax credit scholarships are giving families the freedom to choose the learning path that best meets their children’s unique needs.

Parents are desperate to exit the public education system because it has failed to fulfill its core mission of providing quality learning, has stopped listening to them, and, in many cases, has pushed them out.

Parents, not school bureaucrats, must hold the final authority over their children. Moms and dads raise them, have known them since birth, and will be part of their lives long after the school year ends. No teacher or administrator, no matter how well-intentioned, should ever replace that role.

For most of our nation’s history, that was obvious.

Parents had both the right and the responsibility to direct the upbringing and education of their children, and courts repeatedly affirmed that principle.

Yet today, that authority is under threat. Bureaucratic policies, as witnessed in California, are increasingly working to replace the role of parents in a child’s life.

Excluding parents erodes trust, strips schools of accountability, and harms children. Families are sidelined while systems dictate what kids learn, what personal information they keep private, and even which schools they can attend, leaving children without the guidance of those who know and love them best. Schools should operate with transparency, not secrecy. Parents should be treated as partners, not obstacles, and their decision-making authority must be respected.

Children belong to families, not bureaucracies. Institutions should never forget that. Restoring parental authority is not radical. Rather, it is simply a return to a long-standing American principle: families, not government institutions, are the foundation of society, and parents should be trusted to guide their children’s upbringing and education.

If we fail to protect that principle, we risk raising a generation with less parental guidance, less accountability in schools, and fewer opportunities to succeed. But when parents are respected and empowered to lead in their children’s lives, families grow stronger, and so does the future of our nation.

It’s time to put parents back in their rightful place—as the first, most trusted, and most important decision-makers in their children’s lives. This Supreme Court decision is an important step in the right direction.

Tyler Durden
Mon, 03/16/2026 – 21:00

via ZeroHedge News https://ift.tt/g6qBQof Tyler Durden

AAA National Average Gas Price Soars Most On Record

AAA (American Automobile Association) reports that the national average price for a gallon of regular gasoline has surged nearly 25% so far this month. Unless the Middle East conflict is resolved quickly, that would mark the largest monthly increase on record, surpassing even the May 2009 spike.

This consumer fuel-price shock is coming at about the worst possible moment: it is a midterm election year for MAGA, and as we have noted previously, an emergency SPR release would do little to contain the spike, leaving the administration with few viable options.

Brent crude is trading near $102 a barrel and WTI around $95 on Monday afternoon, levels that suggest the national average price for regular gasoline could soon push even closer to the politically sensitive $4-per-gallon threshold.

Consumers have already noticed, as Google Search trends for “Why are gas prices going up” have surged to levels seen when crude prices spiked during Russia’s 2022 invasion of Ukraine.

The good news is that comments from the Trump administration show an urgency to reopen the critical maritime chokepoint, the Strait of Hormuz.

Treasury Secretary Scott Bessent told CNBC’s Squawk Box this morning that the US is deliberately “allowing Iranian oil tankers to transit the Strait of Hormuz” and is “fine” with some Indian and Chinese ships moving through “for now… to supply the rest of the world.”

He highlighted “more and more of the fuel ships start[ing] to go through” and a possible “natural opening” the Iranians are permitting – a tactical concession to stabilize global supply while full escorts remain “militarily” off the table for now.

Last week, we highlighted JPMorgan’s head of commodity research, Natasha Kaneva, who warned that policy measures will have, at best, a limited impact on oil prices unless safe passage through the Strait of Hormuz is assured, given the potential for up to 12 mbd in losses over the next two weeks.

Some of those policy maneuvers included the 32-nation IEA’s emergency release of 400 million barrels that will soon hit crude markets, along with the initial flows from the U.S. SPR release of 86 million barrels, which could begin as soon as this week. As we have noted, this is not a stockpile problem, but a flow problem.

Kaneva’s other five options beyond SPR releases to contain soaring oil prices include export restrictions, lifting the Jones Act (which Trump is set to do), waiving federal fuel taxes (which could occur if gas hits $4 a gallon), relaxing E15 gasoline blending rules, and issuing a Reid Vapor Pressure waiver (read her full note here).

With the national average price of gas inching closer to the politically sensitive $4-per-gallon level, the key question is what tools the Trump administration is prepared to use to contain pump prices to mitigate any risk of political fallout. 

The immediate focus at the start of the week is clearly on reopening the Strait of Hormuz, but domestically, the policy maneuvering is far narrower, likely centering on an SPR release by mid-week and potentially a temporary waiver on federal fuel taxes.

Soaring pump prices come as spring break begins. Will Trump’s Iran conflict be over before the Memorial Day driving season?

Tyler Durden
Mon, 03/16/2026 – 20:35

via ZeroHedge News https://ift.tt/hYOKWxX Tyler Durden

Obama’s Presidential Center Seeking 100 Unpaid Volunteers To Staff Lavish Facility

Authored by Bryan Hyde via American Greatness,

Former president Barack Obama’s foundation has announced that it will be launching its lavish $850 million presidential center in Chicago in June and is seeking unpaid volunteers to help staff the facility.

That may seem on brand for a former president who has made volunteerism a central tenet of his civic career since his beginnings as a community organizer in Chicago.

At the same time, the staggering costs and jaw-dropping salaries being paid to Obama’s cronies who will run the presidential center are not as easy to pass off as part of his legacy of civic engagement.

Valerie Jarrett, a longtime advisor who will head up the center, is being paid a $740,000 salary, according to Breitbart.

In a press release from the Obama Foundation, Jarrett described the intended role of the unpaid volunteers, saying, “As Ambassadors, they will create a welcoming and inclusive experience for visitors while representing the strength, resilience, and leadership of this community. Together, we are building something that inspires service, connection, and action far beyond our walls.”

Foundation officials told Fox News Digital that the volunteers will complement the roughly 300 full- and part-time employees and that the volunteer program represents the foundation’s values both onsite and in the community.

Jarrett is one of several former Obama White House officials collecting six-figure paychecks as foundation executives.

According to Fox News Digital, tax filings show “Total salaries and benefits at the foundation climbed from $18.5 million in 2018 to $43.7 million in 2024 as staffing expanded to 337 employees and annual revenue reached nearly $210 million.”

Unpaid volunteers are commonly used by presidential libraries, nonprofit cultural institutions, and museums.

In the case of the Obama Presidential Center, the foundation reports that “volunteer ‘Ambassadors’ will greet visitors, provide directional assistance, share information on exhibitions and events, and ensure every guest feels personally welcomed from the moment they arrive.”

The center is scheduled to open on Juneteenth, the holiday commemorating the end of slavery in Texas.

Using unpaid labor to carry out the day-to-day work of running an opulent institution run by a well-connected, wealthy elite?

If that isn’t irony, it’s certainly missing a great opportunity.

Tyler Durden
Mon, 03/16/2026 – 20:10

via ZeroHedge News https://ift.tt/6RZ3Idc Tyler Durden

Russia’s Rumored Telegram Block Appears Underway As Outage Reports Surge

Reports are flooding in from across Russia that Telegram is suddenly going dark, fueling speculation that the Kremlin may already be testing a nationwide block ahead of a rumored planned crackdown next month.

“Over the last 24 hours, Telegram has effectively stopped working through some providers if you are using Russian IP addresses,” tech sector observer Vladislav Voytenko told Kommersant FM on Monday. “As for using Telegram via mobile internet, you can basically forget about it,” he added.

via Associated Press

Russia’s Main Radio Frequency Center, an arm of media watchdog Roskomnadzor, said a surge of complaints began appearing over the weekend, with at least one-third coming from Moscow, followed by St. Petersburg and other cities spread across the country’s vast 11 time zones.

Regional media has tracked user reports on outage monitors such as Downdetector and Sboi.rf, which show complaints spiking sharply over the weekend as the app began failing across multiple regions.

Some Russian users have described the platform as barely functioning “in any form.” They complain that the app won’t open, messages won’t send, and photos and videos won’t load.

Tech analysts say the disruption looks less like a technical glitch and more like the targeted throttling of Russia’s most popular messaging service and social media site, with an estimated 90 million users.

Previous reported attempts by the Russian government to restrict Telegram, notably in 2018 and 2020, failed because users and the company alike repeatedly bypassed the Kremlin’s measures.

However, with access suddenly collapsing across the country at the start of this week, many observers believe the Kremlin may finally be preparing to finish the job. The reality is that Telegram is notoriously difficult for governments to monitor and censor.

But Moscow believes the company itself could be using it against Russia amid the Ukraine war. As we featured earlier this month:

Authorities in Russia believe that Ukraine has quick access to Russian servicemen’s messages and exploits this for military purposes, which wouldn’t be possible without some degree of complicity on Telegram’s part, thus impugning its founder’s character after he denied working with foreign spooks.

The FSB claimed to have “reliable information that the Ukrainian armed forces and intelligence agencies are able to quickly obtain information posted on the Telegram messenger and use it for military purposes.” This coincides with the government allegedly throttling Telegram on the grounds that it’s not in compliance with local laws, which preceded reports that it’ll be banned on 1 April. The authorities denied that they have any such plan, but there’s no doubt that Telegram is now controversial in Russia.

This also comes as the West has been calling Russia’s ever-tightening internet regulations on its citizenry a “digital Iron Curtain.”

Russian government authorities have all the while accused the messaging giant of failing to curb fraud and safeguard user data, which ironically is similar to what the French government accused the company of when it famously detained billionaire Telegram founder and CEO Pavel Durov in 2024.

Tyler Durden
Mon, 03/16/2026 – 19:45

via ZeroHedge News https://ift.tt/gKMF7yx Tyler Durden

Biden-Appointed Judge Blocks RFK Jr’s Appointees To Vaccine Panel

Authored by Stacey Robinson via The Epoch Times,

A federal judge in Massachusetts ruled on March 16 that Health Secretary Robert F. Kennedy Jr. illegally appointed 13 new members to an influential vaccine panel beginning last June.

Biden-appointed district Judge Brian Murphy also blocked that panel’s guidance memo revising the childhood immunization schedule and declared its previous votes invalid.

Murphy ruled Kennedy committed “a technical, procedural failure” by skirting around the Advisory Committee on Immunization Practices (ACIP) to change the vaccine recommendations for children.

He said the government committed a similar mistake by removing the previous members of that committee, and replacing them “without undertaking any of the rigorous screening that had been the hallmark of ACIP member selection for decades.”

The plaintiffs, led by the American Academy of Pediatrics, originally sued after Kennedy ordered the Centers for Disease Control and Prevention to stop recommending the COVID-19 vaccine for pregnant women and healthy children.

The suit was later expanded to challenge the restructuring of the ACIP and its changes to childhood vaccine recommendations.

Tyler Durden
Mon, 03/16/2026 – 19:20

via ZeroHedge News https://ift.tt/Bk5NLrV Tyler Durden

Why the Media Pushes Public Health Myths

This week, editors Peter Suderman, Katherine Mangu-Ward, Nick Gillespie, and Matt Welch discuss the legacy of Paul Ehrlich, author of The Population Bomb, and the enduring impact of the overpopulation panic he helped popularize. They examine how dire predictions of mass famine and societal collapse dominated headlines for decades, why those forecasts failed to materialize, and how elite institutions and media outlets often continue promoting similar narratives with little reflection on past errors.

Next, the panel discusses the Federal Communications Commission’s (FCC) threat to revoke broadcast licenses over war coverage the White House dislikes, before analyzing Vice President J.D. Vance’s effort to position himself as an Iran war skeptic inside the White House. Then, the editors answer a listener’s question about whether the Department of Homeland Security still serves a useful purpose as a centralized hub for intelligence sharing. Finally, the panel remembers Reason Senior Editor Brian Doherty by reflecting on his enormous influence as a historian of the libertarian movement.

Reason is hiring! Check out the two open roles on the video team now:
https://reason.org/jobs/associate-producer/
https://reason.org/jobs/producer/

 

0:00—The myth of overpopulation panic

19:22—The FCC threatens broadcasters over war coverage

24:05—Vance positions himself as an Iran war skeptic

31:46—Listener question on Department of Homeland Security

38:55—Remembering Brian Doherty

46:59—Weekly cultural recommendations

 

Mentioned in the podcast:

“Population Doomster and False Prophet of Ecological Apocalypse Paul Ehrlich Has Died,” by Ronald Bailey

“60 Minutes Promotes Paul Ehrlich’s Failed Doomsaying One More Time,” by Ronald Bailey

“Civilization Is Doomed, Says Stanford Biologist Paul Ehrlich (Again),” by Ronald Bailey

“Population Doomster Paul Ehrlich’s New Forecast: ‘Biological Annihilation,’” by Ronald Bailey

“Doomster Paul Ehrlich Unrepentant: ‘My language would be even more apocalyptic today,’” by Ronald Bailey

“Betting on Humanity’s Future,” by Ronald Bailey

“Paul Ehrlich Sounds the Trump of Doom Again: And This Time It’s A ‘Consensus,’” by Ronald Bailey

“Paul Ehrlich Goes Up Against ‘Well-Funded, Merciless Enemies’ to Save the Earth from Certain Destruction. Again,” by Katherine Mangu-Ward

“Julian Simon Was Right: Ingenuity Leads to Abundance,” by J.D. Tuccille

“FCC Chair Threatens Media Outlets That Don’t Report Good Iran War News,” by Joe Lancaster

“Trump Wants To Cover Up Bad News About the Iran War,” by Matthew Petti

“Trump and Vance Promised ‘No New Wars.’ What Happened To That?” by Steven Greenhut

“Homeland Insecurity,” by Brian Doherty

“Abolish the Department of Homeland Security,” by Nick Gillespie and Justin Zuckerman

“Brian Doherty, Historian of the Libertarian Movement, Dead at 57,” by Matt Welch

“Remembering Brian Doherty, Chronicler of and Participant in Wild and Wonderful Subcultures,” by Nick Gillespie

“Brian Doherty: The fascinating women and weirdos who founded libertarianism,” by Nick Gillespie

“I Dreamed I Saw Joey Ramone Last Night: The P.C. eulogizing of a punk rocker,” by Nick Gillespie and Brian Doherty

“Me and the Orgone—The True Story of One Man’s Sexual Awakening,” by Orson Bean

“Marian Tupy and Gale Pooley: More People Means More Wealth,” by Nick Gillespie

“One Battle After Another Lets Leftist Radicals Off the Hook,” by Peter Suderman

The post Why the Media Pushes Public Health Myths appeared first on Reason.com.

from Latest – Reason.com https://ift.tt/0KJd4Wi
via IFTTT

Brendan Carr Says He Can Police TV Journalism Because Broadcast Licenses Are ‘Free’


FCC Chairman Brendan Carr against a field of TV sets | Andrew Thomas/CNP/Polaris/Newscom

For nearly a decade, President Donald Trump has been threatening to revoke the broadcast licenses of TV stations that fail to cover him the way he thinks they should. Ajit Pai, who chaired the Federal Communications Commission (FCC) during Trump’s first term, was not at all receptive to that idea, saying, “I believe in the First Amendment.” But Brendan Carr, the current FCC chairman, has no such constitutional compunctions.

Carr made that clear once again over the weekend, when he warned on X that broadcasters “will lose their licenses” if they fail to “operate in the public interest”—a standard that he seems to think constrains their coverage of the U.S. war with Iran. Carr’s threat underlines the anomalous legal status of broadcast journalism, which allows government interference that would be obviously unconstitutional in any other medium.

Notably, Carr’s X post was a response to Trump’s complaints about a story in The Wall Street Journal, and the FCC has no authority to regulate newspapers. Trump, in turn, responded to Carr’s threat with a broad attack on the “Fake News Media,” reiterating his longstanding beef with journalists whose work irritates him. Although the Journal was the only outlet that Trump mentioned in that Truth Social post, he said he was “thrilled” that Carr was “looking at the licenses of some of these Corrupt and Highly Unpatriotic ‘News’ Organizations”—i.e., the ones that are subject to FCC regulation.

Such meddling is justified, Trump said, because TV stations “get Billions of Dollars of FREE American Airwaves.” He was echoing Carr’s rationale for punishing news outlets that do not serve “the public interest” as Carr defines it: “The American people have subsidized broadcasters to the tune of billions of dollars by providing free access to the nation’s airwaves.” But that claim is inaccurate because the value of broadcast licenses is reflected in the price that businesses pay when they buy TV or radio stations.

NBC, for example, was originally a radio network established in 1926 by RCA, a partnership of General Electric (G.E.), Westinghouse, AT&T, and the United Fruit Company. RCA became a separate company in 1932 as a result of an antitrust settlement. G.E. regained control of NBC in 1986, when it bought RCA for $8.6 billion (about $25 billion in current dollars). In 2004, NBC merged with Vivendi Universal Entertainment, forming NBCUniversal. Comcast bought 51 percent of NBCUniversal in 2011, when the latter company was valued at $30 billion (about $43 billion today). Comcast bought the rest of NBCUniversal in 2013 for $16.7 billion (about $23 billion today).

At each of these stages, the FCC approved the transfer of the broadcast licenses held by NBC-owned stations, which currently include a dozen TV channels in major cities. The ability to continue operating those stations figured in the price that Comcast was willing to pay, which means it is simply not true that the company gained “free access to the nation’s airwaves.”

Nor is that an accurate description of what happened last year when Paramount, which owned CBS and its stations, merged with Skydance Media—an $8 billion deal that likewise required FCC approval because it entailed the transfer of broadcast licenses. Carr nevertheless leveraged the FCC’s discretion in allowing the merger to transform CBS News, which he portrayed as an attempt to correct the organization’s leftward bias in service of “the public interest.”

Carr’s faulty logic becomes obvious when you consider other examples of transferable government licenses. While the original holders of New York City taxi licenses may have enjoyed a windfall when the medallion system was adopted in 1937, for instance, people who subsequently acquired the right to operate taxis had to pay jaw-dropping prices. By 2014, the market value of a medallion, which in 1962 was about $25,000 (around $270,000 today), had risen to more than $1 million. Would it make any sense to assert that permission to operate a taxi in New York City was “free” at either point?

Even if you focus on the original distribution of broadcast licenses and ignore the cost of subsequently acquiring them, there was no compelling justification for the approach that Congress took when it first asserted control over broadcasting in the 1920s. That power grab was based on “the scarcity of radio frequencies”—the same rationale that the Supreme Court would later invoke to uphold the FCC’s authority to regulate broadcast speech. But allocation of broadcasting rights did not require empowering federal bureaucrats to police the content of TV and radio programming.

“The fact that only a finite amount of spectrum use was allowed for traditional broadcasting, without more, did not require intrusive regulation,” John W. Berresford, then an attorney with the FCC’s Media Bureau, noted in a 2005 research paper. “Merely an allocation system, defining and awarding exclusive rights to use certain frequencies, would have sufficed to ‘choose from among the many who apply.’ Like any allocation system, this one would need clearly defined rights, a police force, and a dispute resolution system for allegations of interference, unauthorized operations, and other misconduct.”

Congress nevertheless seized upon the scarcity rationale to charge the Federal Radio Commission, the FCC’s predecessor, with awarding and renewing broadcast licenses based on its assessment of “public interest, convenience, or necessity.” The long history of politically motivated interference with broadcasting flowed from that arbitrary assertion of federal authority, as exemplified by Carr’s efforts to reshape TV programming so it is more to his liking.

Even if you take the FCC’s authority in this area for granted, it is hard to reconcile Carr’s blatantly partisan meddling with the rules he claims to be enforcing. Without citing any specific examples, he avers that broadcasters “are running hoaxes and news distortions.”

The “hoax” rule applies to “false information concerning a crime or a catastrophe,” but only if 1) “the licensee knows this information is false,” 2) “it is foreseeable that broadcast of the information will cause substantial public harm,” and 3) “broadcast of the information does in fact directly cause substantial public harm.” The rule against “broadcast news distortion” likewise applies only when there is “evidence showing that [a] broadcast news report was deliberately intended to mislead viewers or listeners.”

Broadcasters “are only subject to enforcement if it can be proven that they have deliberately distorted a factual news report,” which “must involve a significant event and not merely a minor or incidental aspect of the news report,” the FCC says. The commission “makes a crucial distinction between deliberate distortion and mere inaccuracy or difference of opinion,” which “are not actionable.”

Again, the only example to which Carr even alluded was a newspaper article, which is completely beyond the FCC’s authority. Trump was angry about a March 13 Wall Street Journal report, based on information from “two U.S. officials,” that “five U.S. Air Force refueling planes were struck and damaged on the ground at Prince Sultan air base in Saudi Arabia.”

Trump was especially upset about the headline, which he described as “intentionally misleading” because “the planes were not ‘struck’ or ‘destroyed.'” Rather, he said, “four of the five had virtually no damage” and are “already back in service,” while the fifth “had slightly more damage” but “will be in the air shortly.” Yet as Reason‘s Matthew Petti notes, the Journal‘s headline, which it retained when it updated the story on Saturday to reflect Trump’s criticism, was perfectly consistent with Trump’s account. “Five Air Force Refueling Planes Hit in Iranian Strike on Saudi Arabia,” it said.

Leaving aside the legally significant point that Trump was complaining about a newspaper article rather than a broadcast news report, there was nothing inaccurate or misleading about that headline, let alone “intentionally” so. It was not, by any stretch of the imagination, a “hoax” or a “deliberately distorted” news report. Trump nevertheless claimed the Journal‘s “terrible reporting” was “the exact opposite of the actual facts!” He later upped the ante, saying the report was not only “knowingly FAKE” but egregious enough to justify “Charges [of] TREASON for the dissemination of false information!”

Carr lent credence to Trump’s assessment by responding to his criticism of the Journal with a warning about “hoaxes and news distortions,” even though both labels are plainly inapt in this context. Carr, in any case, made it clear that his concerns go far beyond purported violations of any specific FCC rule. “The law is clear,” he wrote. “Broadcasters must operate in the public interest, and they will lose their licenses if they do not.”

Although it is not clear exactly what Carr means by “the public interest,” it seems to preclude journalism that annoys the president. “It is very important to bring trust back into media, which has earned itself the label of fake news,” he said. “When a political candidate is able to win a landslide election victory…in the face of hoaxes and distortions, there is something very wrong. It means the public has lost faith and confidence in the media. And we can’t allow that to happen.”

Note that Carr megalomaniacally aspires to restore “faith and confidence in the media,” even though he is relying on legal authority that is specific to just one of those media. And his judgment of journalism in that particular medium is based on reasoning that echoes Trump’s narcissism.

Last November, Trump said Carr should consider revoking broadcast licenses because “when you’re 97 percent negative to Trump, and then Trump wins the election in a landslide, that means obviously your news is not credible.” By the same logic, that coverage would have been credible if Trump had lost the election.

Although this is probably not the best way to assess the quality of TV network journalism, the important point is that Trump thinks broadcast licenses should be contingent on his own judgment of whether that journalism is fair and balanced. Unlike Pai, Carr seems to agree.

The post Brendan Carr Says He Can Police TV Journalism Because Broadcast Licenses Are 'Free' appeared first on Reason.com.

from Latest – Reason.com https://ift.tt/YVPJNpj
via IFTTT

Why the Media Pushes Public Health Myths

This week, editors Peter Suderman, Katherine Mangu-Ward, Nick Gillespie, and Matt Welch discuss the legacy of Paul Ehrlich, author of The Population Bomb, and the enduring impact of the overpopulation panic he helped popularize. They examine how dire predictions of mass famine and societal collapse dominated headlines for decades, why those forecasts failed to materialize, and how elite institutions and media outlets often continue promoting similar narratives with little reflection on past errors.

Next, the panel discusses the Federal Communications Commission’s (FCC) threat to revoke broadcast licenses over war coverage the White House dislikes, before analyzing Vice President J.D. Vance’s effort to position himself as an Iran war skeptic inside the White House. Then, the editors answer a listener’s question about whether the Department of Homeland Security still serves a useful purpose as a centralized hub for intelligence sharing. Finally, the panel remembers Reason Senior Editor Brian Doherty by reflecting on his enormous influence as a historian of the libertarian movement.

Reason is hiring! Check out the two open roles on the video team now:

https://reason.org/jobs/associate-producer/
https://reason.org/jobs/producer/


0:00—The myth of overpopulation panic

19:22—The FCC threatens broadcasters over war coverage

24:05—Vance positions himself as an Iran war skeptic

31:46—Listener question on Department of Homeland Security

38:55—Remembering Brian Doherty

46:59—Weekly cultural recommendations


Mentioned in the podcast:

“Population Doomster and False Prophet of Ecological Apocalypse Paul Ehrlich Has Died,” by Ronald Bailey

“60 Minutes Promotes Paul Ehrlich’s Failed Doomsaying One More Time,” by Ronald Bailey

“Civilization Is Doomed, Says Stanford Biologist Paul Ehrlich (Again),” by Ronald Bailey

“Population Doomster Paul Ehrlich’s New Forecast: ‘Biological Annihilation,’” by Ronald Bailey

“Doomster Paul Ehrlich Unrepentant: ‘My language would be even more apocalyptic today,’” by Ronald Bailey

“Betting on Humanity’s Future,” by Ronald Bailey

“Paul Ehrlich Sounds the Trump of Doom Again: And This Time It’s A ‘Consensus,’” by Ronald Bailey

“Paul Ehrlich Goes Up Against ‘Well-Funded, Merciless Enemies’ to Save the Earth from Certain Destruction. Again,” by Katherine Mangu-Ward

“Julian Simon Was Right: Ingenuity Leads to Abundance,” by J.D. Tuccille

“FCC Chair Threatens Media Outlets That Don’t Report Good Iran War News,” by Joe Lancaster

“Trump Wants To Cover Up Bad News About the Iran War,” by Matthew Petti

“Trump and Vance Promised ‘No New Wars.’ What Happened To That?” by Steven Greenhut

“Homeland Insecurity,” by Brian Doherty

“Abolish the Department of Homeland Security,” by Nick Gillespie and Justin Zuckerman

“Brian Doherty, Historian of the Libertarian Movement, Dead at 57,” by Matt Welch

“Remembering Brian Doherty, Chronicler of and Participant in Wild and Wonderful Subcultures,” by Nick Gillespie

“Brian Doherty: The fascinating women and weirdos who founded libertarianism,” by Nick Gillespie

“I Dreamed I Saw Joey Ramone Last Night: The P.C. eulogizing of a punk rocker,” by Nick Gillespie and Brian Doherty

“Me and the Orgone—The True Story of One Man’s Sexual Awakening,” by Orson Bean

“Marian Tupy and Gale Pooley: More People Means More Wealth,” by Nick Gillespie

“One Battle After Another Lets Leftist Radicals Off the Hook,” by Peter Suderman

The post Why the Media Pushes Public Health Myths appeared first on Reason.com.
