4/10/1967: Loving v. Virginia argued.
The post Today in Supreme Court History: April 10, 1967 appeared first on Reason.com.
from Latest – Reason.com https://ift.tt/HxiU6WD
via IFTTT
These Strange New Minds is a comprehensive book for lay readers wondering how large language models (LLMs) work and how they might help or harm human culture.
Its author, the cognitive neuroscientist Christopher Summerfield, faces an inherent challenge: The pace of change in AI makes it difficult for any traditionally published book to feel fully up to date. Books from major publishers can take more than a year to move from manuscript to finished copy. Summerfield addresses this by adding a later-written afterword noting that LLMs are already reasoning and conversing more effectively than they did just two years ago. They are becoming more “agentic,” helping users accomplish tasks rather than merely answering prompts, while also becoming more capable tools for crime and fraud.
Summerfield does not believe LLMs will destroy humanity. But he makes clear that dismissing what they can already do, or what they are likely to do, is shortsighted. Anyone who organizes their work or daily life through computers should not ignore AI’s looming impact. That remains true even if how “deep learning” achieves its results is still, in some respects, “mysterious.”
Summerfield engages seriously with skeptics who claim that, because LLMs merely predict or echo patterns derived from the vast corpus of human writing on which they are trained, they are not truly thinking or meaningfully imitating the human mind. LLMs, he acknowledges, “work by multiplying together large matrices of numbers,” while our brains operate through “electrical signals in an organic medium.” But that does not mean the outcomes—effective understanding and communication—are always meaningfully distinguishable. To “say that LLMs do not think at all,” Summerfield writes, “requires a new and rather convoluted definition of what it means to ‘think.’”
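Summerfield’s point that LLMs “work by multiplying together large matrices of numbers” can be made concrete with a toy sketch. This is purely illustrative—the vocabulary, hidden state, and weights below are made up, not any real model’s—but the mechanics are the same in miniature: a hidden-state vector times a weight matrix yields one score (“logit”) per vocabulary word, and a softmax turns those scores into next-word probabilities.

```python
import math

# Hypothetical 3-word vocabulary and made-up numbers for illustration only.
vocab = ["the", "witch", "thinks"]
hidden = [0.5, -1.0, 2.0]        # a 3-dimensional hidden state
weights = [                      # 3x3 "learned" projection to the vocabulary
    [0.1, 0.4, -0.2],
    [0.3, -0.5, 0.8],
    [1.2, 0.0, 0.6],
]

# logits[j] = sum_i hidden[i] * weights[i][j] -- the matrix multiply
logits = [sum(h * w for h, w in zip(hidden, col)) for col in zip(*weights)]

# Softmax: exponentiate and normalize so the scores become probabilities.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The model "predicts" the highest-probability next word.
prediction = vocab[max(range(len(probs)), key=probs.__getitem__)]
print(prediction)  # prints "the"
```

A real LLM does this with matrices billions of entries wide, stacked in dozens of layers, but the skeptics’ observation and Summerfield’s rejoinder both concern exactly this operation.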
The post Review: A Cognitive Neuroscientist's Take on How AI Models Think appeared first on Reason.com.
The incendiary Olivier Award–winning play Giant, which premiered in London last year, comes to Broadway this spring with a regrettably timely message. The titular giant is Roald Dahl, the cantankerous, 6′6″, much-loved children’s author.
The drama takes place during a 1983 summer luncheon and afternoon hosted by Dahl (John Lithgow) and his soon-to-be second wife, Felicity Crosland (Rachael Stirling). The guests are his British publisher (Elliott Levey) and his American publisher’s sales representative (Aya Cash), both of whom are Jewish.
Worried about sales of Dahl’s new book, The Witches, the publishers want the irascible author to craft an apology for an article in which his criticisms of Israel scandalously veered into antisemitism. Though playwright Mark Rosenblatt completed Giant before Hamas’ attack on Israel, the play’s concerns resonate at a time when criticism of Israel’s conduct in the Gaza war similarly often commingles with a hatred of Jews.
While the play deals effectively with these big issues, it is not in the least didactic. It dramatically presents characters subtly negotiating the entanglements of identity and the perils of cancel culture. The cast is superb, and Lithgow’s portrayal of Dahl’s simultaneous tenderness and monstrousness is perfection.
The post Review: Giant Dramatizes Roald Dahl’s Antisemitism Controversy appeared first on Reason.com.
The definition of a trade-off is “a balancing of factors all of which are not attainable at the same time.” We know there are no perfect choices in anything we do or buy. It’s like the old maxim about any service: there’s quality, speed, and price, but you can only pick two. In the realm of public policy, however, most people think they can have everything without any hard choices.
And so we arrive at the latest debates and legal verdicts about social media and technology. Almost all Americans are addicted, at some level, to smartphones, TV screens, and computers. Our lives have also been enhanced by them. I needn’t detail the immeasurable benefits—the endless information and entertainment that’s literally at our fingertips, or our ability to interact with others in ways that were previously unimaginable.
But there’s a dark side. I can’t manage to watch even the most engrossing movie without scrolling through my phone. Many young people spend more time on their phones than they do on healthier pursuits. Studies show some teens spend hours a day on their phones, and that the heaviest social media users suffer most from alienation and depression.
As a society, we’re trying to work our way through this phenomenon. Depression among teens isn’t new. I spent more than my share of time as a teen wallowing in the usual adolescent misery—and that was before the personal computer and cellphones had been invented. So what do we do? The usual answers may seem quaint, but they remain the gold standard. Parents need to be involved in their kids’ lives. Individuals need to take responsibility for their actions and develop good habits.
Unfortunately, in modern America the answers often involve blaming the companies that sell us the technologies that we really like, turning to legislatures to regulate them, and then suing those companies for outcomes that aren’t really their fault. The issue is bipartisan. Conservatives and progressives sound remarkably alike as they concoct legislation and lawsuits to “protect the children” from ill-defined harms.
The latest news: A California jury awarded $6 million to a 20-year-old woman who has suffered psychological issues that her lawsuit (involving many plaintiffs) blamed in part on Google and Meta. The day before, a New Mexico jury slammed Meta with a $375 million verdict, as it sided with prosecutors who said Facebook and Instagram violated the state’s consumer-protection laws by not providing enough safeguards against child online exploitation.
As NPR reported, the California jury “heard competing narratives about what role social media platforms played in the mental health struggles of Kaley, also identified as KGM. Now 20 years old, Kaley…said she first started using YouTube at 6 years old and Instagram when she was 11.” The plaintiffs argued the platforms were designed to keep teens addicted. The counter argument: The companies that create these apps can’t be blamed for the complex mental issues of everyone who might use them.
I’m a dad and grandad, so I’m sensitive to concerns about children’s mental-health struggles, but the latter argument is the right one. Social media can erode self-esteem, but it also provides valuable content. Studies also show social media helps most teens battle isolation, improve their writing, and find information. Any media platform, new or old, can make anyone feel happy or sad.
Americans accept far more brutal trade-offs, such as nearly 37,000 annual motor-vehicle deaths in exchange for our incredible mobility. We accept the ill effects of alcohol addiction not only because many Americans enjoy drinking, but because we learned a century ago about the futility of banning products the public strongly desires. Even under the most hysterical scenarios, the internet provides nowhere near those levels of carnage.
Thanks to the brilliance of our founding fathers, there are some areas where governments are strictly limited in their ability to consider trade-offs. The First Amendment—Congress shall make “no law…”—doesn’t allow lawmakers to restrict our speech and religious rights no matter what ill effects book-banners and atheists might raise.
This is where these verdicts are troubling, as they “will compel social media companies to restrict access to and features on their platforms in a way that would be unconstitutional if mandated directly by legislation,” as my R Street Institute colleague and tech expert, Josh Withrow, eloquently puts it. They serve as a work-around to Section 230, the federal law that protects online platforms from liability.
An estimated 1,600 similar lawsuits are pending, and it’s not clear what tech companies could do to protect every user from every scenario short of restricting public access to their services. I don’t want to sound callous, but when balancing the trade-off between allowing Americans the freedom to use whatever platform they want and the alternative, I’m all for the former.
This column was first published in The Orange County Register.
The post Lawsuits Targeting Social Media Are an Attack on Free Speech appeared first on Reason.com.
Revealed: All Members Of UK Government’s ‘Anti-Muslim Hostility’ Group Have Islamist Links
Authored by Steve Watson via Modernity.news,
The UK Labour government’s new definition of “anti-Muslim hostility” – rebranded from “Islamophobia” – is being shaped by a working group where every single member has links to Islamist organisations.
The details are exposed in the Free Speech Union’s latest investigative briefing, which highlights ties between the group’s members and the Muslim Council of Britain (MCB) and Muslim Engagement and Development (MEND), organisations that successive governments since 2009 have refused to engage with due to their extreme views.
One member, Baroness Gohir, tweeted in support of Hamas in 2014. Another stood for the far-left, Islamist-supporting Respect Party.
The Free Speech Union’s latest investigative briefing reveals that ALL FIVE members of the Government’s working group tasked with defining “Islamophobia” — now rebranded as “anti-Muslim hostility” — have troubling links to Islamist organisations.
— The Free Speech Union (@SpeechUnion) April 8, 2026
As the Free Speech Union states: “In a free society, no religion should enjoy greater protection than others — nor be shielded from legitimate criticism and challenge.”
The FSU adds: “This group was stacked with members already sympathetic to such a definition.” And with the government yet to appoint a new Islamophobia tsar, “there is deep cause for concern.”
Conservative MP Katie Lam put it bluntly in her video response: “The Government’s new ‘anti-Muslim hostility’ definition will make it harder to talk about Islamist extremism, FGM, and the grooming gangs. They’d rather restrict our right to criticise than deal with these problems head-on. It’s putting us all in danger.”
Parliament abolished blasphemy laws in 2008. Yet as the FSU warns: “This Government risks reviving them for Islam alone, via the back door.”
The wider context is the government’s “Protecting What Matters” report from March 2026, which rolled out the non-statutory definition alongside plans for a special representative on Muslim hostility. Officials insist it protects free speech – but the panel’s composition tells a different story.
Read the full Free Speech Union briefing here.
This comes just weeks after we reported on the government’s leaked social cohesion strategy that branded the Union Flag a “tool of hate” and told schools children’s drawings could be blasphemous under Islamic law.
It builds directly on the Orwellian push we exposed where UK schools are urged to snitch on “anti-Muslim hostility.”
The pattern is clear: criticism of Islam is being reframed as hostility, while real problems like grooming gangs, FGM and Islamist extremism are sidelined.
Challenging Islamist extremism or mass migration’s consequences is now being treated as the real threat. Legitimate debate on integration failures, cultural clashes, or grooming scandals gets reclassified as “hostility” while the actual problems fester.
Britain’s free speech tradition is under sustained assault – not from the public, but from a government more interested in shielding one ideology than defending open society.
The Free Speech Union is right to sound the alarm. Without pushback, this backdoor blasphemy regime will silence the very conversations the country desperately needs.
Tyler Durden
Fri, 04/10/2026 – 05:00
via ZeroHedge News https://ift.tt/NEs6ZTG Tyler Durden
OpenAI Pauses U.K. Stargate Over “Regulation And Power Costs”
OpenAI’s broader Stargate push to build next-generation AI infrastructure in the UK has been put on hold, with the company citing regulatory conditions and high energy costs as major obstacles to long-term investment. That outcome is hardly surprising: Britain, like much of dying Europe, has layered on regulatory burdens, while years of backfiring ‘green’ energy policies have left power costs structurally elevated. It’s a toxic mix for power-hungry AI data center buildouts.
“We see huge potential for the UK’s AI future,” OpenAI told Bloomberg in an emailed statement earlier today. “AI compute is foundational to that goal — we continue to explore Stargate UK and will move forward when the right conditions, such as regulation and the cost of energy, enable long-term infrastructure investment.”
Stargate UK is just one piece of OpenAI’s much larger global expansion plan, which involves spending hundreds of billions of dollars (up to $500 billion) on AI infrastructure to localize and scale AI capabilities.
The pause in Stargate UK signals that growth in AI data center buildouts is colliding with power constraints and regulation in the Western world, where left-wing leaders prioritize de-growth economies and extremist climate policies. On the other side of the world, China did the complete opposite, boosting baseload capacity on the grid with some of the dirtiest mixes of power generation.
Similar OpenAI projects are underway in Norway and the United Arab Emirates. The core buildout has been in the US, specifically the flagship data center in Abilene, Texas. However, the company abandoned a planned expansion of that data center.
OpenAI’s global compute buildout takeaway:
US = scale + policy support
Middle East = capital + energy
Nordics = cheap power + cooling
UK/Europe = constrained by cost + regulation
Last week, Bloomberg reported that nearly half of the US data centers planned for this year were delayed or canceled, not because of memory chip shortages but because of shortages of electrical equipment such as transformers, switchgear, and batteries.
It certainly appears that data center buildouts are running into real-world constraints that could be a negative for AI momentum trades.
Tyler Durden
Fri, 04/10/2026 – 04:15
A federal judge sentenced former Milwaukee police officer Juwon Madlock to five years in federal prison followed by three years of probation after he pleaded guilty to misconduct in office. Prosecutors say Madlock used his position as an officer to help a local street gang, including helping them hide stolen cars, offering to sell them guns, and lying to the FBI. They say he also shared sensitive police information, including pictures of fellow officers, and even told members where to find rival gangs, knowing it could lead to violence.
The post Brickbat: A Man on the Inside appeared first on Reason.com.