“Imagine A Technological Breakthrough That Will Give Its Inventor Power Over Humanity… Can Such A Breakthrough Be Used For Good?”

By Eric Peters, CIO of One River Asset Management

“President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” announced the White House homepage in large, bold font [here].

It’s a rather weak name for such an important order, probably the product of a committee, or some first-year McKinsey consultant. So, I asked GPT-4 for a new name, perhaps biblical, playing on the theme of Eve and the forbidden fruit from the tree of knowledge, or maybe something Orwellian, like the deficit-fueled extravaganza they called the Inflation Reduction Act. And GPT-4 suggested the “AGI Safety Act.” 

“Imagine a technological breakthrough that will give its inventor power over humanity and can be used for both extreme evil and good,” I prompted GPT-4.

The Bletchley Declaration – a pledge of international cooperation in the quest to regulate frontier Artificial Intelligence – had just been signed by 28 nations, including the US, China, Japan, the EU, UK, India, Brazil, Canada, Australia, UAE, and Saudi Arabia [here].

“Assume many players are investing massively to be first to make the breakthrough,” I continued typing into GPT-4’s neural interface.

“Will it be possible to coordinate their behaviors so that the breakthrough is used for good and spread across humanity in a way that is fair to all?” I asked.

“And how can this be possible, when the risks of losing this competition are potentially existential, while the rewards are so great?”

I was just curious.

OpenAI’s Sam Altman was at the UK’s Bletchley conference, as was Elon Musk, Salesforce’s Benioff, Google’s Manyika and Hassabis, and Arm’s Haas.

Each major government had senior representatives. Naturally, Putin couldn’t make it. Nor any Ayatollahs. And the North Korean kid can’t stomach long train trips.

“Achieving coordination among competing players for the common good, especially when the stakes are incredibly high, is indeed a challenging endeavor. However, with the right combination of diplomacy, strategic planning, and incentives, it might be possible. Here’s a strategy to navigate such a situation,” answered GPT-4.

It gave me a sensible ten-point plan, modeled on the approach humanity has used to address nuclear proliferation.

I asked if there were other examples.

GPT-4 highlighted the Human Genome Project, ozone-layer depletion, climate change via the Paris Agreement, and global health emergencies via the WHO, which I think must have been a joke.

Which suggests that in all of human history there is really just one example: our unsuccessful effort to prevent a nuclear arms race.

Which is surely the direction the AGI sprint is headed. And GPT-4 reluctantly agreed. 
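The logic is the old arms-race game. As a minimal sketch (my own illustration, with made-up payoff numbers), treat the AGI sprint as a one-shot prisoner’s dilemma between two labs: mutual restraint beats mutual racing, but losing the race while restrained is existential, so racing strictly dominates.

```python
# A minimal sketch of the AGI race as a one-shot prisoner's dilemma.
# The payoff numbers are illustrative assumptions, not data: mutual
# restraint beats mutual racing, but racing strictly dominates.
from itertools import product

ACTIONS = ("restrain", "race")

# payoffs[(a, b)] = (payoff to Lab A playing a, payoff to Lab B playing b)
payoffs = {
    ("restrain", "restrain"): (3, 3),    # coordinated, safe development
    ("restrain", "race"):     (-10, 5),  # the restrained lab loses -- existentially
    ("race",     "restrain"): (5, -10),
    ("race",     "race"):     (-5, -5),  # arms race: everyone worse off
}

def best_response_A(b):
    """Lab A's payoff-maximizing action given Lab B plays b."""
    return max(ACTIONS, key=lambda a: payoffs[(a, b)][0])

def best_response_B(a):
    """Lab B's payoff-maximizing action given Lab A plays a."""
    return max(ACTIONS, key=lambda b: payoffs[(a, b)][1])

# A pure-strategy Nash equilibrium: each action is a best response to the other.
for a, b in product(ACTIONS, ACTIONS):
    if a == best_response_A(b) and b == best_response_B(a):
        print(f"equilibrium: ({a}, {b}) -> payoffs {payoffs[(a, b)]}")
# Prints only (race, race): absent enforceable coordination, defection
# dominates -- the nuclear-proliferation dynamic in miniature.
```

Under these assumed payoffs the only equilibrium is mutual racing, which is why any Bletchley-style answer has to change the payoffs themselves: verification, enforcement, and shared upside, the same levers used against nuclear proliferation.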

Tyler Durden
Mon, 11/06/2023 – 07:20

