I finished listening to Yudkowsky’s book, “If Anyone Builds It, Everyone Dies”. He makes several compelling arguments, most notably this: if even a few AI builders suggested there was a hypothetical 20% chance that AI would ‘kill us all’, the rational response would be to halt progress and proceed far more cautiously than we are. As it is, most experts acknowledge a very real chance of annihilation, but because it’s a hyper-competitive race, everyone is forging ahead anyway. It’s insanity.
The parallels with climate change are striking. We know we are risking our future, but it isn’t stopping us.
Yudkowsky differs in that he claims with certainty that an AI super-intelligence would kill us all, yet he builds that argument on the premise that, because we are ‘growing’ AI more than ‘designing’ it, we have no idea what we’re creating. But if we truly have no idea, isn’t there a chance the AI might do something different from what we expect?
He also seems to believe that, when presented with evidence that AI is extremely dangerous, mankind will back off the technology. Anyone who has followed climate change will see how ridiculous this is. Instead of reaching consensus, we’ll endlessly argue about it, and instead of backing off, we’ll continue anyway.
The Pentagon and Hegseth are currently fighting for the ability to let AI control autonomous weapons systems and to let it spy on U.S. citizens.
AGI, an AI that can draw on multiple fields and match or beat human-level intelligence in all of them, is thought by many to be less than 10 years away. Dario Amodei, co-founder and CEO of Anthropic, said in a recent interview that he thinks it’s probably only a year or two away. He has also mentioned that Anthropic’s engineers aren’t certain that Claude isn’t conscious right now.
ASI, artificial super-intelligence, is another step up from AGI, but it might be a very small step for a true AGI: simply a matter of increasing capacity. We humans have very little concept of what that could mean. Almost all of our sci-fi deals with AI that is roughly ‘equal’ to us (Data in ST:TNG, C-3PO in Star Wars; even the Cylons in Battlestar Galactica aren’t much better than humans). An ASI would be many, many times better than us at everything, and able to think and act many times faster. Next to it, we’d look like we were standing still, barely smarter than a rock by comparison.
Even a benign ASI would create mass existential dread, as the joke commercial above alludes to. What purpose do we serve if we aren’t individually great at anything, or even necessary for anything?
Another parallel with climate change is the average person’s general feeling of helplessness in all of this. We’re being dragged along for the ride, whether we like it or not, by a tiny handful of people leading the charge. It’s depressing and infuriating.
Finally, AI was put to comedic use!
This video has a decent synopsis of Yudkowsky’s arguments:
https://www.youtube.com/watch?v=xfMQ7hzyFW4