
I, or someone inheriting this question, will poll people on it in 2100 and resolve each answer to the proportion of poll respondents who answered "yes".
I'll give you 5 to 100 Manifold bucks if you post another good possible answer in the comments.
The question asking whether the polled respondent is a person is there to control for scenarios where something weird happened while the responsibility of resolving this question was passed down.
Apologies to any non-humans who join human discourse between now and 2100. I'll replace the term "humanity" once I find a non-confusing term encapsulating the group of all nearby moral patients.
Huh, another AGI survival prediction market?
Yes, this one is not a "pick one from many" but just a collection of yes/no questions, which I think is more informative.
- By Isaac King
- By Yudkowsky
- By Yudkowsky's community
I can't see a path where this would ever be true. AI is too valuable a thing for everyone to link hands and cooperate.
Additionally, this seems like an incomplete answer. Unless the moratorium lasts for nearly 75 years, the only way it'd help is by giving time to agree on some other "real" fix that will endure. But in that case, the moratorium will just be a footnote in history.
One path is that the most powerful AIs are never given intrinsic motivations of their own.
If a person uses AI as a tool to engineer a super virus, who actually killed humanity? The gun, or the person holding the gun?
In this "the person holding the AI did it" viewpoint, we're just as likely to see AI used to save humanity, e.g. being used to help develop a vaccine.
To develop this a little further: what if we handed every org of more than 100 people the ability to launch a nuclear warhead? That kind of power in the hands of so many people is obviously folly, because someone is going to decide to launch.
AI is of course the warhead, but AI is more flexible. Every org is going to consider the risk, and many of them will set their AIs to work on building a nuclear defense shield (or equivalent).
Offense is usually easier than defense. I think humanity will be in for a rough time. But AI will get used for both roles, and it may ultimately be a human who's "to blame" if AI shatters civilization.
This question got me thinking about optimal formats for this, so I'm trying a weird one where all trades are cancelled after a week but then it re-resolves in 2060.
I am sufficiently pessimistic about humanity's ability to coordinate that I think most surviving worlds in 2100 are ones in which we are just lucky and it turns out that the relevant technology is much harder to invent than we think it is. Specifically, we might be lucky and:
A) The next AI breakthrough on the order of Transformers simply never arrives. LLMs keep getting better, but no amount of additional training data makes them a superintelligence.
or
B) It turns out there are no superweapons. Nanotechnology just doesn't work how we currently expect it to, engineering super-viruses turns out to be impossible, etc. Without any easy way to kill us all instantly, AI decides to work with us instead.
I am sure someone could phrase these better than me, but they are what I'm hoping for. I still think we should be desperately trying to coordinate moratorium treaties and develop human intelligence augmentation etc, but I doubt we pull those off.
@Joshua I’d like to second (B) especially. I work in nanoscience and I’m shocked by how seriously people take Eric Drexler’s ideas (I hesitate to say pseudoscience, but they’re certainly not very rigorous). I just don’t think it’s plausible that even a superintelligence could figure out how to engineer self-replicating nanobots and the like.
@Joshua Thanks. I totally missed this despite doing some searching. Maybe I'll close the market if the overlap is too large.
@Jono3h Don't close it! An unlinked multichoice is much better than those old linked parimutuel markets.
@Joshua Oh, this also exists; it's a duplicate of EY's market but in the current linked format that allows shorting:
So I would expect it to perhaps have more accurate percentages than EY's original, even though it has fewer traders. Probably also worth including the description?
My reasoning is that in general, large groups of people mostly make big changes in response to disasters.
Most of my probability mass is on things like energy scarcity, climate issues, etc. that just make AI research infeasible.
Also significant is a failed takeover, which would make everyone understand the risk more viscerally. But that's hard to estimate: it's hard to imagine an AI causing significant enough damage without also just winning.
If it turns out to be too hard to make creative agents, then we survive for free. I wouldn't count on it but possibly it's true.