Update 2025-09-13 (PST) (AI summary of creator comment): - Quoted words must be used verbatim by Yudkowsky.
For the option "calls for a 'pause' or 'moratorium' or 'ban' on frontier AI training": he must say one of "pause"/"moratorium"/"ban"; the exact phrase "frontier AI training" is not required (equivalent phrasing is acceptable).
I don’t know. I think that OpenAI was caught flat-footed when they first published ChatGPT and that caused a massive shift in public opinion. I don’t think OpenAI predicted that. I didn’t predict it. It could be that any number of potential events cause a shift in public opinion.
I would definitely not call that a criticism. He's saying it's really hard to predict what will shift public opinion.
@Sketchy both looking at the transcript and relistening to the discussion around 31 minutes into the video, he says the word "strike" twice but never "airstrike" and never "bomb". I think this must resolve NO. https://youtu.be/KKN0E3a2Yzs?si=nJ8FSl5kIbiJzLDa
@Sketchy actually, I also have a question about the quotes here. Does he need to literally say the thing in quotes? And if not, then I'm confused about what the purpose of the quotes was at all.
@EniSci yes, it needs to be the literal words in quotes (but not necessarily the phrase “frontier ai training”). That’s why it’s not resolved yet, I need to go listen more carefully.
@Sketchy there is a transcript available on the NYT site (at least if they don't paywall it for everyone who reads them more than once a month): https://www.nytimes.com/2025/09/12/podcasts/iphone-eliezer-yudkowsky.html?showTranscript=1
(Oh, also I have another comment; I'm not sure the tagging worked after I edited it.)
@EniSci someone else said the word "moratorium", and Eliezer said he supported it. So I think it's a YES?
kevin roose
So, just to finish the comparison to nuclear proliferation here, it would be immediate moratorium on powerful AI development, along with an international nuclear style agreement between nations that would make it illegal to build data centers capable of advancing the state of the art with AI. Am I hearing that right?
eliezer yudkowsky
All the AI chips go to data centers. All the data centers are under an international supervisory regime. And the thing I would recommend to that regime is to say, just stop escalating AI capabilities any further. We don’t know when we will get into trouble. It is possible that we can take the next step up the ladder and not die. It’s possible we can take three steps up the ladder and not die. We don’t actually know. So we got to stop somewhere. Let’s stop here. That’s what I would tell them.
@jack I assumed that if we're checking for a specific word in quotes, then it should be literal in context, not that you search for the literal word and then vaguely interpret the whole sentence?
@Sketchy why "yes"? IIRC he never refuses; they just don't ask.
I'm now worried that other questions will also be resolved differently from what I expected based on their phrasing.
I think it would be better to resolve this one 50-50, or even to go back, add two more exact variants, and wait for people to retrade.
@EniSci oh right - sorry, I'll be honest, I saw it traded at 99% and didn't think too carefully beyond "he didn't give a number".
I think it’s mildly subjective but agree with you that the better resolution is NO.
My memory is the same as yours, but if anyone finds a segment where he refuses I’ll flip.
@jack I personally thought the literal interpretation is that he didn't, but I suspected it might have been intended as a counterpart to another variant, written without considering that the question might not come up at all, so I bet less on that one than on the others.
Also, I generally think that for things like this, just interpreting everything literally is better practice, because lots of different people may have very different non-literal interpretations, and it's best to avoid ambiguity when you can.
Though since this meta-rule is itself ambiguous, I think the question is ambiguous too; that's why I didn't just ask to flatly resolve it NO.
@Sketchy while Yudkowsky only says conventional strike, later the host asks 'what is the equivalent of the bombs dropping on Hiroshima and Nagasaki for AI....' to which Yudkowsky responds '....OpenAI was caught flat footed...' https://youtu.be/KKN0E3a2Yzs?si=jZN-UKp4AgDir350&t=2222
@bashmaester I don't think this is about enforcement. The way I heard it, this is drawing an analogy between nuclear risk and AI risk.
@jack there is another:
kevin roose
And what do you do if a nation goes rogue and decides to build its own data centers and fill them with powerful chips and start training their own superhuman AI models? How do you handle that?
eliezer yudkowsky
Then that is a more serious matter than a nation refining nuclear materials with which they could build a small number of nuclear weapons. This is not like having five fission bombs to deter other nations. This is a threat of global extinction to every country on the globe. So you have your diplomats say, stop that. Or else we, in terror of our lives and the lives of our children, will be forced to launch a conventional strike on your data center.
And then if they keep on building the data center, you launch a conventional strike on their data center.
The first is about enforcement; in the second, "bomb" does get used ("five fission bombs"). What I'm not sure about is whether the plural form counts. Or, if it's not intended to be literal despite the quotes, then I think "conventional strike" counts as an airstrike. @Sketchy?
Insider trading from any of the three Manifold users who will participate in the podcast is encouraged!