
Will AI accelerators improve FLOPs/watt by 100x over an NVIDIA H100 by 2033?
90% chance
Compared to an H100, will tensor TFLOPS/watt improve by 100x by 2033? AI accelerators in scope for this question must be deployed at significant scale (at least 100k units or $100M, in 2024 dollars, in production) and must have published performance-per-watt numbers.
This market will count peak FLOPs/watt at k-bit precision, adjusted by a factor of 2^(1 - 32/k); that is, 16-bit precision counts 1/4 as much as 32-bit, which counts 1/4 as much as 64-bit precision.
This question is managed and resolved by Manifold.
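For concreteness, here is a minimal Python sketch of how the precision adjustment above could be applied, assuming approximate public H100 SXM figures (roughly 989 dense FP16 tensor TFLOPS at a 700 W TDP). The function name and baseline numbers are illustrative assumptions, not part of the market's resolution criteria.

# Minimal sketch of the market's precision-adjusted FLOPs/watt metric.
# Baseline H100 figures below are approximate public specs, used only for illustration.

def precision_adjusted_flops_per_watt(peak_flops: float, watts: float, k_bits: int) -> float:
    # Peak FLOPs/watt at k-bit precision, scaled by 2**(1 - 32/k_bits)
    # as stated in the market description.
    return (peak_flops / watts) * 2 ** (1 - 32 / k_bits)

# Assumed baseline: ~989e12 dense FP16 tensor FLOPs at ~700 W TDP (H100 SXM).
h100_adjusted = precision_adjusted_flops_per_watt(989e12, 700, k_bits=16)

# A qualifying accelerator would need to reach 100x this adjusted baseline.
target = 100 * h100_adjusted
print(f"H100 adjusted FLOPs/W: {h100_adjusted:.3e}")
print(f"100x threshold: {target:.3e}")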
Related questions
What will be the maximum achievable FLOP utilization on the next generation of Nvidia server chips?
Will there be an announcement of a model with a training compute of over 1e30 FLOPs by the end of 2025?
5% chance
When will a US government AI run overtake private AI compute by FLOP?
Will any chip maker other than NVIDIA, Intel, and AMD create accelerators for Deep Learning and be profitable by 2030?
90% chance
Will China develop domestic GPU with comparable performance to NVIDIA H100 by 2026?
86% chance
Will a machine learning training run exceed 10^26 FLOP in China before 2026?
57% chance
Will a machine learning training run exceed 10^25 FLOP in China before 2025?
77% chance
Will the purchase of 3k NVIDIA H100 chips through Saudi Arabia's KAUST lead to a functional form of generative AI by June 2024?
15% chance
Will a machine learning training run exceed 10^27 FLOP in China before 2030?
66% chance
Will a machine learning training run exceed 10^25 FLOP in China before 2027?
82% chance