Gardenbuilderdom

Good read! It’d be cool to see you talk to Liron Shapira on his channel “Doom Debates” to flesh this out more! I’ll take part in this discussion a little here though, if you don’t mind!

I think the argument in the post about delaying immediate benefits comes down to what you value and what probability you assign to the risk.

I think most people, including me, would agree that AI helping people sooner would be great, and if you believe the probability of existential risk is low, then it may be worth it to roll the dice.

However, I think a good counter worth exploring is this: if someone believes the probability of existential risk is high (let’s say 30-50%), do we then place a higher value on the potential lives of the trillions or more people who could exist in the future, if humanity survives, than on those immediate benefits?

Going back to the benefits you’ve heard about in materials science, neuroscience, etc.: it seems like developing Tool AI is really beneficial, and I agree that we shouldn’t just immediately ban all AI (especially for game-theoretic reasons with international players). But I think there is a difference between AGI and Tool AI.

Running out of characters haha! Overall great read and I agree with you in that we need more people to discuss AI!
