2023: Even closer to the end.
When I joined Eliezer's SL4 forum over 20 years ago, I was hopeful that Eliezer and his ultra-smart colleagues would solve Friendly AI before researchers figured out AGI. I thought they'd find a way to mathematically prove Friendliness in a form that would remain stable through recursive self-improvement. I threw out some ideas at the time to help, but obviously nobody listened; I reposted those ideas on this blog years later. Now, 20 years on, AGI looks closer than ever.

Around 2007, I realized that the problem now known as AI Alignment is likely too hard for humans to solve, and that the default scenario would be companies, in pursuit of profit and fame, finding themselves in an insane arms race to develop the most capable AI, disregarding Alignment, which, of course, would lead to the end of humanity. So, around 2007, I simply abandoned hope and started living my life as a "normie" as best I could, far away from transhumanist forums, trying to complete the few bucket list items I could, knowing the end was inevitable.

Well, it's 2023 now, and recent developments in generative AI suggest there's less time left than I thought. A few years ago I honestly believed we still had 10-20 years before AGI, perhaps enough for something like Neuralink to bootstrap an ethical human (or humans) through intelligence amplification so they could work on AI Alignment, but now it looks like synthetic AGI is coming within a few years at most. It's too late to do anything about it. Society is too dumb to understand the implications of this technology and will not pause its development. So it's full steam ahead into oblivion. Only a miracle could save us at this point. But miracles are very low-probability events.

Labels: ai, alignment, chatGPT, eliezer, singularity