The ongoing discourse surrounding AI safety and its potential existential risks has escalated into intense polarization, with participants taking entrenched positions: "AI Doomers" versus "AI Denialists."
Such adversarial labeling does little to foster productive discussion.
This conversation should be grounded in empirical evidence, rigorous cost-benefit analyses, and a comprehensive understanding of potential risks and benefits.
This matters greatly as we prepare for the AI revolution and the potential advent of superintelligent AI systems; our civilization must be sufficiently equipped to handle the challenges ahead.
Here is how I prefer to frame the issue:
On the one hand, some adhere closely to the Precautionary Principle. This perspective suggests that if a new technology, like AI, has the potential to cause significant or lasting harm to people or the environment, and if the science behind it isn't fully understood or agreed upon, then the burden of proof falls on those promoting this technology to demonstrate its safety.
On the other hand, some subscribe to the Proactionary Principle. This viewpoint encourages the continued pursuit of progress and innovation while acknowledging that mistakes may be made along the way. The key is to learn from these missteps and take measures to correct them.
I align more with the Proactionary Principle in discussing new technologies like AI.
That said, I don't dismiss the need for caution. But absent conclusive evidence that a new innovation poses an existential threat, I believe in moving forward, with every step taken thoughtfully and with reasonable precautions.
Furthermore, I strongly advocate for the creation of a more resilient civilization. The strength of a society lies in its ability to recover from setbacks without total devastation. Therefore, as we embrace the potential of AI, we must also build safeguards that prevent one misstep from causing irrevocable harm.
In essence, I champion the advancement of technology, but not at the expense of caution and resilience. It's a balance that I believe is crucial to our responsible and beneficial engagement with AI and other emerging technologies.
Paradoxically, not advancing toward the development of superintelligent AI might itself pose a significant existential risk to humanity: as our world becomes more complex and interconnected, such AI may be our best, or even only, option for effectively managing our future.