In an era of rapidly advancing AI capabilities, safety and risk management have become paramount. While AI currently serves as a beneficial tool, its continued evolution poses significant risks. The possibility of AI surpassing human intelligence, or of its goals becoming misaligned with human values, raises concerns about existential risks comparable to, or potentially exceeding, those of nuclear warfare. High-level attention, including congressional inquiries and presidential directives, underscores the gravity of these concerns.