Rule-based artificial intelligence (RB-AI) systems offer a high degree of interpretability and control, making them particularly suitable for safety-critical domains such as healthcare, transportation, and industrial automation. However, ensuring their correctness is challenging, as rule interactions may cause logical conflicts, redundancies, and livelocks—issues that are difficult to detect through conventional testing, especially in dynamic environments. This article presents a formal approach to the specification and verification of RB-AI systems using Event-B, a state-based formal method grounded in set theory and first-order logic. In this approach, rule activation, condition monitoring, and action execution are specified as part of a formal Event-B model, in which critical properties—such as safety, liveness, conflict freedom, redundancy elimination, and livelock prevention—are encoded as invariants and theorems. These properties are automatically verified using the Rodin platform, which generates and discharges proof obligations to ensure that rule execution is both sound and traceable. A case study on SmartAir, an AI-driven fan control system, demonstrates how the method detects rule conflicts, eliminates redundant behavior, and prevents livelocks. The results show that the approach provides a solid foundation for building RB-AI systems with formal assurance, with applicability to real-world contexts where verifiability is essential.