International Conference on Machine Learning, Artificial Intelligence and Data Science

Jyoti Kunal Shah

Biography

Jyoti Shah is a seasoned technology leader with over two decades of experience in application development, digital transformation, and AI innovation. As Director of Application Development at ADP, she combines deep technical acumen with strategic vision to drive scalable enterprise solutions. With 15 years as a full stack developer, Jyoti has mastered modern technologies including React, Angular, Java, and JavaScript. In recent years, she has led AI-powered initiatives that optimize client engagement and sales intelligence. 
Jyoti is also a passionate advocate for inclusion and community growth: she is one of the leaders of the IWIN (International Women's Inclusion Network) chapter at ADP and actively volunteers across multiple social causes. A committed mentor and hackathon judge, she is known for nurturing talent, fostering cross-functional collaboration, peer reviewing others' work, and aligning technology with business value. Jyoti's leadership lies at the intersection of innovation, operational excellence, and impact-driven development.
 

Research Interest

Jyoti Shah's research focuses on the intersection of Generative AI, Big Data Analytics, Cloud Computing, and Machine Learning, with a strong emphasis on scalable architectures and strategic program management for enterprise innovation.

Abstract

Balancing Narrative Complexity and Actionability in Real-Time Anomaly Detection Using Generative AI

Real-time anomaly detection is a mission-critical component of modern analytics pipelines in domains such as cybersecurity, finance, healthcare, and cloud infrastructure. However, merely identifying anomalies is not sufficient; actionable interpretation of those anomalies is increasingly expected in the form of concise, human-readable narratives. Generative AI models, particularly large language models (LLMs), have emerged as powerful tools for producing such explanations, offering context-aware and natural language outputs. Yet, in latency-sensitive and high-volume environments, verbose explanations may overload operators, while overly concise messages risk losing critical diagnostic information. This paper explores how generative models can dynamically balance the trade-off between narrative complexity and brevity to deliver effective, real-time insights. We present an architecture that integrates LLMs into existing anomaly detection pipelines, introducing components such as verbosity controllers, context aggregators, and user-role adaptors. Through a case study involving a large-scale cloud infrastructure monitoring system, we demonstrate the tangible benefits of adaptive summarization in reducing mean-time-to-resolution (MTTR) and improving operator trust. We also discuss key limitations such as context window constraints, explanation fidelity, and the challenge of model drift. Finally, we outline future directions, including reinforcement learning with human feedback (RLHF), federated learning for explanation models, and multimodal summarization. This work offers a foundational framework for deploying scalable, interpretable, and human-centric anomaly explanation systems using generative AI.
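
To make the verbosity controller and user-role adaptor ideas concrete, the following Python sketch shows one plausible way such components might map operator role and anomaly severity to an explanation length budget before prompting an LLM. This is an illustration only, not the paper's implementation: the class and function names, the role-to-budget table, and the generic llm_client.complete call are hypothetical placeholders.

    from dataclasses import dataclass

    @dataclass
    class Anomaly:
        source: str        # e.g. "payments-api"
        metric: str        # e.g. "p99_latency_ms"
        severity: float    # 0.0 (benign) .. 1.0 (critical)
        details: dict      # raw diagnostic context from the detector

    class VerbosityController:
        """Chooses a narrative token budget from operator role and severity."""

        ROLE_BUDGETS = {"on_call": 60, "sre": 150, "analyst": 300}  # max tokens per role

        def budget(self, role: str, anomaly: Anomaly) -> int:
            base = self.ROLE_BUDGETS.get(role, 120)
            # More severe anomalies earn a longer, more detailed narrative.
            return int(base * (1.0 + anomaly.severity))

    def generate_explanation(llm_client, anomaly: Anomaly, role: str) -> str:
        """Builds a role-aware prompt and asks the LLM for a bounded explanation."""
        max_tokens = VerbosityController().budget(role, anomaly)
        prompt = (
            f"Explain this anomaly for a {role} in at most {max_tokens} tokens.\n"
            f"Source: {anomaly.source}, metric: {anomaly.metric}, "
            f"severity: {anomaly.severity:.2f}\n"
            f"Context: {anomaly.details}"
        )
        # llm_client is assumed to be any wrapper exposing complete(prompt, max_tokens).
        return llm_client.complete(prompt, max_tokens=max_tokens)

In such a design, the context aggregator would populate Anomaly.details from upstream telemetry, and the budget could additionally be tuned from operator feedback, which is where the RLHF direction mentioned in the abstract would fit.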