International Conference on Artificial Intelligence and Cybersecurity

Chinmaya Kumar Nayak

Biography

Career Objective

To contribute to academia and research by leveraging technical skills, innovation, and creativity, while working with dynamic minds to make a positive societal impact.

Educational Qualifications

1. Ph.D. in Computer Science & Engineering, VSSUT, Burla – Feb 2021
2. M.Tech in Computer Science & Engineering, College of Engineering, Bhubaneswar (BPUT) – CGPA: 8.33 – 2010
3. B.E. in Information Technology, BCET, Balasore (BPUT) – 70.14% – 2006

Professional Experience (17+ Years)

1. Associate Professor & Head, CSE-AIML – FET, Sri Sri University (July 2023 – Present)
2. Sr. Asst. Professor & Program Coordinator, CSE-AIML – FET, Sri Sri University (July 2021 – June 2023)
3. Associate Professor, CSE – GITA Autonomous College, Bhubaneswar (Apr – June 2021)
4. Assistant Professor, CSE – GITA, Bhubaneswar (2010 – 2021)
5. Lecturer, CSE – GITA, Bhubaneswar (2009 – 2010)
6. Lecturer, CSE – PIET, Rourkela (2007 – 2008)
7. Software Engineer – SR InfoTech, Gurgaon (2006)

Awards & Recognitions

1. Sandeep Mohapatra Memorial Medal – Institution of Engineers (India), 2015 & 2016
2. Best Teaching Award – International Education Awards (2020, 2022, 2023)
3. Research Excellence Award – International Education Awards, 2020
4. Teaching Excellence Award – World Charitable Trust, New Delhi (2022, 2023)

Research Interests

1. Artificial Intelligence & Machine Learning (AI/ML)
2. Wireless Sensor Networks (WSN) & Energy Optimization
3. Internet of Things (IoT)
4. Network Security & IP Subnetting
5. Data Science & Predictive Analytics

Abstract

Tricking the Brain of AI: The Cybersecurity Challenge

Artificial Intelligence (AI) is rapidly transforming cybersecurity, enabling faster threat detection, smarter decision-making, and predictive defense mechanisms. However, AI systems themselves are not immune to attack. Adversarial Machine Learning (AML) techniques can manipulate AI "brains" by subtly altering inputs, often in ways invisible to humans, causing models to make incorrect or even dangerous decisions. This presentation explores how hackers exploit these vulnerabilities to trick AI, from bypassing facial recognition systems to fooling malware detectors. We will discuss real-world examples, the science behind adversarial attacks, and the implications for AI reliability and trust. Finally, we will outline emerging defense strategies to make AI models more robust, secure, and resilient in the face of evolving threats. By understanding how AI can be deceived, we can build systems that are not only intelligent but also trustworthy and secure.
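The input manipulation described in the abstract can be sketched with a minimal, self-contained toy example in the spirit of gradient-sign attacks (e.g. FGSM). Everything here is illustrative and not from the talk: the linear classifier, its weights, and the `fgsm_perturb` helper are all hypothetical, chosen only to show how a small, structured change to an input can flip a model's decision.

```python
import numpy as np

# Hypothetical toy linear classifier: predicts class 1 if w.x + b > 0.
w = np.array([1.0, -2.0, 3.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y_true, eps):
    """FGSM-style step: move each feature by eps in the direction
    that increases the logistic loss for the true label."""
    z = w @ x + b
    # For logistic loss, d(loss)/dx = (sigmoid(z) - y_true) * w.
    grad_x = (sigmoid(z) - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([2.0, 0.5, 0.1])          # original input, true label 1
x_adv = fgsm_perturb(x, y_true=1, eps=0.5)

print(predict(x), predict(x_adv))      # the decision flips from 1 to 0
```

In real attacks the same idea is applied to image pixels or malware features with a perturbation budget small enough to be imperceptible; the point of the sketch is only that the gradient tells the attacker exactly which direction to nudge each input.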