Kamini Chauhan Tanwar
Biography
Prof. (Dr.) Kamini C. Tanwar is Professor and Director, Amity Institute of Clinical Psychology, Amity University, Haryana.
Research Interest
Psychosocial Behaviour among Children, Clinical Psychology
Abstract
COGNITIVE DECISION MODELLING IN ONLINE FRAUDS
Online exploitation and scams, including phishing, romance fraud, investment cons, and deepfake-driven deception, exploit cognitive biases and emotional triggers rather than purely technical vulnerabilities. While cybersecurity tools focus on detecting malicious content, they often ignore the human decision-making process that leads to victimization. Understanding and modelling how users cognitively process scam cues can enable proactive interventions before any harm occurs. Artificial intelligence (AI) now underpins much of cybersecurity communication worldwide, powering threat detection, auto-generated alerts, and public advisories. Yet the psychological dynamics by which users interpret, trust, and act on these AI-mediated messages remain scattered across policy documents, Human-Computer Interaction (HCI) studies, and awareness research.
Several gaps can be broadly identified in behaviour assessment related to online exploitation and scamming: (1) existing fraud detection focuses on attacker techniques, not victim cognition; (2) psychological decision-making theories are rarely integrated with AI-driven detection; and (3) there is no real-time risk scoring based on a user's mental state and behavioural cues. It needs to be examined how AI-enabled or automated security communication shapes risk perception, trust in automation, affect (e.g., anxiety, reassurance), and compliance behaviours.
Previous articles and HCI studies suggest three key themes. First, trust in automation plays a decisive role in user behaviour: appropriate trust increases compliance, while misplaced trust may result in over-reliance or disregard for legitimate alerts. Second, message framing and tone significantly affect perceived threat severity and emotional reactions; urgent framing can prompt quicker action but may also heighten anxiety. Third, the rise of AI-enhanced social engineering, such as deepfakes, has introduced new vulnerabilities, making culturally relevant and proactive communication strategies essential.
To study AI-psychology integrated cognitive decision modelling quantitatively, and to detect and predict a victim's susceptibility to online scams from behavioural, linguistic, and psychological indicators, there is a need to (1) identify and map the key cognitive biases and heuristics exploited in common online exploitation methods; (2) collect and annotate real-world scam communication datasets with psychological manipulation indicators or labels; (3) build AI models to detect high-risk decision points in interactions related to cyber exploitation and scams (a minimal illustrative sketch follows the abstract); and (4) simulate intervention strategies to reduce a victim's susceptibility.
Such a research project will focus on psychological vulnerability modelling and the possible relationship between cognitive biases and susceptibility to online exploitation techniques, applicable to email and text phishing, social media scams, investment and romance frauds, and deepfake-enabled deceptions. The effectiveness and limitations of AI in predicting risky decision points will be brought out. Practical applications of such modelling in building cybersecurity awareness, generating alerts such as those issued by a Security Operations Centre, and building scam prevention tools will also be covered. Designers should consider cultural context and linguistic diversity when crafting alerts, while policymakers should integrate psychological principles into national cybersecurity standards.
By aligning AI capabilities with human factors, the world can strengthen user resilience and ensure that technology-driven alerts translate into timely, protective action.
Keywords: Cognitive decision-making modelling, online exploitation, fraud, scam, Artificial Intelligence (AI), AI-Psychology
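As an illustration of objective (3) above, the sketch below shows one way a scam message could be scored for psychological manipulation cues. It is a minimal, hypothetical example, not the project's actual pipeline: the toy messages, labels, and feature choice (TF-IDF text features with a logistic model) are assumptions for illustration, and a real system would require the properly annotated dataset described in objective (2).

# Minimal sketch (assumed, not the authors' method): score a message for
# psychological manipulation cues with a simple text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated data: 1 = manipulative scam cue, 0 = benign.
messages = [
    "URGENT: your account is locked, verify within 24 hours",    # urgency/scarcity cue
    "Hi, attached is the agenda for Monday's meeting",            # benign
    "You have been specially selected for a guaranteed return",   # flattery/greed cue
    "Reminder: library books are due next week",                  # benign
]
labels = [1, 0, 1, 0]

# TF-IDF word and bigram features feed a logistic model whose probability
# output can be read as a rough "manipulation risk" score per message.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

risk = model.predict_proba(["Act now or lose access to your funds"])[0][1]
print(f"Manipulation risk score: {risk:.2f}")  # could trigger an alert above a threshold

In a full system, the text features would be augmented with the behavioural and psychological indicators the abstract describes, and the probability output could drive real-time alerts of the kind a Security Operations Centre issues.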