The Threats of Cyber Scams Using Artificial Intelligence (AI) Technology

Keywords: Cyber scams, Artificial Intelligence

SCHOOL OF INFORMATION SCIENCE
COLLEGE OF COMPUTING, INFORMATICS AND MATHEMATICS
UNIVERSITI TEKNOLOGI MARA

BACHELOR OF INFORMATION SCIENCE (HONS) INFORMATION SYSTEMS MANAGEMENT (IM245)
MANAGEMENT OF INFORMATION SYSTEMS DEPARTMENT (IMS656)

INDIVIDUAL ASSIGNMENT: THE THREATS OF CYBER SCAMS USING ARTIFICIAL INTELLIGENCE (AI) TECHNOLOGY

BY
NOR FAIDZAH BINTI ABDUL RASIDI
2021494446 (IM2456ST3)

PREPARED FOR
AHMAD NADZRI BIN MOHAMAD

26 MAY 2024


Author Profile

Hi, I'm Nor Faidzah Abdul Rasidi.

Born and raised in the vibrant city of Kuala Lumpur, Nor Faidzah has always been inspired by the blend of cultures and the dynamic pace of life in Malaysia's capital. From an early age, she exhibited a keen interest in technology and its potential to transform everyday life.

Currently, Nor Faidzah is a dedicated student pursuing a degree in Information Systems Management at the UiTM Puncak Perdana campus. Her academic journey is marked by a passion for understanding how information systems can enhance organizational efficiency and drive innovation.

In her free time, Nor Faidzah enjoys watching drama series, finding them a wonderful way to unwind and explore different cultures and stories. This hobby not only provides relaxation but also offers a rich source of inspiration and a deeper understanding of human emotions and relationships. Her love for drama complements her academic pursuits, making for a well-rounded and fulfilling student life.


Abstract

Cyber scams have evolved significantly with the advent of advanced technologies, including artificial intelligence (AI). AI has transformed the way frauds are carried out, making them more sophisticated and harder to identify. This e-book investigates the risks posed by AI-driven cyber scams, drawing on recent research findings, case studies, essential terminology, guidelines, and tactics for combating these scams in Malaysia. Through an in-depth examination of how AI is used to commit fraud, it emphasizes the growing complexity of these scams and the need for improved cybersecurity measures. It also provides practical insights and advice for individuals, corporations, and regulators seeking to reduce the dangers associated with AI-enhanced cyber fraud. By covering the most recent developments and trends in detail, this e-book aims to equip readers with the knowledge and skills they need to navigate the ever-changing landscape of cyber threats in the digital age.


Table of Contents

Introduction
Previous Research Studies Related to Cyber Scams and AI
Malaysian Case Studies on Cyber Scams and AI
Definition and Examples of Cyber Scams and AI Terminology
Standard of Procedure (SOP) / Guideline on Cyber Scams and AI
Issues and Challenges of Cyber Scams and AI Technology in Malaysia
Suggestions on How to Combat the Global Rise in Cyber Scams in Malaysia
Conclusion
References


Introduction

Artificial intelligence (AI) is a technology that enables machines to replicate human thinking or activities, such as learning and problem-solving. The first AI program, Logic Theorist, was demonstrated in 1956 and helped spark the field of AI research. While many people associate AI with science fiction films, it is a very real idea that plays an important role in this era of "big data." The benefits of AI have been felt in a variety of areas, including technology, banking, marketing, and entertainment. AI can assist with driving automobiles, setting step targets on a smartwatch, recommending music and shows on streaming services, and calculating the most efficient travel routes in a map app.

Unfortunately, scammers have also found ways to twist the benefits of AI for malicious purposes. Scammers frequently collect personal information from social media sites and other internet sources. They then use this information, along with AI technology, to customize scam SMS messages and emails, making them more convincing and difficult to identify as fake. Scammers can also employ AI technology and audio recordings obtained online to clone the voice of a child, grandchild, or other family member and persuade a victim that a loved one is in distress and needs immediate financial help. This tactic can be used in a grandparent scam, an emergency scam, or a fake kidnapping scheme.

Furthermore, AI-powered chatbots can convincingly replicate human conversations, tricking victims into providing sensitive information or sending money. These chatbots may hold long-running conversations, establishing trust before carrying out the crime. As AI advances, so will the tactics used by scammers. Individuals and organizations must stay informed about emerging risks and establish strong security procedures to guard against them.


The Threat of Offensive AI to Organizations

Enhanced Attack Efficiency and Sophistication
AI significantly boosts the efficiency and sophistication of cyberattacks. Adversaries can automate sophisticated activities, including reconnaissance, vulnerability discovery, and social engineering. For example, AI may be used to identify the weakest link in a security chain by evaluating trends and behaviors, allowing attackers to concentrate their efforts more effectively.

AI's Offensive Dual Nature
The study divides offensive AI into two categories: using AI to launch attacks and attacking AI systems. In the first category, AI is used to improve attack efficiency through tasks such as information gathering, attack automation, and content creation. For example, AI can create realistic fake media (deepfakes) to impersonate people and assist in phishing attempts. The second category, known as adversarial machine learning, involves attackers exploiting weaknesses in the AI system itself. This includes bypassing AI-based detection systems by changing input data or inserting malicious code into AI models.

Challenges of Detection and Defense
Detecting and defending against AI-driven threats is a serious problem for companies. Attackers can evade typical security measures by using complex AI models, especially if the attacks are carried out stealthily, such as during off-hours or with insider aid. Traditional anomaly detection systems may fail to detect the intricate patterns used in AI-driven attacks. As a result, companies must build more sophisticated and adaptable security measures to successfully combat AI-enhanced attacks.
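To make the last point more concrete, below is a minimal, illustrative sketch of the kind of baseline anomaly detector the study suggests may fall short against stealthy, AI-driven attacks. It is not taken from the reviewed paper; the session features and sample values are assumptions chosen purely for demonstration, using scikit-learn's Isolation Forest.

# A simple baseline anomaly detector over login-session features.
# Feature columns (illustrative): [login_hour, data_transferred_mb, failed_logins]
from sklearn.ensemble import IsolationForest
import numpy as np

normal_sessions = np.array([
    [9, 12.0, 0],
    [10, 8.5, 1],
    [14, 20.0, 0],
    [16, 15.5, 0],
    [11, 9.0, 0],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_sessions)  # learn what "normal" activity looks like

# An off-hours session with an unusually large data transfer
suspicious_session = np.array([[3, 950.0, 0]])
print(detector.predict(suspicious_session))  # -1 means flagged as anomalous

A detector like this only flags sessions that deviate strongly from the learned baseline, which illustrates why the paper argues that attackers who deliberately mimic normal patterns, for example by operating within usual hours and volumes, can slip past it.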


An Extended Review on Cyber Vulnerabilities of AI Technologies in Space Applications: Technological Challenges and International Governance of AI

The article gives a thorough evaluation of the dangers and problems connected with integrating AI into space technology. The study outlines major cyber vulnerabilities, such as data poisoning, adversarial attacks, and AI system failures, that might jeopardize the efficiency and security of space missions. Ensuring data security and integrity, establishing robust and trustworthy AI systems capable of working in hostile space environments, and dealing with the limits of constrained processing resources are all identified as significant technological challenges.

Key technological problems include protecting data from breaches and manipulation, developing AI algorithms that are both efficient and trustworthy in space, and ensuring these systems can operate independently. Given the isolated and dynamic nature of space operations, the study stresses the need for AI systems to be resilient to unforeseen events in the absence of human interaction.

On the governance side, the paper emphasizes the significance of building international frameworks to address the ethical, legal, and security issues around AI in space. It advocates for the development of international standards and best practices to reduce risks and ensure safety, emphasizing the need for global collaboration among states, space agencies, and the commercial sector. Ethical aspects such as transparency, accountability, and responsible AI deployment are also addressed, with a focus on maintaining trust and safety in AI-powered space missions.


AI voice cloning tools and ChatGPT are being used to aid cybercrime, extortion scams

What
Astro Awani's article from 2023 discusses the use of AI voice cloning tools and AI language models like ChatGPT in cybercrime, notably extortion fraud. Cybercriminals use these technologies to make their schemes appear more credible and sophisticated, tricking victims into complying with their demands.

When
The article also covered a real-life example of virtual kidnapping in April 2023, when an Arizona-based mother named Jennifer DeStefano reported that an anonymous caller claimed to have kidnapped her 15-year-old daughter and demanded a US$1 million ransom, threatening that the girl would be drugged and raped if she did not pay. She could hear her daughter screaming, crying, and pleading in the background, but the caller refused to let her speak on the phone. It turned out that her daughter was safe and had never been taken; the hoax was discovered before any ransom was paid.

Who
The perpetrators of these crimes are cybercriminals who use modern tools such as VoiceLab to mislead and deceive their victims. They frequently collect personal information and voice samples from social media networks to make their frauds appear more credible.

How
First, cybercriminals identify and gather information about their targets from social media. They then use AI voice cloning to create realistic recordings that resemble their victims' speech patterns and use these recordings for fraud, such as claiming to have abducted someone and demanding a ransom. ChatGPT and related AI language models are used to craft convincing messages and scenarios that lead victims to believe the threats are genuine and imminent.

Why
Young people and public figures, as early adopters of new technologies and rapidly growing social platforms, are more likely to have their biometrics harvested for use in virtual kidnapping attempts. This encourages scammers to use AI to imitate victims' voices and craft highly convincing narratives, increasing the likelihood of extorting money from unsuspecting people.


CyberSecurity M'sia warns of impersonation, scam activities on WhatsApp

What
The Star's article from 2023 highlights a warning issued by CyberSecurity Malaysia (CSM) about an increase in impersonation and fraud activities on WhatsApp. Scammers use fake accounts to imitate victims' contacts and deceive them into disclosing sensitive information or sending money.

When
The alert from CyberSecurity Malaysia was issued on December 7, 2023. However, the report states that these impersonation schemes had lately increased, implying that this had been an ongoing issue for some months.

Who
The key entities involved are CyberSecurity Malaysia, the national cybersecurity specialist agency, and WhatsApp users. The scammers, generally unnamed individuals or groups, use the platform to carry out their fraudulent operations.

How
In a Facebook post, CSM stated that fraudulent activities were taking place on the application, with the offender posing as someone known to the victim before providing a link and asking the victim to click it. The scammer pretends to be a friend or family member in trouble or in desperate need of money in order to persuade the victim to send money or provide personal information. Once the victim clicks the link, they lose access to their WhatsApp account.

Why
The motivation behind these frauds is essentially financial gain. Scammers imitate trustworthy contacts to trick victims into providing money or sensitive information. This strategy takes advantage of the victims' confidence in and familiarity with their contacts, increasing their likelihood of falling for the scam.


Cyber Scams and AI Terminology

Natural Language Processing (NLP)
Natural language processing, or NLP, combines computational linguistics (rule-based modeling of human language) with statistical and machine learning models, allowing computers and digital devices to detect, comprehend, and generate text and speech. Example: Translation tools

Artificial Intelligence (AI)
Artificial intelligence (AI) enables machines or computer systems to mimic human intellect in activities such as reasoning, learning, and problem-solving. AI uses machine learning techniques to enable machines to execute cognitive tasks autonomously or semi-autonomously. As artificial intelligence advances, many processes become more efficient, and tasks that are difficult today will be completed more swiftly and precisely. Example: Siri

Phishing Scam
"Phishing" is the attempt to steal sensitive information, such as usernames, passwords, credit card numbers, or bank account details, in order to use or sell the stolen information. An attacker lures the victim in by impersonating a reputable source and making an enticing request, just as a fisherman uses bait to catch a fish. Example: A fraudulent email

Deep Learning
Deep learning is a kind of machine learning that uses neural networks to train computers to perform tasks that humans do intuitively. It involves using a model to perform classification or regression tasks directly from data such as photos, text, or sound. Deep learning algorithms can reach very high accuracy, sometimes outperforming humans. Example: Facial recognition

Machine Learning (ML)
Machine learning is a subset of artificial intelligence that creates algorithms by learning the underlying patterns of datasets and using them to generate predictions on new, similar data without being explicitly coded for each task. Example: Spam filter
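To illustrate the spam-filter example above, here is a minimal, hypothetical sketch of a machine learning classifier that learns to separate scam messages from legitimate ones using scikit-learn. The tiny training set and labels are invented for demonstration only; a real filter would be trained on far more data.

# A toy spam filter: the model learns word patterns from labelled messages
# instead of relying on hand-written rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Congratulations, you won a prize! Click here to claim",
    "Urgent: verify your bank account now",
    "Meeting moved to 3pm, see you in room 5",
    "Here are the lecture notes from today's class",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)  # learn which words are typical of each class

print(model.predict(["Claim your prize now, click this link"]))  # likely ['spam']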


Cyber Scams and AI Terminology (continued)

Spear Phishing
Spear-phishing is a targeted attempt to steal sensitive information from a specific victim, such as account credentials or financial information, usually for malicious reasons. This is accomplished by gathering personal details about the victim, such as their friends, birthplace, employer, frequently visited places, and recent online purchases. Example: An impostor posing as someone the victim knows

Vishing Scam
Vishing, or "voice phishing," is a phone-based attack in which cybercriminals use the telephone as the weapon for their operations. During a vishing call, a fraudster may attempt to obtain personal and financial information, such as bank account numbers and passwords. Example: A call from an unknown number

Smishing Scam
Smishing, also known as SMS phishing, is a type of phishing attack that targets victims through text messages. These scams target individuals or corporations in an attempt to steal money, sensitive data, or both. Example: A fake text message containing a malicious link

ChatGPT
ChatGPT is a natural language processing chatbot powered by generative AI that allows users to hold human-like conversations while performing various tasks. The tool can help answer questions and carry out activities such as writing emails, articles, and code. Example: Help with drafting emails

Chatbots
A chatbot is an artificial intelligence (AI) program that engages in conversation with a user via voice commands or text. A chatbot is sometimes referred to as an Artificial Conversational Entity (ACE), chat robot, talk bot, chatterbot, or chatterbox. A user can ask a chatbot a question or give it a command, and the chatbot will answer or perform the action. Example: The Shopee customer-service chatbot
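As a simple illustration of the chatbot entry above, the following is a minimal, rule-based sketch. Commercial chatbots such as Shopee's rely on NLP models rather than hand-written rules, and the keywords and replies here are assumptions made up for demonstration.

# A toy rule-based chatbot: match the user's message against keywords
# and return a canned reply.
RESPONSES = {
    "delivery": "Your parcel is on the way and should arrive within 3 days.",
    "refund": "Refund requests are processed within 7 working days.",
    "hello": "Hi! How can I help you today?",
}

def chatbot_reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    return "Sorry, I did not understand that. Could you rephrase?"

print(chatbot_reply("Hello, where is my delivery?"))  # replies about the delivery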


NATIONAL CYBER SECURITY AGENCY (NACSA) GUIDELINES

The National Cyber Security Agency (NACSA) was officially established in February 2017 as the national lead agency for cybersecurity matters, with the goal of securing and strengthening Malaysia's resilience against cyber threats by coordinating and consolidating the country's best cybersecurity experts and resources. NACSA is also committed to developing and implementing national-level cybersecurity policies and strategies, protecting National Critical Information Infrastructures (NCII), undertaking strategic measures to counter cyber threats, spearheading cybersecurity awareness, acculturation, and capacity-building programs, formulating strategic approaches to combating cybercrime, advising on organizational cyber risk management, and developing and optimizing shared resources.


Establishment of Governance Bodies
The strategy emphasizes national cybersecurity governance through bodies such as the National Cyber Security Committee, which oversees strategic direction and the implementation of cybersecurity initiatives.

Capacity Building
By concentrating on capacity building, the strategy aims to improve the skills and knowledge of cybersecurity personnel. Training programs and certifications must be offered to create a well-prepared workforce capable of dealing with sophisticated cyber threats.

Strengthening Public-Private Partnerships
Malaysia aims to establish an integrated approach to cybersecurity through collaboration between government agencies and the business sector. Public awareness programs are vital for educating individuals about the dangers of cyber fraud and the precautions they can take to protect themselves.

Strengthening the Legal Framework
The authorities must review and revise existing cybersecurity laws while also introducing new legislation, such as the Cyber Security Bill. This law aims to address both present and emerging dangers, ensuring that legal safeguards are in place to prevent complex cyber frauds and the exploitation of AI technology.


ISSUES AND CHALLENGES OF CYBER SCAMS AND AI TECHNOLOGY IN MALAYSIA

Inadequate Cybersecurity Infrastructure and Legislation
Malaysia's cybersecurity infrastructure and regulatory framework are straining to keep up with rapid advances in AI and the increasing sophistication of cyber fraud. Despite efforts by organizations such as CyberSecurity Malaysia and the National Cyber Security Agency (NACSA), there are still substantial gaps in both prevention and legal enforcement. Many Malaysian organizations, particularly small and medium-sized enterprises (SMEs), cannot afford to deploy advanced cybersecurity measures. This makes them ideal targets for fraudsters who employ AI to exploit their weaknesses. The current legislative framework, which includes the Personal Data Protection Act (PDPA), is also becoming outdated.

Example of a Real-Life Case
According to Zulhusni (2023), AirAsia suffered a ransomware attack in November 2022, exposing the personal information of around five million people. This incident highlights the inadequacy of current security measures and the urgent need for stronger defenses.


ISSUES AND CHALLENGES OF CYBER SCAMS AND AI TECHNOLOGY IN MALAYSIA

Shortage of Cybersecurity Professionals
Malaysia, like many other countries, is experiencing a serious shortage of cybersecurity professionals. This gap jeopardizes national security and the integrity of digital infrastructure, exposing numerous industries to cyberattacks. Malaysia's cybersecurity workforce is currently estimated at roughly 15,000 professionals, but the country will require at least 25,000 by 2025 to fully defend its digital assets and infrastructure. This shortfall of 10,000 professionals represents a serious risk. The shortage of trained people affects a variety of industries, including government, telecommunications, education, and retail, all of which have suffered severe data breaches.

Example of a Real-Life Case
Datuk Dr. Amirudin Abdul Wahab, CEO of CyberSecurity Malaysia, stated that the CSM 2023 Mid-Year Threat Landscape report found that the government sector faced the most data breaches, accounting for 22% of all breaches across all sectors (Fam, 2024).


SUGGESTIONS ON HOW TO COMBAT THE GLOBAL RISE IN CYBER SCAMS IN MALAYSIA

Raising Awareness
Raising awareness of cyber fraud and safe internet practices is critical. Government agencies such as CyberSecurity Malaysia and NACSA should expand public awareness campaigns using social media, television, and community outreach activities. These campaigns should focus on teaching individuals about typical scam methods, such as phishing and social engineering, and how to identify and avoid them.

Early Education
Integrating cybersecurity education into school curricula can provide a solid foundation of knowledge from an early age. Furthermore, offering seminars and certification courses to the general public and specific industries helps improve knowledge of and preparedness for cyber risks. Partnerships with educational institutions to provide specialized training in cybersecurity can help close the gap in skilled personnel.

Advanced Security Technologies
Organizations should implement sophisticated security technologies such as AI-driven threat detection systems, multi-factor authentication (MFA), and end-to-end encryption. Regular vulnerability assessments and penetration testing can help uncover and correct security flaws. Using these tools can considerably lower the risk of cyber fraud.
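As a brief illustration of one of the suggested controls, multi-factor authentication, the sketch below verifies a time-based one-time password (TOTP) using the third-party pyotp package. The enrolment and storage details are simplified assumptions for demonstration, not a production design.

# Minimal TOTP check as a second authentication factor.
import pyotp

# In practice the secret is generated once at enrolment, shared with the
# user's authenticator app (often via a QR code), and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app on the user's phone computes the same 6-digit code.
code_from_user = totp.now()

# The server accepts the login only if the submitted code matches.
print(totp.verify(code_from_user))  # True when the code is current

In practice the server checks the submitted code alongside the user's password, so that a stolen password alone is not enough to take over the account.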


CONCLUSION

As we navigate the complexity of the digital era, it becomes clear that incorporating artificial intelligence (AI) into numerous aspects of life brings both great breakthroughs and major hazards. Our research has shown that, while AI technology may improve efficiency, convenience, and decision-making across a variety of industries, it also introduces new obstacles, notably in the arena of cyber fraud. The Malaysian context, through several case studies, has offered real instances of how these global concerns manifest locally, providing insights into unique challenges and emphasizing the importance of localized solutions.

Taking on these challenges demands a multifaceted strategy. The Standard Operating Procedures (SOPs) and recommendations presented offer a foundation for risk mitigation, emphasizing the necessity of proactive action and strong cybersecurity regulations. However, significant challenges remain, including technological limitations, low public awareness, and the need for stricter regulatory measures. Enhancing AI literacy, improving legal frameworks, encouraging collaboration between the public and private sectors, and investing in sophisticated cybersecurity technology are all critical initiatives for combating the global growth in cyber scams, notably in Malaysia.

In conclusion, while the rapid advancement of AI technology provides many benefits, it is critical to remain attentive to its potential misuse. We can properly leverage the power of AI and reduce the dangers connected with cyber fraud by staying educated, adopting solid security measures, and promoting collaboration across industries. Balancing innovation and security is essential to ensure that technological advancement benefits society as a whole while maintaining safety and confidence.


References

Breda, P., Markova, R., Abdin, A. F., Mantı, N. P., Carlo, A., & Jha, D. (2023). An extended review on cyber vulnerabilities of AI technologies in space applications: Technological challenges and international governance of AI. Journal of Space Safety Engineering, 10(4), 447–458. https://doi.org/10.1016/j.jsse.2023.08.003

Deep learning. (n.d.). MATLAB & Simulink. https://www.mathworks.com/discovery/deep-learning.html

Fam, C. (2024, April 29). Recruit and reinforce: Solving Malaysia's cybersecurity shortfall. The Star. https://www.thestar.com.my/tech/tech-news/2024/04/29/recruit-and-reinforce-solving-malaysias-cybersecurity-shortfall

Florida Department of Agriculture and Consumer Services. (n.d.). Artificial intelligence and scams. https://www.fdacs.gov/Consumer-Resources/Scams-and-Fraud/Artificial-Intelligence-and-Scams

GeeksforGeeks. (2023, May 8). What is machine learning? https://www.geeksforgeeks.org/ml-machine-learning/

McGowan, E. (2024, February 20). What is vishing? Tips to spot and avoid voice phishing scams. Norton. https://us.norton.com/blog/online-scams/vishing

Trend Micro. (2023, August 5). AI voice cloning tools and ChatGPT are being used to aid cybercrime, extortion scams. Astro Awani. https://www.astroawani.com/berita-malaysia/ai-voice-cloning-tools-and-chatgpt-are-being-used-aid-cybercrime-extortion-scams-430369

Mirsky, Y., Demontis, A., Kotak, J., Shankar, R., Gelei, D., Yang, L., Zhang, X., Pintor, M., Lee, W., Elovici, Y., & Biggio, B. (2023). The threat of offensive AI to organizations. Computers & Security, 124, 103006. https://doi.org/10.1016/j.cose.2022.103006


Morandín-Ahuerma, F. (2022). What is artificial intelligence? International Journal of Research Publication and Reviews, 3(12), 1947–1951. https://doi.org/10.55248/gengpi.2022.31261

National Cyber Security Agency (NACSA). (n.d.). Malaysia Cyber Security Strategy 2020–2024. https://asset.mkn.gov.my/wp-content/uploads/2020/10/MalaysiaCyberSecurityStrategy2020-2024.pdf

Ortiz, S. (2024, April 11). What is ChatGPT and why does it matter? Here's what you need to know. ZDNET. https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/

What is natural language processing? (n.d.). IBM. https://www.ibm.com/topics/natural-language-processing

What is smishing? Text message phishing attacks. (2022, July 26). Abnormal. https://abnormalsecurity.com/glossary/smishing

What is spear-phishing? Defining and differentiating spear-phishing from phishing. (n.d.). Digital Guardian. https://www.digitalguardian.com/blog/what-is-spear-phishing-defining-and-differentiating-spear-phishing-and-phishing

What is a chatbot? (n.d.). Twilio. https://www.twilio.com/docs/glossary/what-is-chatbot

Zulhusni, M. (2023, July 13). What's going on with cyber security in Malaysia? Tech Wire Asia. https://techwireasia.com/07/2023/whats-going-on-with-cyber-security-in-malaysia-in-2023-so-far/

