How criminals could use AI to catch more victims
Warnings about artificial intelligence (AI) are now commonplace. Fearful messages about AI potentially leading to humanity's extinction evoke images from the Terminator movies, and UK Prime Minister Rishi Sunak has even convened a summit to examine AI safety. Yet we have been using AI technologies for some time, from the algorithms shopping websites use to recommend relevant products to cars equipped to read traffic signs and help keep the vehicle in its lane.
AI is a tool for boosting productivity, processing and organizing massive amounts of data, and delegating decision-making. But these technologies are available to anyone, including criminals, and we are already seeing early criminal uses of AI.
Deepfake technology, for instance, has been used to create revenge pornography. Technology makes criminal behaviour more efficient: it lets criminals target more people and helps them appear more credible. Looking at how criminals have previously embraced and adapted to technological advances offers some insight into how they may use AI.
A more potent phishing hook
AI systems like ChatGPT and Google's Bard offer writing assistance, enabling unskilled writers, for instance, to craft effective marketing messages. The same technology could help criminals sound more convincing when contacting potential victims. Consider all the poorly worded, easy-to-spot spam and phishing emails and texts: sounding plausible is essential to persuading a victim to reveal information. Phishing is a numbers game, with an estimated 3.4 billion spam emails sent every day. By my calculations, if criminals could improve their messages so that just an additional 0.0005% of recipients were convinced to divulge information, it would mean around 6.2 million more phishing victims each year.
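The arithmetic behind that estimate can be checked in a few lines. This is only a sketch of the back-of-the-envelope calculation: the 3.4 billion daily emails figure comes from the text, and the 0.0005% uplift is the conversion rate implied by the 6.2 million result.

```python
# Back-of-the-envelope check of the phishing estimate.
# Assumptions (from the article): 3.4 billion spam emails per day,
# and AI-improved wording convincing an extra 0.0005% of recipients.
DAILY_SPAM_EMAILS = 3.4e9
EXTRA_CONVERSION_RATE = 0.0005 / 100  # 0.0005% expressed as a fraction

annual_emails = DAILY_SPAM_EMAILS * 365
extra_victims_per_year = annual_emails * EXTRA_CONVERSION_RATE

print(f"{extra_victims_per_year:,.0f}")  # roughly 6.2 million
```

Even a tiny improvement in how convincing each message sounds scales into millions of victims, because the base volume of messages is so large.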