
December 5, 2016

How AI Will Spoof You and Steal Your Identity

(svedoliver/Shutterstock)

People generally appreciate how the new wave of artificial intelligence-powered chatbots from Web giants can perform basic online tasks on our behalf. Who doesn’t like having the ability to order a large pepperoni pizza by speaking five words into a smartphone? But security experts warn there’s not much to stop cybercriminals from using AI technology to take social engineering hacks and identity theft to scary new levels.

Intel (NASDAQ: INTC) issued a dire warning last month about cybercriminals’ increasing use of AI technology. Eric Peterson, a director of threat research at Intel Security, warned that machine learning is starting to crop up in sophisticated scams that have already netted $3 billion for the bad guys.

“With an ever-increasing footprint in education, business, and research, the availability of machine learning toolkits, documentation, and tutorials has exploded in recent years,” Peterson writes in the recent “McAfee Labs 2017 Threat Predictions” report. “In as little as an hour, an individual can be training complex models on large datasets on a distributed architecture…. The learning period has become shorter, and accessibility for everyone, including cybercriminals, has never been better.”
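Peterson’s point about accessibility is easy to demonstrate. The sketch below is a generic illustration, not anything from the McAfee report: it uses scikit-learn, one of the toolkits mentioned later in this article, to train a toy text classifier in roughly a dozen lines, with invented messages and labels standing in for real training data.

```python
# Generic illustration of how little code a trained model requires today.
# The messages and labels below are invented stand-ins, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Urgent wire transfer needed, reply with account details",
    "Your invoice for last month is attached",
    "Please verify your password at the link below",
    "Lunch at noon on Thursday?",
]
labels = [1, 0, 1, 0]  # 1 = message a filter should flag, 0 = ordinary mail

# Vectorize the text and fit a classifier in a single pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probability that a new, unseen message should be flagged.
print(model.predict_proba(["Confirm your account details immediately"])[0][1])
```

The same handful of lines works for either side of the fence, which is precisely the concern Peterson raises.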

Intel Research believes machine learning has helped cybercriminals to fine-tune their social engineering approach in sophisticated ongoing scams, like the Business Email Compromise (BEC) scam, which the FBI has been tracking since early 2015. “We believe that cybercriminals are leveraging machine learning to target victims for BEC and similar scams,” Peterson says.

While scams like BEC are relatively advanced and take a fair bit of effort to execute on the part of the attackers, there’s the potential for malicious AI to be unleashed against consumers on a much broader scale. AI-powered chatbots have already become quite popular, and products like Amazon Echo and Google Home are poised to be hot Christmas gifts this year.

When you consider that a Gartner analyst recently predicted that by 2020 we’ll have more conversations with chatbots than with our spouses, you begin to understand why security experts are wary of how much trust we’re placing in the machines.

Malicious Chatbots

One security expert who’s suspicious of the machines is Shaun Murphy. The combination of massive data breaches and the rapid evolution of AI technology is a bad recipe for good security, the Sendr CEO and co-founder says.


Predictive, drone-based pizza delivery is clearly in humanity’s best interest, but malicious chatbots are not (mipan/Shutterstock)

“Maybe you get a phone call someday from somebody who knows a lot of information about you, and you end up giving them a lot of information, or give them access to your bank account,” Murphy tells Datanami. “These are things that AI will absolutely be able to do, and you won’t be able to stop them.”

The pace of data breaches may be slowing down, but there’s already so much stolen data about consumers out there that it presents a fertile training ground for malicious AIs. Cybercriminals are already availing themselves of big data tools and techniques to collect, aggregate, clean, and filter their stolen bytes, so why wouldn’t they go further up the stack and use deep learning techniques to get really targeted with their attacks?

The answer is that there isn’t a good reason, and they will use all available resources to drain consumers of their money, their data, and ultimately, their identities.
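That data plumbing is nothing exotic. As a purely hypothetical sketch (every column name and record below is invented for illustration), the few lines of pandas that a breach-notification service might use to cross-reference exposed accounts also show how easily records from different sources can be aggregated, cleaned, and filtered:

```python
# Hypothetical sketch of commodity data wrangling with pandas; all column
# names and records are invented. A breach-notification service would run
# essentially the same steps to warn exposed users.
import pandas as pd

accounts = pd.DataFrame({
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "signup_year": [2012, 2014, 2015],
})
exposures = pd.DataFrame({
    "email": ["b@example.com", "c@example.com", "c@example.com"],
    "source": ["forum_dump", "retail_dump", "retail_dump"],
})

# Aggregate: join the two record sets on a common key, then clean by
# dropping exact duplicates.
merged = accounts.merge(exposures, on="email", how="inner").drop_duplicates()

# Filter: keep only accounts that appear in one particular exposure set.
flagged = merged[merged["source"] == "retail_dump"]
print(flagged)
```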

“They want to start learning who you are. They want to start becoming you,” Murphy says. “Getting access to your email is probably one of the most horrifying things because they then can go and reset your passwords. Online, they’re essentially you at some point.”

ML in Cyber Warfare

Intel’s Peterson agrees that the combination of massive data and advanced AI technology will be a boon to cybercriminals who are looking to exploit any weakness in the protective armor of businesses.


How cybercriminals are using machine learning (Source: Intel Research)

“Between social media information, stolen data warehouses, and publicly disclosed business information, attackers have access to more than enough data to train predictive models to identify high-value targets,” he concludes. “Tools to perform the complex analysis behind target selection are readily available, and there are a plethora of public sources of data required to build and train malicious machine learning algorithms.”

Just as data scientists use tools like Google’s TensorFlow, NumPy, scikit-learn, and Pandas to boost their data-crunching prowess, cybercriminals are also powering up with modern malware toolkits like Trillium, Zeus, and Angler, Peterson says. “Malware authors can inflict far more damage with the assistance of toolkits than they could with their own individual skillsets,” he writes. “Machine learning tools are force multipliers for those of us in security roles. We would be negligent to assume that cybercriminals are not also adopting these powerful tools.”

Preventing Skynet

When you consider that AI has already gotten smart enough to start creating its own encryption, you realize how sophisticated and believable malicious chatbots are likely to become.

“We’ll have a lot of problems with authentication and making sure the person you’re talking to on the other end is the person you want to talk to, and is not AI that’s been trained to do it,” Murphy says. “They’ll just hit you [from] every angle—phone, email, text messages, all that stuff.”


(enzozo/Shutterstock)

How do you protect yourself from the identity-sucking AI bots? Murphy says being tech-savvy enough not to click on malicious hyperlinks or talk to the nice-sounding man who calls from Microsoft Technical Support may not be enough. In fact, it may be too late to put the genie back in the bottle.

The big problem, Murphy says, is that consumers have become accustomed to getting apps and services for “free.” Because consumers actually pay in other ways, namely by giving Web giants access to much of their data, there is little prospect of any mass rollback of consumer data availability, whether to legitimate online players or to digital ne’er-do-wells.

The only way to stop this, Murphy says, is for people to take security seriously by limiting their sharing of data and insisting on encrypted communications. In short, it’s by bringing privacy back into the game.
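On the encrypted-communications point, the building blocks are already free and widely available. As a minimal sketch, assuming the third-party Python cryptography package is installed (and glossing over key exchange, which is the genuinely hard part), symmetric encryption takes only a few lines:

```python
# Minimal sketch of symmetric encryption with the third-party 'cryptography'
# package (pip install cryptography). Securely sharing the key with the other
# party -- the hard part of encrypted communications -- is glossed over here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # shared secret both parties must hold
cipher = Fernet(key)

token = cipher.encrypt(b"meet at noon")  # ciphertext, safe to send over the wire
print(cipher.decrypt(token))             # b'meet at noon'
```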

“They [the web giants] made billions by not having privacy—by, in fact, selling your data to advertisers—so of course they’re not going to be interested in doing that,” he says. “It’s going to be a fight against the tech giants and a fight against the common trend of capturing as much data as humanly and technologically possible…. Reducing privacy and reducing security will only harm us in the long run.”

Related Items:

AI-Powered Bots Gearing Up to Serve You

AI is Coming, Prompting New IT Security Concerns

Machine Learning’s Big Role in the Future of Cyber Security
