
July 1, 2022

FBI Warns of Deepfakes in Remote Hiring

Have you ever been talking to someone face-to-face on a video chat and heard them sneeze without actually seeing them sneeze? No, this is not a new physics-bending symptom of Covid. It could be a warning that the person you are talking to is not real.

The FBI has issued a public service announcement warning of the use of deepfake technology and stolen personally identifiable information (PII) in remote work hiring.

The federal agency reported that criminals are targeting remote and work-from-home positions in IT, computer programming, database, and software-related roles, seeking access to customer PII, financial data, corporate IT databases, and proprietary information.

Deepfakes are unauthorized digital twins created by malicious actors, and the AI technology supporting them has become so sophisticated that the human eye sometimes cannot tell the difference between a deepfake and an actual person.

The FBI says that a deepfake can be a video, an image, or a recording convincingly altered and manipulated to misrepresent someone as doing or saying something that was never actually done or said. The agency stated that complaints have arisen regarding the use of voice spoofing, or potential voice deepfakes, during online interviews with applicants: “In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually,” the FBI warning said.
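
The mismatch the FBI describes, where lip movement fails to track the audio, is also the intuition behind automated audio-visual sync checks. The following is a minimal sketch of that idea in Python, not the FBI's or any vendor's method; it assumes you can already extract a per-frame mouth-openness signal (for example, from a face-landmark tracker) and per-frame audio energy, and all function and variable names here are hypothetical.

```python
import numpy as np

def av_sync_score(mouth_openness: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between per-frame lip movement and audio loudness.

    Both inputs are assumed 1-D arrays sampled at the video frame rate:
    mouth_openness from a face-landmark tracker, audio_energy as per-frame
    RMS loudness. A live speaker's lips and voice move together, so a low
    score can flag a feed for closer human review.
    """
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-8)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-8)
    return float(np.mean(m * a))

# Toy demonstration with synthetic signals: a synced feed scores near 1,
# while decoupled lips and audio score near 0.
rng = np.random.default_rng(0)
speech = np.abs(rng.standard_normal(300))             # stand-in audio energy
lips_synced = speech + 0.1 * rng.standard_normal(300)
lips_spoofed = rng.permutation(lips_synced)           # lips decoupled from audio

print(f"synced feed:  {av_sync_score(lips_synced, speech):.2f}")
print(f"spoofed feed: {av_sync_score(lips_spoofed, speech):.2f}")
```

Real detection systems are far more sophisticated, but the principle is the same one the FBI's warning points to: when sound and sight stop agreeing, something may be wrong.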

The agency has also received complaints that stolen PII is being used to apply for remote positions. Both employers and identity-theft victims report that criminals are using other people's identities when applying for these jobs.

Stuart Wells is the CTO of Jumio, an AI-powered identity verification platform, and he knows all about this treacherous chicanery: “Modern day cybercriminals have the knowledge, tools, and sophistication to create highly realistic deepfakes, while leveraging stolen personally identifiable information (PII), to pose as real people and deceive companies into hiring them,” he said. “Posing as an employee, hackers can steal a wide range of confidential data, from customer and employee information to company financial reports. This FBI security warning is one of many that have been reported by federal agencies in the past several months.”

This latest alert follows a private industry notification the FBI issued in March 2021, which warned: “Malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.” That warning appears to have come true, and organizations should take heed.

As remote and hybrid work continue to grow, Wells recommends that companies step up their practices for detecting deepfakes and the malicious actors behind them. Jumio’s platform, for example, combines AI, biometrics, machine learning, liveness detection, and automation to give companies more peace of mind. Identity verification solutions like these can help protect businesses and build trust that legitimate candidates are being hired.
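
To make that recommendation concrete, here is a minimal, hypothetical sketch of the kind of multi-signal gate an identity-verification step might apply before a new remote hire gets system access. It is not Jumio's API; the VerificationResult fields, thresholds, and approve_candidate function are illustrative assumptions, and the point is simply that document validity, face match, and liveness must all pass together rather than relying on any single check.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    """Hypothetical output of an identity-verification service."""
    document_valid: bool   # government ID passed forensic checks
    face_match: float      # 0-1 similarity between live selfie and ID photo
    liveness_score: float  # 0-1 confidence the selfie came from a live person

def approve_candidate(result: VerificationResult,
                      face_threshold: float = 0.85,
                      liveness_threshold: float = 0.90) -> bool:
    """Grant access only when every signal passes, not any single one."""
    return (result.document_valid
            and result.face_match >= face_threshold
            and result.liveness_score >= liveness_threshold)

# A strong candidate passes; a good face match with failed liveness
# (e.g., a replayed deepfake instead of a live camera) does not.
print(approve_candidate(VerificationResult(True, 0.92, 0.96)))  # True
print(approve_candidate(VerificationResult(True, 0.92, 0.40)))  # False
```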

Related Items:

Deepfakes, Digital Twins, and the Authentication Challenge

Security, Privacy, and Governance at the Data Crossroads in ‘22

U.S. Army Employs Machine Learning for Deepfake Detection
