What Are Deepfakes?
Artificial Intelligence (AI) has come a long way in recent years. While many AI advancements help improve lives, some pose new security risks that must be considered. Deepfakes are AI-generated fake media (audio, video, or images) designed to deceive people into thinking they are real. You can think of deepfakes as a very advanced form of media editing in which AI software mimics a real person’s likeness or voice. In the early days of this technology, the fakes were easy to spot; now it is much harder to tell what is real and what isn’t. You may have seen deepfake videos on YouTube or TikTok seemingly portraying celebrities or politicians doing funny things. While there are some fun uses for deepfakes, there are also many sinister ways they can be used to manipulate people. Deepfakes have raised many concerns, but we will focus just on the security and privacy concerns.
AI Generated Imposters
Scammers can use deepfakes as another social engineering tool for deceiving unsuspecting victims. Deepfakes let scammers easily gain a target’s trust by “becoming” someone the target trusts. In one example, the CEO of a U.K.-based energy firm was tricked into transferring €220,000 to a scammer’s Hungarian account. The scammer used deepfake technology to mimic the voice of the CEO’s boss, even copying his slight German accent. Most deepfake scam attempts are not yet convincing enough to fool people, but the technology is rapidly advancing. In the future it could be good enough to impersonate someone on a live video call. Imagine a scenario similar to the one that befell the U.K.-based energy firm, but instead of a phone call, the CEO joins a Zoom call with what appears to be his boss. This frightening reality could be closer than we think.
Another potential security threat is malicious actors using deepfakes to trick facial recognition software. Imagine someone unlocking your iPhone using a deepfake. It may seem like a distant threat, but researchers have already accomplished similar feats. Researchers in South Korea used deepfakes to trick facial recognition services from Amazon and Microsoft. They acknowledged that these attacks won’t work on all facial recognition systems, but their effectiveness could rapidly increase as the technology improves.
How to Spot Deepfakes
Here are some things to look out for when potentially dealing with deepfakes:
- Unnatural eye movement.
- A lack of blinking.
- Unnatural facial expressions.
- Awkward head and body positioning.
- Bad lip-syncing.
- Robotic-sounding voices.
- Digital background noise.
- Blurry or misaligned visuals.
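A couple of the cues above, unnatural eye movement and a lack of blinking, can actually be measured. Liveness-detection research commonly uses the eye aspect ratio (EAR): the ratio of an eye’s vertical landmark distances to its horizontal distance, which drops toward zero when the eye closes. A minimal sketch is below; it assumes you already have six eye landmark coordinates per frame from some face-landmark detector (not shown here), and the `0.2` blink threshold is an illustrative value, not a universal constant.

```python
import math

# Eye aspect ratio (EAR) heuristic for blink detection.
# Landmark convention: p1/p4 are the horizontal eye corners,
# p2/p3 points on the upper lid, p5/p6 points on the lower lid.
# When the eye closes, the vertical distances shrink and EAR falls.

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = 2.0 * math.dist(p1, p4)
    return vertical / horizontal

def blink_count(ear_series, threshold=0.2):
    """Count dips of the per-frame EAR below the threshold (closed eyes)."""
    blinks = 0
    closed = False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks
```

A clip of a real speaker typically shows several such dips per minute; a stream whose EAR never dips is a hint of the “lack of blinking” cue listed above, though by itself it proves nothing.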