
Photo and Video Deepfakes


Photo deepfakes are driven by Generative Adversarial Networks (GANs), which allow for the creation of highly realistic media. Face-swapping, replacing one person's face with another, is by far the most common application of photographic deepfakes. Deepfake videos rely on the same underlying GAN technology and are often made with the intention of making it appear as though an individual said or did something that never happened. Manipulating video goes beyond face-swapping: the fake has to capture intricate details across every frame, such as lighting and lip movement, which makes it challenging to distinguish real media from fake.
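To give a sense of the adversarial idea behind GANs, here is a minimal, illustrative sketch in PyTorch. It does not generate faces; instead it learns to mimic a toy 2-D distribution, and all the model sizes and names are made up for the example. The core setup, a generator trying to fool a discriminator that is trained to spot fakes, is the same idea that powers photo and video deepfakes.

```python
# Minimal GAN sketch (illustrative only): the generator learns to produce
# points that look like samples from a "real" 2-D Gaussian distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: points drawn from a 2-D Gaussian centred at (2, -1).
def real_batch(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

# Generator: maps random noise to fake 2-D points.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs a logit scoring how "real" a 2-D point looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator update: label real points 1 and generated points 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call the fakes real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated points cluster near (2, -1), i.e. they become
# hard to tell apart from the "real" distribution.
print(G(torch.randn(5, 8)).detach())
```

Real deepfake systems apply the same competition at a vastly larger scale, with image generators and discriminators trained on huge datasets of faces, which is why the results can look so convincing.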

Audio Deepfakes


Deepfakes are not limited to visual media; audio deepfakes are emerging as a threat as well. Often referred to as 'voice cloning' or 'voice synthesis', these fakes use machine-learning algorithms to replicate someone's voice closely enough that it sounds real. Because there is nothing to look at, a listener has little to go on when judging whether a recording is genuine. For example, an athletic director at a high school in Baltimore County, MD used artificial intelligence to mimic the principal's voice making antisemitic statements, and the fabricated recording led to the principal being placed on leave. This type of technology has significant implications for misinformation and poses a serious threat to the reliability of media.

