Deep fakes have rapidly emerged as one of the most ominous concerns in modern society. The ability to generate convincing images, audio, and video easily and cheaply via artificial intelligence will have repercussions for politics, privacy, law, security, and society at large. In light of this widespread apprehension, numerous technological efforts aim to develop tools that distinguish reliable audio and video from fakes. These tools and strategies are likely to be most effective when consumers' guard is naturally up, for example during election cycles. However, recent research suggests that deep fakes can not only create credible representations of reality but can also be used to implant false memories. Memory-malleability research predates deep fakes, but it relied on doctored photographs or text to generate fraudulent recollections. Such false memories exploit our cognitive miserliness, which favors recalling memories that confirm our preferred weltanschauung. Even responsible consumers can be duped when false but belief-consistent memories, implanted while we are least vigilant, are later elicited, like a Trojan horse, at crucial moments to confirm our pre-existing biases and steer us toward nefarious ends. This paper seeks to understand how such memories are created and, on that basis, to propose ethical and legal guidelines for the legitimate use of deep-fake technologies.

