Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models
Gong Zhang
Committee: Humphrey Shi (chair), Thien Nguyen, Thanh Nguyen
Directed Research Project (Mar 2023)
Keywords: Computer Vision

Recently, personalized content generation has seen rapid progress thanks to advances in large-scale text-to-image synthesis models and their efficient tuning algorithms. The study of its counterpart problem, concept forgetting (removing an unwanted style or content from a trained synthesis model), has consequently entered the public spotlight as well, driven by natural concerns about privacy leakage and copyright infringement. In this paper, we investigate a toolkit of novel algorithms for effectively removing a designated concept from a pretrained text-to-image diffusion model. A key observation is that the influence of text tokens on image synthesis can be traced through the cross-attention probabilities, which inspires us to exploit attention as the training objective. Extensive benchmark experiments validate the promising performance of our proposed methods, which can guide the pretrained model to forget a specific concept, represented by either a prompt or a set of images, without significantly hurting its generative ability on remaining concepts.
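
To make the attention-based objective concrete, the following is a minimal PyTorch sketch (not the exact implementation from this work) of a loss that suppresses the cross-attention probabilities assigned to the tokens of the concept to be forgotten; the function name, tensor shapes, and usage pattern are illustrative assumptions.

```python
import torch


def attention_forgetting_loss(attn_probs: torch.Tensor,
                              concept_token_indices: list[int]) -> torch.Tensor:
    """Sketch of an attention-suppression objective.

    attn_probs: cross-attention probabilities from one UNet layer,
        assumed shape (batch, heads, image_tokens, text_tokens).
    concept_token_indices: positions of the text tokens encoding the
        concept to be forgotten (assumed known from the tokenizer).
    """
    # Select the attention columns belonging to the target concept tokens.
    concept_attn = attn_probs[..., concept_token_indices]
    # Driving these probabilities toward zero discourages the model from
    # attending to the concept, while other tokens are left untouched.
    return concept_attn.pow(2).mean()


# Hypothetical usage: collect cross-attention maps during a denoising
# forward pass, sum the loss over layers, and backpropagate into the UNet.
# loss = sum(attention_forgetting_loss(a, idx) for a in collected_attn_maps)
```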