

Training Diffusion Models with Reinforcement Learning

Diffusion models have recently emerged as the de facto standard for generating complex, high-dimensional outputs. You may know them for their ability to produce stunning AI art and hyper-realistic synthetic images, but they have also found success in other applications such as drug design and continuous control. The key idea behind diffusion models is to iteratively transform random noise into a sample, such as an image or protein structure. This is typically motivated as a maximum likelihood estimation problem, where the model is trained to generate samples that match the training data as closely as possible.

However, most use cases of diffusion models are not directly concerned with matching the training data, but instead with a downstream objective. We don't just want an image that looks like existing images, but one that has a specific type of appearance; we don't just want a drug molecule that is physically plausible, but one that is as effective as possible. In this post, we show how diffusion models can be trained on these downstream objectives directly using reinforcement learning (RL). To do this, we finetune Stable Diffusion on a variety of objectives, including image compressibility, human-perceived aesthetic quality, and prompt-image alignment. The last of these objectives uses feedback from a large vision-language model to improve the model's performance on unusual prompts, demonstrating how powerful AI models can be used to improve each other without any humans in the loop.

diagram illustrating the RLAIF objective that uses the LLaVA VLM


A diagram illustrating the prompt-image alignment objective. It uses LLaVA, a large vision-language model, to evaluate generated images.

Denoising Diffusion Policy Optimization

When turning diffusion into an RL problem, we make only the most basic assumption: given a sample (e.g. an image), we have access to a reward function that we can evaluate to tell us how "good" that sample is. Our goal is for the diffusion model to generate samples that maximize this reward function.

Diffusion models are typically trained using a loss function derived from maximum likelihood estimation (MLE), meaning they are encouraged to generate samples that make the training data look more likely. In the RL setting, we no longer have training data, only samples from the diffusion model and their associated rewards. One way we can still use the same MLE-motivated loss function is by treating the samples as training data and incorporating the rewards by weighting the loss for each sample by its reward. This gives us an algorithm that we call reward-weighted regression (RWR), after existing algorithms from the RL literature.
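As a rough sketch of what this looks like in code (assuming a diffusers-style U-Net and noise scheduler; the argument names and the exponentiated-reward weighting are illustrative choices, not code from the paper), RWR amounts to the standard denoising loss with a per-sample reward weight:

    import torch
    import torch.nn.functional as F

    def rwr_loss(unet, scheduler, samples, prompt_emb, rewards, temperature=1.0):
        """Reward-weighted regression: the usual denoising MSE loss,
        weighted per sample by an exponentiated reward.

        `unet`, `scheduler`, `samples`, `prompt_emb`, and `rewards` are
        placeholders for a diffusion U-Net, its noise scheduler, a batch of
        final samples, their prompt embeddings, and scalar rewards.
        """
        # Exponentiated, normalized rewards act as per-sample weights
        # (one common RWR choice).
        weights = torch.softmax(rewards / temperature, dim=0)

        # Re-noise the samples at random timesteps, as in ordinary diffusion training.
        noise = torch.randn_like(samples)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (samples.shape[0],), device=samples.device)
        noisy = scheduler.add_noise(samples, noise, t)

        # Standard epsilon-prediction loss, but weighted by reward instead of uniformly.
        pred = unet(noisy, t, encoder_hidden_states=prompt_emb).sample
        per_sample = F.mse_loss(pred, noise, reduction="none").mean(dim=(1, 2, 3))
        return (weights * per_sample).sum()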

However, there are a few problems with this approach. One is that RWR is not a particularly exact algorithm: it maximizes the reward only approximately (see Nair et al., Appendix A). The MLE-inspired loss for diffusion is also not exact and is instead derived using a variational bound on the true likelihood of each sample. This means that RWR maximizes the reward through two levels of approximation, which we find significantly hurts its performance.

chart comparing DDPO with RWR


We evaluate two variants of DDPO and two variants of RWR on three reward functions and find that DDPO consistently achieves the best performance.

The key insight of our algorithm, which we call denoising diffusion policy optimization (DDPO), is that we can better maximize the reward of the final sample if we pay attention to the entire sequence of denoising steps that got us there. To do this, we reframe the diffusion process as a multi-step Markov decision process (MDP). In MDP terminology: each denoising step is an action, and the agent only receives a reward on the final step of each denoising trajectory, when the final sample is produced. This framework allows us to apply many powerful algorithms from the RL literature that are designed specifically for multi-step MDPs. Instead of using the approximate likelihood of the final sample, these algorithms use the exact likelihood of each denoising step, which is extremely easy to compute.
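To make the per-step likelihood concrete, here is a minimal sketch (in PyTorch, with placeholder tensor names) of the log-probability of one denoising action. Each reverse step is a Gaussian whose mean comes from the model and whose standard deviation is fixed by the noise schedule, so its log-density is available in closed form:

    import torch

    def denoising_step_log_prob(mean, std, next_latents):
        """Exact log-probability of one denoising step p(x_{t-1} | x_t).

        `mean` is the model's predicted mean, `std` the schedule's per-step
        standard deviation, and `next_latents` the sampled x_{t-1}; all are
        placeholder tensors for illustration.
        """
        dist = torch.distributions.Normal(mean, std)
        # Sum over all latent dimensions to get one log-prob per trajectory.
        return dist.log_prob(next_latents).sum(dim=tuple(range(1, next_latents.ndim)))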

We chose to apply policy gradient algorithms due to their ease of implementation and past success in language model finetuning. This led to two variants of DDPO: DDPO_SF, which uses the simple score function estimator of the policy gradient, also known as REINFORCE; and DDPO_IS, which uses a more powerful importance sampled estimator. DDPO_IS is our best-performing algorithm and its implementation closely follows that of proximal policy optimization (PPO).
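A minimal sketch of the importance-sampled objective, in the spirit of PPO's clipped surrogate (the argument names, the clip range, and the advantage normalization are illustrative choices, not values taken from the paper):

    import torch

    def ddpo_is_loss(new_log_probs, old_log_probs, advantages, clip_range=1e-4):
        """PPO-style clipped surrogate over denoising steps.

        `new_log_probs` and `old_log_probs` are per-step log-likelihoods of the
        same denoising actions under the current model and the model that
        collected the samples; `advantages` are normalized final-sample rewards
        broadcast to every step of the trajectory.
        """
        ratio = torch.exp(new_log_probs - old_log_probs)
        unclipped = -advantages * ratio
        clipped = -advantages * torch.clamp(ratio, 1.0 - clip_range, 1.0 + clip_range)
        # Take the pessimistic (larger) loss at each step, as in PPO.
        return torch.mean(torch.maximum(unclipped, clipped))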

Finetuning Stable Diffusion Using DDPO

For our main results, we finetune Stable Diffusion v1-4 using DDPO_IS. We have four tasks, each defined by a different reward function:

  • Compressibility: How easy is the image to compress using the JPEG algorithm? The reward is the negative file size of the image (in kB) when saved as a JPEG (see the sketch after this list).
  • Incompressibility: How hard is the image to compress using the JPEG algorithm? The reward is the positive file size of the image (in kB) when saved as a JPEG.
  • Aesthetic Quality: How aesthetically pleasing is the image to the human eye? The reward is the output of the LAION aesthetic predictor, which is a neural network trained on human preferences.
  • Prompt-Image Alignment: How well does the image represent what was asked for in the prompt? This one is a bit more complicated: we feed the image into LLaVA, ask it to describe the image, and then compute the similarity between that description and the original prompt using BERTScore.
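As an illustration of how lightweight these reward functions can be, here is a sketch of the JPEG compressibility and incompressibility rewards described above; the function names and the JPEG quality setting are our own choices, not taken from the released code.

    import io
    from PIL import Image

    def jpeg_compressibility_reward(image: Image.Image, quality: int = 95) -> float:
        """Reward = negative JPEG file size in kB (smaller files score higher)."""
        buffer = io.BytesIO()
        image.save(buffer, format="JPEG", quality=quality)
        return -buffer.tell() / 1000.0

    def jpeg_incompressibility_reward(image: Image.Image, quality: int = 95) -> float:
        """Reward = positive JPEG file size in kB (harder-to-compress images score higher)."""
        return -jpeg_compressibility_reward(image, quality)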

Since Stable Diffusion is a text-to-image model, we also need to pick a set of prompts to give it during finetuning. For the first three tasks, we use simple prompts of the form "a(n) [animal]". For prompt-image alignment, we use prompts of the form "a(n) [animal] [activity]", where the activities are "washing dishes", "playing chess", and "riding a bike". We found that Stable Diffusion often struggled to produce images that matched the prompt for these unusual scenarios, leaving plenty of room for improvement with RL finetuning.
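For concreteness, prompt construction for these tasks can be as simple as the following sketch (the animal list here is a hypothetical excerpt, not the actual list of 45 animals used in the paper):

    import random

    ANIMALS = ["cat", "dog", "horse", "monkey", "rabbit"]  # hypothetical excerpt of the 45 animals
    ACTIVITIES = ["washing dishes", "playing chess", "riding a bike"]

    def article(word: str) -> str:
        return "an" if word[0].lower() in "aeiou" else "a"

    def simple_prompt() -> str:
        """Prompts for compressibility, incompressibility, and aesthetic quality."""
        animal = random.choice(ANIMALS)
        return f"{article(animal)} {animal}"

    def alignment_prompt() -> str:
        """Prompts for the prompt-image alignment task."""
        animal = random.choice(ANIMALS)
        return f"{article(animal)} {animal} {random.choice(ACTIVITIES)}"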

First, we illustrate the performance of DDPO on the simple rewards (compressibility, incompressibility, and aesthetic quality). All of the images are generated with the same random seed. In the top left quadrant, we illustrate what "vanilla" Stable Diffusion generates for nine different animals; all of the RL-finetuned models show a clear qualitative difference. Interestingly, the aesthetic quality model (top right) tends towards minimalist black-and-white line drawings, revealing the kinds of images that the LAION aesthetic predictor considers "more aesthetic".

results on aesthetic, compressibility, and incompressibility

Next, we demonstrate DDPO on the more complex prompt-image alignment task. Here, we show several snapshots from the training process: each series of three images shows samples for the same prompt and random seed over time, with the first sample coming from vanilla Stable Diffusion. Interestingly, the model shifts towards a more cartoon-like style, which was not intentional. We hypothesize that this is because animals doing human-like activities are more likely to appear in a cartoon-like style in the pretraining data, so the model shifts towards this style to more easily align with the prompt by leveraging what it already knows.

results on prompt-image alignment

Unexpected Generalization

Surprising generalization has been found to arise when finetuning large language models with RL: for example, models finetuned on instruction-following only in English often improve in other languages. We find that the same phenomenon occurs with text-to-image diffusion models. For example, our aesthetic quality model was finetuned using prompts that were selected from a list of 45 common animals. We find that it generalizes not only to unseen animals but also to everyday objects.

aesthetic quality generalization

Our prompt-image alignment model used the same list of 45 common animals during training, and only three activities. We find that it generalizes not only to unseen animals but also to unseen activities, and even novel combinations of the two.

prompt-image alignment generalization

Overoptimization

It is well-known that finetuning on a reward function, especially a learned one, can lead to reward overoptimization, where the model exploits the reward function to achieve a high reward in a non-useful way. Our setting is no exception: in all of the tasks, the model eventually destroys any meaningful image content to maximize reward.

overoptimization of reward functions

We also discovered that LLaVA is susceptible to typographic attacks: when optimizing for alignment with respect to prompts of the form "[n] animals", DDPO was able to successfully fool LLaVA by instead generating text loosely resembling the correct number.

RL exploiting LLaVA on the counting task

There is currently no general-purpose method for preventing overoptimization, and we highlight this problem as an important area for future work.

Conclusion

Diffusion models are hard to beat when it comes to generating complex, high-dimensional outputs. However, so far they have mostly been successful in applications where the goal is to learn patterns from lots and lots of data (for example, image-caption pairs). What we have found is a way to effectively train diffusion models in a way that goes beyond pattern-matching, and without necessarily requiring any training data. The possibilities are limited only by the quality and creativity of your reward function.

The way we used DDPO in this work is inspired by the recent successes of language model finetuning. OpenAI's GPT models, like Stable Diffusion, are first trained on huge amounts of Internet data; they are then finetuned with RL to produce useful tools like ChatGPT. Typically, their reward function is learned from human preferences, but others have more recently figured out how to produce powerful chatbots using reward functions based on AI feedback instead. Compared to the chatbot regime, our experiments are small-scale and limited in scope. But considering the enormous success of this "pretrain + finetune" paradigm in language modeling, it certainly seems worth pursuing further in the world of diffusion models. We hope that others can build on our work to improve large diffusion models, not only for text-to-image generation, but for many exciting applications such as video generation, music generation, image editing, protein synthesis, robotics, and more.

Furthermore, the "pretrain + finetune" paradigm is not the only way to use DDPO. As long as you have a reward function, there is nothing stopping you from training with RL from the start. While this setting is as-yet unexplored, this is a place where the strengths of DDPO could really shine. Pure RL has long been applied to a wide variety of domains ranging from playing games to robotic manipulation to nuclear fusion to chip design. Adding the powerful expressivity of diffusion models to the mix has the potential to take existing applications of RL to the next level, or even to discover new ones.


This post is based on the following paper:

If you want to learn more about DDPO, you can check out the paper, website, original code, or get the model weights on Hugging Face. If you want to use DDPO in your own project, check out my PyTorch + LoRA implementation where you can finetune Stable Diffusion with less than 10GB of GPU memory!

If DDPO inspires your work, please cite it with:

@misc{black2023ddpo,
      title={Training Diffusion Models with Reinforcement Learning},
      author={Kevin Black and Michael Janner and Yilun Du and Ilya Kostrikov and Sergey Levine},
      year={2023},
      eprint={2305.13301},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}



