In the context of deep learning, **generative** refers to models that can generate new data samples similar to the data they were trained on. These models learn the underlying probability distribution of the training data and use it to create novel samples[1][2].

The key principles behind generative deep learning models are:

## Learning the Data Distribution

Generative models learn an approximation of the probability distribution that produced the training data. This allows them to generate new samples that are statistically similar to the original data[2].
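
A minimal sketch of this idea, using NumPy and a synthetic one-dimensional dataset (both are assumptions chosen for illustration; the cited sources do not prescribe a specific example). Fitting a Gaussian by maximum likelihood is the simplest possible case of "learning the data distribution":

```python
# Fit a 1-D Gaussian to toy data by maximum likelihood (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)  # hypothetical training data

# Maximum-likelihood estimates of the Gaussian's mean and standard deviation.
mu_hat = data.mean()
sigma_hat = data.std()
print(mu_hat, sigma_hat)  # close to the true 5.0 and 2.0
```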

## Sampling from the Learned Distribution

Once the model has learned the data distribution, it can draw samples from it to generate new examples. This sampling process introduces randomness, which allows the model to produce varied outputs for the same input[1].
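
Continuing the toy example above, generation is just drawing random values from the fitted distribution; because each draw is random, repeated generations differ even though the learned parameters stay fixed. The `mu_hat` and `sigma_hat` values below are hypothetical fitted parameters:

```python
# Sample repeatedly from the learned (fitted) distribution.
import numpy as np

rng = np.random.default_rng()
mu_hat, sigma_hat = 5.02, 1.98  # hypothetical parameters fitted as above

# Two "generations": same learned distribution, different random outputs.
print(rng.normal(mu_hat, sigma_hat, size=3))
print(rng.normal(mu_hat, sigma_hat, size=3))
```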

## Adversarial Training (GANs)

One popular type of generative model is the Generative Adversarial Network (GAN). A GAN consists of two neural networks: a generator and a discriminator. The generator produces new samples, while the discriminator tries to distinguish real samples from generated ones. Through this adversarial training process, the generator learns to produce increasingly realistic samples that can fool the discriminator[2].
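
A minimal sketch of the adversarial training loop, assuming PyTorch and a toy two-dimensional dataset (the sources describe GANs conceptually and do not prescribe this architecture):

```python
# Minimal GAN training step (illustrative sketch; sizes are hypothetical).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator: maps random noise z to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # 1) Train the discriminator to separate real from generated samples.
    fake = G(torch.randn(b, latent_dim)).detach()
    d_loss = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(b, latent_dim))
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a toy batch of "real" 2-D points.
print(train_step(torch.randn(32, data_dim)))
```

The alternation is the essential design point: the discriminator improves at telling real from fake, which in turn gives the generator a stronger training signal.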

## Variational Autoencoders (VAEs)

Another important class of generative models is the Variational Autoencoder (VAE). A VAE learns a latent representation of the data and uses this representation to generate new samples. It is trained to maximize a lower bound (the evidence lower bound, or ELBO) on the likelihood of the training data under the learned generative model[3].
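
A minimal VAE sketch under the same assumptions (PyTorch, toy continuous data): the encoder outputs the mean and log-variance of the latent distribution, the reparameterization trick keeps sampling differentiable, and the training loss is the negative ELBO:

```python
# Minimal VAE (illustrative sketch; architecture and sizes are hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F

data_dim, latent_dim = 2, 2

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(data_dim, 32)
        self.mu = nn.Linear(32, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(32, latent_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, data_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def neg_elbo(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the standard-normal prior;
    # minimizing this maximizes a lower bound on the data likelihood.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = VAE()
x = torch.randn(32, data_dim)          # toy "training" batch
x_hat, mu, logvar = model(x)
print(neg_elbo(x, x_hat, mu, logvar).item())

# Generation: sample z from the prior and decode it into a new sample.
print(model.dec(torch.randn(1, latent_dim)))
```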

In summary, generative deep learning models learn the underlying probability distribution of the training data and use this knowledge to generate novel samples that are statistically similar to the original data. This is what allows them to create impressive outputs such as realistic images, coherent text, and plausible audio[3][4][5].

Citations:
[1] https://www.cmu.edu/intelligentbusiness/expertise/genai-principles.pdf
[2] https://www.sixsigmacertificationcourse.com/the-basic-principles-of-generative-models-with-an-example/
[3] https://www.shroffpublishers.com/books/9789355429988/
[4] https://www.amazon.in/Generative-Deep-Learning-David-Foster-ebook/dp/B0C3WVJWBF
[5] https://www.amazon.in/Deep-Learning-Scratch-Building-Principles/dp/935213902X