Commit bec9781

Create 1.What does generative truly mean.md
In the context of deep learning, **generative** refers to models that can generate new data samples similar to the data they were trained on. These models learn the underlying probability distribution of the training data and use it to create novel samples[1][2].
The key principles behind generative deep learning models are:
## Learning the Data Distribution
Generative models learn the probability distribution of the training data. This allows them to generate new samples that are statistically similar to the original data[2].
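To make this concrete, here is a minimal sketch of distribution learning on a toy problem: the "model" is just a one-dimensional Gaussian fitted by maximum likelihood. Deep generative models replace this closed-form fit with a neural network, but the principle of adjusting parameters so the training data becomes likely under the model is the same. The synthetic dataset and parameter names are illustrative, not from the cited sources.

```python
# Toy sketch: learn a data distribution by maximum likelihood (assumed 1-D Gaussian).
import numpy as np

rng = np.random.default_rng(0)
train_data = rng.normal(loc=5.0, scale=2.0, size=10_000)  # stand-in training set

# Maximum-likelihood estimates of the Gaussian's parameters
mu_hat = train_data.mean()
sigma_hat = train_data.std()

print(f"learned distribution: N(mu={mu_hat:.2f}, sigma={sigma_hat:.2f})")
```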
## Sampling from the Learned Distribution
Once the model has learned the data distribution, it can draw from that distribution to generate new samples. This sampling step introduces randomness, which is what allows the model to produce varied outputs for the same input[1].
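As a small illustration of that randomness, the sketch below assumes a model has already produced a probability distribution over a toy vocabulary; repeatedly sampling from the same distribution can yield a different output each time. The vocabulary and probabilities are invented for the example.

```python
# Sampling step in isolation: same learned distribution, varied outputs.
import numpy as np

rng = np.random.default_rng()
vocab = ["cat", "dog", "bird", "fish"]
probs = np.array([0.50, 0.30, 0.15, 0.05])  # assumed model output distribution

for _ in range(3):
    print(rng.choice(vocab, p=probs))  # different draws can give different words
```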
## Adversarial Training (GANs)
One popular type of generative model is the Generative Adversarial Network (GAN). A GAN consists of two neural networks: a generator and a discriminator. The generator produces new samples, while the discriminator tries to distinguish real samples from generated ones. Through this adversarial training process, the generator learns to produce increasingly realistic samples that can fool the discriminator[2].
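The sketch below shows what that adversarial loop can look like in code, assuming PyTorch; the tiny network sizes, learning rates, and synthetic 1-D "real" data are illustrative choices rather than details from the cited sources.

```python
# Minimal GAN training loop on toy 1-D data (assumes PyTorch is installed).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2.0 + 5.0   # stand-in "real" data ~ N(5, 2)
    fake = generator(torch.randn(64, 8))    # generator maps noise to samples

    # Discriminator: push real samples toward label 1, generated samples toward 0
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator assign label 1 to its samples
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Detaching `fake` in the discriminator update keeps the generator's weights out of that step; the generator is then updated against the refreshed discriminator, which is the alternating game the paragraph above describes.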
## Variational Autoencoders (VAEs)
Another important class of generative models is the Variational Autoencoder (VAE). VAEs learn a latent representation of the data and use that representation to generate new samples. They are trained to maximize the likelihood of the training data under the learned generative model[3].
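A compact sketch of that idea, assuming PyTorch, is given below: the encoder outputs a mean and log-variance for the latent code, a sample is drawn via the reparameterization trick, and training minimizes a reconstruction term plus a KL term (the negative evidence lower bound). The layer sizes and random batch are illustrative.

```python
# Minimal VAE sketch on flat vectors (assumes PyTorch; sizes are illustrative).
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=20, z_dim=2):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)
        self.z_dim = z_dim

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 20)  # stand-in training batch

recon, mu, logvar = model(x)
recon_loss = ((recon - x) ** 2).sum(dim=-1).mean()                    # reconstruction
kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1).mean()  # KL to the prior
loss = recon_loss + kl  # negative ELBO (up to constants)
opt.zero_grad(); loss.backward(); opt.step()

# Generation: sample a latent code from the prior and decode it into a new data point
new_sample = model.dec(torch.randn(1, model.z_dim))
```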
In summary, generative deep learning models learn the underlying probability distribution of the training data and use this knowledge to generate novel samples that are statistically similar to the original data. This allows them to create impressive outputs like realistic images, coherent text, and plausible audio[3][4][5].
Citations:

[1] https://www.cmu.edu/intelligentbusiness/expertise/genai-principles.pdf
[2] https://www.sixsigmacertificationcourse.com/the-basic-principles-of-generative-models-with-an-example/
[3] https://www.shroffpublishers.com/books/9789355429988/
[4] https://www.amazon.in/Generative-Deep-Learning-David-Foster-ebook/dp/B0C3WVJWBF
[5] https://www.amazon.in/Deep-Learning-Scratch-Building-Principles/dp/935213902X
