|
517 | 517 | "cell_type": "markdown",
|
518 | 518 | "metadata": {},
|
519 | 519 | "source": [
|
520 | | - "Please notice that the early version of PyMC use `ElemwiseCategorical()` for categorical variable. But in PyMC 4, it was deprecated. But new PyMC provieds a new functions to do the categorical sampling, the `CategoricalGibbsMetropolis` optimized for categorical variables." |
| 520 | + "Note that early versions of PyMC used `ElemwiseCategorical()` for categorical variables, but it has since been deprecated. Current PyMC instead provides `CategoricalGibbsMetropolis`, a step method optimized for sampling categorical variables." |
521 | 521 | ]
|
522 | 522 | },
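The core move that a Gibbs step for a categorical variable performs — drawing a new category from its full conditional distribution — can be sketched in plain Python. This is only an illustration of the principle behind `CategoricalGibbsMetropolis`, not PyMC's actual implementation; the three log-weights below are made up for the example.

```python
import math
import random

random.seed(0)

# Hypothetical unnormalized log-probabilities for 3 categories;
# in a real model these would come from the joint density with
# all other variables held fixed.
log_weights = [math.log(0.2), math.log(0.5), math.log(0.3)]

def gibbs_categorical_step(log_w):
    """Draw one category from its full conditional distribution."""
    # Normalize in log space for numerical stability
    m = max(log_w)
    probs = [math.exp(lw - m) for lw in log_w]
    total = sum(probs)
    probs = [p / total for p in probs]
    # Inverse-CDF sampling over the discrete categories
    u, acc = random.random(), 0.0
    for k, p in enumerate(probs):
        acc += p
        if u < acc:
            return k
    return len(probs) - 1

draws = [gibbs_categorical_step(log_weights) for _ in range(10_000)]
freq = [draws.count(k) / len(draws) for k in range(3)]
print(freq)  # empirical frequencies, close to [0.2, 0.5, 0.3]
```

Because each draw comes directly from the full conditional, every proposal is accepted, which is what makes a Gibbs move efficient for discrete variables.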
|
523 | 523 | {
|
524 | 524 | "cell_type": "markdown",
|
525 | 525 | "metadata": {},
|
526 | 526 | "source": [
|
527 | | - "There are another interesting thing with new PyMC 4. The PyMC 4 use a powerful new sampling principle be called Hamiltonian Monte Carlo (HMC). We'll not talk too much about it here, since it's a complex physical principle. But we should know that [HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. ](https://www.pymc.io/projects/docs/en/stable/learn/core_notebooks/pymc_overview.html#pymc-overview)\n", |
| 527 | + "Another interesting feature of modern PyMC is its default use of a powerful gradient-based sampling method called Hamiltonian Monte Carlo (HMC). We won't go into detail here, since it draws on ideas from physics, but it is worth knowing that [HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models.](https://www.pymc.io/projects/docs/en/stable/learn/core_notebooks/pymc_overview.html#pymc-overview)\n", |
528 | 528 | "\n"
|
529 | 529 | ]
|
530 | 530 | },
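To make "gradient information" concrete, here is a minimal, self-contained HMC sketch for a standard normal target. It is not PyMC's sampler (PyMC uses NUTS with automatic gradients), just the basic leapfrog-plus-Metropolis-correction scheme that HMC is built on; the step size and path length below are arbitrary choices for the example.

```python
import math
import random

random.seed(1)

def grad_neg_logp(x):
    # For a standard normal, -log p(x) = x**2 / 2 (up to a constant),
    # so its gradient is simply x. This is the "gradient information"
    # that steers the proposal.
    return x

def leapfrog(x, p, step, n_steps):
    """Simulate Hamiltonian dynamics with the leapfrog integrator."""
    p -= 0.5 * step * grad_neg_logp(x)
    for _ in range(n_steps - 1):
        x += step * p
        p -= step * grad_neg_logp(x)
    x += step * p
    p -= 0.5 * step * grad_neg_logp(x)
    return x, p

def hmc_step(x, step=0.2, n_steps=10):
    p0 = random.gauss(0, 1)                     # resample momentum
    x_new, p_new = leapfrog(x, p0, step, n_steps)
    # Metropolis accept/reject on the total energy (Hamiltonian)
    h0 = 0.5 * x ** 2 + 0.5 * p0 ** 2
    h1 = 0.5 * x_new ** 2 + 0.5 * p_new ** 2
    if random.random() < math.exp(min(0.0, h0 - h1)):
        return x_new
    return x

x, samples = 0.0, []
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # should be near 0 and 1 for a standard normal
```

The gradient lets each proposal travel far along the target distribution while keeping the acceptance rate high, which is why HMC scales so much better than random-walk Metropolis on larger models.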
|