|
92 | 92 | } |
93 | 93 |
|
94 | 94 | .news-section { |
95 | | - margin-bottom: 30px; |
96 | | - max-height: 0; |
97 | | - overflow: hidden; |
98 | | - transition: max-height 0.3s ease-out; |
99 | | - } |
100 | | - |
101 | | - .news-section.active { |
102 | | - max-height: 500px; |
103 | | - } |
| 95 | + margin-bottom: 30px; |
| 96 | + max-height: 0; |
| 97 | + overflow: hidden; |
| 98 | + transition: max-height 0.3s ease-out; |
| 99 | +} |
| 100 | + |
| 101 | +.news-section.active { |
| 102 | +    max-height: 400px; /* Cap expanded height */ |
| 103 | + overflow-y: auto; /* Enable vertical scrolling */ |
| 104 | + padding-right: 10px; /* Space for scrollbar */ |
| 105 | +} |
| 106 | + |
| 107 | +/* Custom scrollbar styling */ |
| 108 | +.news-section::-webkit-scrollbar { |
| 109 | + width: 6px; |
| 110 | +} |
| 111 | + |
| 112 | +.news-section::-webkit-scrollbar-track { |
| 113 | + background: var(--card-bg); |
| 114 | + border-radius: 3px; |
| 115 | +} |
| 116 | + |
| 117 | +.news-section::-webkit-scrollbar-thumb { |
| 118 | + background: var(--accent-color); |
| 119 | + border-radius: 3px; |
| 120 | +} |
| 121 | + |
| 122 | +.news-section::-webkit-scrollbar-thumb:hover { |
| 123 | + background: var(--accent-hover); |
| 124 | +} |
| 125 | + |
| 126 | +/* For Firefox */ |
| 127 | +.news-section { |
| 128 | + scrollbar-width: thin; |
| 129 | + scrollbar-color: var(--accent-color) var(--card-bg); |
| 130 | +} |
104 | 131 |
|
105 | 132 | .news-item { |
106 | 133 | margin-bottom: 20px; |
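Note: the max-height transition added above only animates if something toggles the `active` class on `#newsSection`. A minimal sketch of that wiring, assuming a trigger button with the hypothetical id `newsToggle` (the toggle script is not part of this commit; only the `#newsSection` id is confirmed by the markup below):

    // Hypothetical toggle wiring; #newsToggle is an assumed trigger element.
    const newsSection = document.getElementById('newsSection');
    const newsToggle = document.getElementById('newsToggle');
    newsToggle.addEventListener('click', () => {
      // Toggling .active animates max-height between 0 and 400px (see CSS above).
      newsSection.classList.toggle('active');
    });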
@@ -394,23 +421,63 @@ <h1 class="name">Abdallah Dib</h1> |
394 | 421 | </header> |
395 | 422 |
|
396 | 423 | <section class="news-section" id="newsSection"> |
397 | | - <h2 class="section-title">Recent News</h2> |
398 | | - |
399 | | - <div class="news-item"> |
400 | | - <div class="news-date">June 25, 2025</div> |
401 | | - <p>SEREP paper got accepted to <strong>ICCV 2025</strong>. We introduce a novel learning-based method for monocular facial expression capture and retargeting.</p> |
402 | | - </div> |
403 | | - |
404 | | - <div class="news-item"> |
405 | | - <div class="news-date">May 20, 2025</div> |
406 | | - <p>One paper accepted to 'AI for Creative Visual Content Generation Editing and Understanding' (CVEU), <strong>CVPR 2025</strong>. We propose a texture generator model giving artists control over shape, skin tone and fine details.</p> |
407 | | - </div> |
408 | | - |
409 | | - <div class="news-item"> |
410 | | - <div class="news-date">March 1, 2024</div> |
411 | | - <p>Mosar paper got accepted to <strong>CVPR 2024</strong>. MoSAR turns a portrait image into a relightable 3D avatar.</p> |
412 | | - </div> |
413 | | - </section> |
| 424 | + <h2 class="section-title">Recent News</h2> |
| 425 | + |
| 426 | + <div class="news-item"> |
|  | 427 | +        <div class="news-date">July 6, 2025</div> |
|  | 428 | +        <p>We published a new <a href="https://www.ubisoft.com/en-us/studio/laforge/news/5hypnC0mKU3LY4t4eHxnjR/mosar-gnration-davatars-de-personnage-fiables-partir-dun-simple-portrait-photo" target="_blank">blog post</a> on the Ubisoft website showcasing our paper <a href="https://ubisoft-laforge.github.io/character/mosar/" target="_blank">MoSAR</a>, a technique artists use to streamline their workflow.</p> |
| 429 | + </div> |
| 430 | + |
| 431 | + <div class="news-item"> |
|  | 432 | +        <div class="news-date">June 25, 2025</div> |
|  | 433 | +        <p>Our SEREP paper was accepted to <strong>ICCV 2025</strong>. We introduce a novel learning-based method for monocular facial expression capture and retargeting. More details <a href="https://ubisoft-laforge.github.io/character/serep/" target="_blank">here</a>.</p> |
| 434 | + </div> |
| 435 | + |
| 436 | + <div class="news-item"> |
|  | 437 | +        <div class="news-date">May 20, 2025</div> |
|  | 438 | +        <p>One paper accepted to the 'AI for Creative Visual Content Generation, Editing and Understanding' (CVEU) workshop at <strong>CVPR 2025</strong>. We propose a texture generator that gives artists control over shape, skin tone, and fine details. More details <a href="https://ubisoft-laforge.github.io/character/GeoAwareTextures3D/index.html" target="_blank">here</a>.</p> |
| 439 | + </div> |
| 440 | + |
| 441 | + <div class="news-item"> |
|  | 442 | +        <div class="news-date">March 9, 2024</div> |
|  | 443 | +        <p>We released the <strong>FFHQ-UV-Intrinsics</strong> dataset, which contains intrinsic texture maps for 10K subjects at HD resolution. Download it <a href="https://github.com/ubisoft/ubisoft-laforge-FFHQ-UV-Intrinsics" target="_blank">here</a>.</p> |
| 444 | + </div> |
| 445 | + |
| 446 | + <div class="news-item"> |
|  | 447 | +        <div class="news-date">March 1, 2024</div> |
|  | 448 | +        <p>Our MoSAR paper was accepted to <strong>CVPR 2024</strong>. MoSAR turns a portrait image into a relightable 3D avatar. More details <a href="https://ubisoft-laforge.github.io/character/mosar/" target="_blank">here</a>.</p> |
| 449 | + </div> |
| 450 | + |
| 451 | + <div class="news-item"> |
|  | 452 | +        <div class="news-date">April 5, 2023</div> |
|  | 453 | +        <p>We published a technical paper showcasing our FaceLab solution, which artists used to capture 3D facial performances for the 2019 film <a href="https://www.imdb.com/title/tt5697572/" target="_blank">Cats</a>.</p> |
| 454 | + </div> |
| 455 | + |
| 456 | + <div class="news-item"> |
|  | 457 | +        <div class="news-date">February 1, 2023</div> |
|  | 458 | +        <p>Our S2F2 paper was accepted to <strong>FG 2023</strong>. S2F2 is a robust self-supervised model that estimates 3D shape and reflectance from a monocular image. More details <a href="https://youtu.be/DiHpZjx1sxc" target="_blank">here</a>.</p> |
| 459 | + </div> |
| 460 | + |
| 461 | + <div class="news-item"> |
|  | 462 | +        <div class="news-date">May 18, 2022</div> |
|  | 463 | +        <p>DeepNextFace is a library for 3D face reconstruction from a single monocular RGB image using deep convolutional neural networks and differentiable ray tracing. Check it out at <a href="https://github.com/abdallahdib/DeepNextFace" target="_blank">https://github.com/abdallahdib/DeepNextFace</a>.</p> |
| 464 | + </div> |
| 465 | + |
| 466 | + <div class="news-item"> |
|  | 467 | +        <div class="news-date">April 21, 2022</div> |
|  | 468 | +        <p>NextFace is a lightweight open-source library, written in PyTorch, for high-fidelity face reconstruction. Check it out at <a href="https://github.com/abdallahdib/NextFace" target="_blank">https://github.com/abdallahdib/NextFace</a>.</p> |
| 469 | + </div> |
| 470 | + |
| 471 | + <div class="news-item"> |
|  | 472 | +        <div class="news-date">October 11, 2021</div> |
|  | 473 | +        <p>Our paper on self-supervised monocular 3D face reconstruction was accepted to <strong>ICCV 2021</strong>. More details <a href="https://www.youtube.com/watch?v=VVr_bbXEjxE" target="_blank">here</a>.</p> |
| 474 | + </div> |
| 475 | + |
| 476 | + <div class="news-item"> |
|  | 477 | +        <div class="news-date">March 9, 2021</div> |
|  | 478 | +        <p>Our paper on monocular 3D face reconstruction was accepted to <strong>Eurographics 2021</strong>. We achieve realistic 3D face reconstruction from a single image. More details <a href="https://github.com/abdallahdib/NextFace" target="_blank">here</a>.</p> |
| 479 | + </div> |
| 480 | +</section> |
414 | 481 |
|
415 | 482 | <div class="main-content"> |
416 | 483 | <img src="https://abdallahdib.github.io/images/me-crop.png" alt="Abdallah Dib" class="profile-image"> |
|