
Creative roundup: avatars, lightsabers, and LoRA tricks

Posted March 28, 2025

There has never been a more exciting time to play around with AI. Every week, new models drop, unexpected use cases emerge, and people push boundaries in ways that are equal parts strange and delightful.

Here are some highlights of the coolest things happening — new models you can try, creative experiments from the community, and novel creations.

ShieldGemma 2 by Google DeepMind

ShieldGemma 2 is a powerful new model that detects NSFW ("not safe for work") content, violent material, and unsafe instructions with high accuracy. It’s the first DeepMind model of its kind on Replicate, and a useful tool for building safer AI experiences — especially for social or user-facing apps.
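
If you wanted to wire ShieldGemma 2 into a moderation step, a call through Replicate's Python client could look roughly like the sketch below. The model slug and the input and output fields are assumptions for illustration; the exact schema is on the model's page on Replicate.

```python
import replicate

# Hedged sketch: screen a user-uploaded image before it goes anywhere public.
# The slug "google-deepmind/shieldgemma-2" and the "image" field are assumed
# names; check the model's input/output schema for the real ones.
output = replicate.run(
    "google-deepmind/shieldgemma-2",
    input={"image": open("user_upload.png", "rb")},
)

# Gate the upload on whatever safety labels or scores the model returns.
print(output)
```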

Hunyuan3D 2Mini by Tencent

Hunyuan3D 2Mini is a faster, smaller version of Hunyuan’s earlier 3D generation model. It’s perfect for game asset creation and stylized characters, and it’s already showing up in workflows across X, with creators using it to build vibrant 3D worlds in a fraction of the time.

Tencent just announced two new upgrades for 3D generation models on Hugging Face

3D 2.0 MV (Multi-View Generation) and 3D 2.0 Mini pic.twitter.com/muPCXCCKEH

— AK (@_akhaliq) March 18, 2025
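
For a feel of how this slots into an asset pipeline, here's a rough sketch using Replicate's Python client. The slug, the input field, and the output format are assumptions; the real schema lives on the model page.

```python
import replicate

# Hedged sketch: turn a single concept image into a 3D asset.
# The slug "tencent/hunyuan3d-2mini" and the "image" input are assumed names.
mesh = replicate.run(
    "tencent/hunyuan3d-2mini",
    input={"image": open("character_concept.png", "rb")},
)

# Output is typically a link to a downloadable mesh (for example a .glb file)
# you can drop into Blender or a game engine.
print(mesh)
```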

CSM-1B by Sesame and Orpheus-3B by Canopy Labs

These new speech models do more than just talk — they breathe, pause, and chuckle. With human-like quirks built in, they’re ideal for realistic voices, game dialogue, or just making your AI sound a little more alive.
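
If you want to hear those quirks for yourself, a text-to-speech call might look like the sketch below. The slug, the input fields, and the inline tag syntax are all assumptions; expressive tags vary by model, so check the schema before relying on them.

```python
import replicate

# Hedged sketch of expressive text-to-speech. The slug and fields are assumed,
# and whether inline tags like <laugh> are honored depends on the model.
audio = replicate.run(
    "canopylabs/orpheus-3b",
    input={"text": "Okay... <laugh> I did not expect that to work."},
)

# Usually a URL to the generated audio clip.
print(audio)
```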

Text-to-video, upgraded

Luma now generates 720p video in ~30 seconds, making it faster than ever to turn text into cinematic video. There’s also a lighter version available for 540p output if you’re optimizing for speed.

You can try the new faster and cheaper Luma models now on Replicate: https://t.co/iTuP65EQy5 https://t.co/3tyNK80evj

720p 5 second video in ~30s https://t.co/peR1Y03GLm pic.twitter.com/KLqXduTIsk

— fofr (@fofrAI) March 17, 2025
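
As a rough idea of what a call looks like, here's a sketch with the Python client. The slug and parameter names are assumptions; the 540p variant would be the one to reach for when speed matters more than resolution.

```python
import replicate

# Hedged sketch of a text-to-video request. The slug and the "duration"
# parameter name are assumptions; consult the model's input schema.
video = replicate.run(
    "luma/ray-flash-2-720p",
    input={
        "prompt": "a slow dolly shot down a rain-soaked neon street at night",
        "duration": 5,  # seconds
    },
)

# Typically a URL to the rendered MP4.
print(video)
```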

Kling v1.6 Pro has introduced end frame support, giving you even more control over video generation. With both start and end frames now available, it’s easier to guide your videos to hit the perfect timing and composition. Combined with its 1080p resolution, Kling is a powerful tool for sharper, more dynamic video results.

Kling v1.6 pro on Replicate now supports start and end frames. 1080p, 24fps, 5 or 10 second videos 💥https://t.co/l4V705kOl5 pic.twitter.com/rOtwKLuvZn

— fofr (@fofrAI) March 19, 2025
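
Here's a rough sketch of guiding a clip with both frames. The slug and the start/end parameter names are assumptions drawn from the description above; the exact field names are on the model page.

```python
import replicate

# Hedged sketch: pin down the first and last frames and let Kling fill in the
# motion between them. The slug and field names are assumed.
video = replicate.run(
    "kwaivgi/kling-v1.6-pro",
    input={
        "prompt": "the camera pulls back slowly to reveal the whole scene",
        "start_image": open("first_frame.png", "rb"),
        "end_image": open("last_frame.png", "rb"),
        "duration": 5,  # 5 or 10 seconds, per the announcement
    },
)
print(video)
```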

Fine-tuning experiments

Custom LoRAs that create effects like "cakeify", "squish", and "dissolve" on Wan2.1 are producing strange and fascinating transformations, and they turn out to be surprisingly flexible. By slightly lowering the LoRA weight and tweaking the prompt, you can swap the knife for anything handheld (an axe, a lightsaber, even a toothbrush) and change what's inside to something that isn't cake.

So if you slightly lower the lora weight on the cakeify lora, and change the prompt you can:

- change the knife to any handheld thing, like an axe, a lightsaber, a toothbrush
- change what's inside, something that's not cake https://t.co/2oWs3Mpp4i

— fofr (@fofrAI) March 19, 2025
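
In practice that trick is just two knobs: the LoRA strength and the prompt. The sketch below shows the shape of it with Replicate's Python client; the model slug, the LoRA URL, and the parameter names are all assumptions, so treat it as an outline rather than a recipe.

```python
import replicate

# Hedged sketch: run a Wan2.1 image-to-video model with a "cakeify"-style LoRA
# at reduced strength so the prompt can override the usual knife-and-cake look.
# The slug, LoRA URL, and parameter names are assumptions.
video = replicate.run(
    "wan-video/wan-2.1-i2v-480p-lora",
    input={
        "image": open("scene.png", "rb"),
        "prompt": "a hand presses a lightsaber into the object, revealing a glowing jelly interior",
        "lora_url": "https://example.com/cakeify-lora.safetensors",  # placeholder URL
        "lora_strength": 0.7,  # lowered from 1.0, as in the tweet above
    },
)
print(video)
```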

If you’re interested in fine-tuning your own models, there are powerful tools available for training your own LoRAs.
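
Kicking off a fine-tune generally goes through the trainings API. The sketch below is a hedged example with the Python client: the trainer slug, version ID, and input fields are placeholders, since every trainer documents its own schema.

```python
import replicate

# Hedged sketch of starting a LoRA fine-tune. The trainer slug, version ID,
# and input fields are placeholders; real values come from the trainer's page,
# and the exact keyword arguments may vary with the client version.
training = replicate.trainings.create(
    version="ostris/flux-dev-lora-trainer:VERSION_ID",  # placeholder version
    input={
        "input_images": "https://example.com/training-images.zip",  # placeholder
        "trigger_word": "MYSTYLE",
    },
    destination="your-username/your-new-lora",
)

# Poll training.status (or watch the dashboard) until the job finishes.
print(training.status)
```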

Community creativity

Flux, Kling, and Wan2.1 are powering a surge of viral creativity — including animated humans and AI-generated avatars.

UGC avatars with Wan2.1 img2vid 🔥 https://t.co/cYKDUHZBbO pic.twitter.com/4IYrP3PJ80

— Luis Catacora (@lucatac0) March 17, 2025

That’s it for now, but stay tuned for more model releases, experiments, and cool ideas worth playing with. Until then, try something new at replicate.com/explore, and follow us on X to see what the community’s building in real time.
