Game Development

Kevin
I don't develop with Unreal, but I was curious about the feature you're describing, so I tried researching it. I couldn't find much official information from the Unreal team, but I did find this staff tutorial. Note this passage in the tutorial:

While the current intention of the Chaos Flesh system is to provide highly accurate simulation results that can be cached and then used to train a GPU based ML-Deformer, the 5.2 release will also support in-game deformations of low resolution tetrahedron.

In other words, it sounds like the system is expected to have a high performance impact and is better suited for simulations that don't need to be rendered in real time than for real-time use in a game or other interactive experience. That's in line with what I'd expect for this type of feature.

Combining Chaos Flesh with VR, which already has a high performance cost, sounds like a recipe for performance issues. Some possible solutions:

  • Accept a compromise on visual fidelity to maintain stable performance in VR.
  • Try training a GPU-based ML-Deformer, as the quoted passage suggests. In theory, a properly trained deformer can closely approximate the simulation at a much lower runtime cost.
  • If your application content is more or less fixed (e.g. Chaos Flesh is used during cutscenes where the character always performs the exact same animation(s) in the exact same way), look into whether it's possible to bake the flesh simulation into an animation so that the end-user's device doesn't need to run the simulation in real-time.
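To illustrate the ML-Deformer idea in the abstract (this is not Epic's pipeline or API, just the underlying concept): you run the expensive simulation offline to collect pose-to-deformation training pairs, then fit a cheap model that approximates the solver at runtime. A minimal sketch, with a stand-in "solver" and a one-feature least-squares fit:

```python
# Conceptual sketch, NOT Unreal's ML Deformer API. The stand-in solver and
# the linear model are assumptions chosen to keep the example tiny.

def expensive_sim(pose):
    # Stand-in for the cached flesh solver: offset grows linearly with pose.
    return 2.0 * pose + 1.0

# Offline: sample training data from cached simulation results.
poses = [i / 10 for i in range(11)]
offsets = [expensive_sim(p) for p in poses]

# Fit offset = a * pose + b by ordinary least squares (closed form, 1 feature).
n = len(poses)
mean_p = sum(poses) / n
mean_y = sum(offsets) / n
a = sum((p - mean_p) * (y - mean_y) for p, y in zip(poses, offsets)) \
    / sum((p - mean_p) ** 2 for p in poses)
b = mean_y - a * mean_p

def cheap_deformer(pose):
    # Runtime: one multiply-add instead of a full solve.
    return a * pose + b
```

The real system uses a neural network over full skeletal poses and per-vertex deltas, but the trade-off is the same: training cost is paid offline, and runtime evaluation is far cheaper than the simulation it approximates.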
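The baking option can be sketched the same way (again a concept illustration, not Unreal code): run the solver once offline, record every frame, and at runtime just look frames up instead of simulating. The `simulate_step` function below is a hypothetical stand-in for the real solver:

```python
# Conceptual sketch of "baking" a simulation: sample the solver offline at a
# fixed timestep, store the frames, and play them back at runtime.

def simulate_step(positions, t):
    # Hypothetical stand-in for an expensive flesh solver.
    return [p + 0.1 * (i + 1) * t for i, p in enumerate(positions)]

def bake(initial, frame_count, dt):
    """Offline: run the solver once, recording each frame's vertex positions."""
    frames = []
    positions = list(initial)
    for f in range(frame_count):
        positions = simulate_step(positions, f * dt)
        frames.append(list(positions))
    return frames

def playback(frames, time, dt):
    """Runtime: index into the baked frames - the solver never runs in-game."""
    f = min(int(time / dt), len(frames) - 1)
    return frames[f]

baked = bake([0.0, 0.0], frame_count=60, dt=1 / 30)
pose = playback(baked, 0.5, 1 / 30)
```

This is only viable when the content is fixed, since the baked data can't react to player input, but it removes the simulation's runtime cost entirely, trading it for memory and disk space.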
