Game Development

On my engine I also needed the ability to dynamically add and remove nodes to the graph at runtime (see this) so the precomputed route only made things more complicated, so I scrapped it (not to mention my runtime A* solution was already running perfectly). Still, I was left wondering...

Bottom line, is this technique still relevant nowadays in any scenario?

I can see no benefit from using such a technique.

It lacks the flexibility of a graph (you can have different LODs, nodes don't have to be any specific shape, etc.). Also, any user of your engine already knows what a graph is and how to use one; with a precomputed lookup table, anyone who wants to add extra functionality has to implement their extension on top of a representation that is completely novel to them.

As you mentioned, it looks like it would scale horribly. It's also worth noting that if a graph fits in the cache and you run all of your pathfinding queries back to back, it really cuts down on memory I/O time. It looks like your implementation would soon grow too large to fit in any cache.

I've also heard that nowadays memory lookups can be slower than recomputing the value, which is why sine and cosine lookup tables are not as popular as they used to be.
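As an illustration of the trade-off (a minimal sketch, not a benchmark; the table size is an arbitrary choice), a classic sine lookup table trades memory for computation. On modern hardware the table read may well lose to calling `sin` directly, because the table only pays off if it stays hot in cache:

```python
import math

TABLE_SIZE = 1024  # table resolution; coarser tables trade accuracy for memory
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_lut(angle: float) -> float:
    """Approximate sin(angle) by nearest-entry table lookup (no interpolation)."""
    index = round(angle / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SINE_TABLE[index]
```

Whether `sin_lut` beats `math.sin` depends entirely on whether the table stays resident in cache, which is exactly the concern with a whole-level path table: it is far too big to stay resident.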

Unless you can fit your whole program and the memory it needs in the cache, you are going to bottleneck on pulling things in and out of memory well before you bottleneck the processor.

I suspect that with today's powerful hardware, coupled with the memory requirements of doing this for every level, any benefits this technique once had are now outweighed by simply performing an A* search at runtime.

Also, realize that many games have separate loops for updating the AI. I believe the way my project is set up is that there is an update loop for user input at 60 Hz, the AI runs at only 20 Hz, and the game draws as quickly as possible.
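A multi-rate loop like that can be sketched with fixed-timestep accumulators (a simplified, single-threaded sketch; the rates match the ones above but the function is hypothetical, driven by simulated time so it terminates):

```python
def run_frames(total_time: float, frame_dt: float):
    """Simulate a game loop: input at 60 Hz, AI at 20 Hz, render every frame.

    Returns (input_ticks, ai_ticks, frames) over total_time seconds of
    simulated wall time, with frames arriving every frame_dt seconds.
    """
    INPUT_DT = 1.0 / 60.0
    AI_DT = 1.0 / 20.0
    input_acc = ai_acc = 0.0
    input_ticks = ai_ticks = frames = 0
    elapsed = 0.0
    while elapsed < total_time:
        elapsed += frame_dt
        input_acc += frame_dt
        ai_acc += frame_dt
        while input_acc >= INPUT_DT:   # fixed-step input/gameplay updates
            input_ticks += 1
            input_acc -= INPUT_DT
        while ai_acc >= AI_DT:         # AI steps at a third of the input rate
            ai_ticks += 1
            ai_acc -= AI_DT
        frames += 1                    # render once per frame, as fast as frames come
    return input_ticks, ai_ticks, frames
```

The point for pathfinding is that the AI loop has three times the budget per tick that the input loop does, so a runtime A* has more headroom than the per-frame numbers suggest.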

Also, as a side note, I did some GBA programming just for fun, and nothing at all transfers over to a modern device. On the GBA everything was about minimizing the workload of the processor (because it was pathetic). You also have to realize that high-level languages like C# and Java (not so much C++ or C) do tons of optimizations for you. As for optimizing your code, there isn't much to do other than: access memory as little as possible; when you do, run as many computations on it as possible before bringing in new memory that will bump it out of the cache; and make sure you are only doing things once.

Edit: Also, to answer your title: yes, it is. Precomputing frequently used paths is an excellent idea and can be done with A* anywhere outside of your game loop. For example, precompute the path from your base to a resource in an RTS so that the gatherers don't have to recalculate it every time they want to leave or return.
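A minimal sketch of that idea (a hypothetical grid-based A* with a path cache; the grid format and names are illustrative, not from any particular engine):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid of 0 (walkable) / 1 (blocked); returns a list of cells."""
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic
    open_set = [(h(start, goal), start)]
    came_from = {}
    g = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]                      # walk parents back to the start
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt, goal), nxt))
    return None  # goal unreachable

_path_cache = {}

def cached_path(grid, start, goal):
    """Reuse a previously computed path for a frequently travelled route."""
    key = (start, goal)
    if key not in _path_cache:
        _path_cache[key] = astar(grid, start, goal)
    return _path_cache[key]
```

Note that the cache has to be invalidated whenever the graph changes, which is exactly why a precomputed scheme sat badly with your dynamically added and removed nodes.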


ClassicThunder