
The GitHub API supports HATEOAS quite elegantly, so navigating it from a graphical REST client like Postman is intuitive and fast.

They don't list every possible endpoint when you request "/"; my guess is they list only the top-level 'root' endpoints that get you started, so that you can use the responses to dig deeper into the graph.

As we know, one of the benefits of navigating the graph using the URLs from the responses is that even if the URL structure changes, end users will still know where to go.

Now, what if you are writing a framework that wraps an API such as this? If you wanted to return data from deep in the graph, would you navigate the URL structure like an end user would, making multiple requests as you go, or would you bake the URLs into the code and request them directly?

I know GitHub provides direct URLs to most (all?) child data but in my API this doesn't exist.
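As an aside, the way GitHub provides those "direct URLs to child data" is largely through URI templates in its root document (for example, `repository_url` is `https://api.github.com/repos/{owner}/{repo}`). A client can expand such a template without walking the graph. A naive expander sketch (real RFC 6570 templates also support `{/path}` and `{?query}` forms, which this ignores):

```python
import re

def expand(template: str, **params) -> str:
    """Naively expand the simple {name} variables in a URI template."""
    return re.sub(r"\{(\w+)\}", lambda m: str(params[m.group(1)]), template)

# e.g. expand(root_doc["repository_url"], owner="octocat", repo="hello-world")
```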

asked Sep 6, 2016 at 15:40
  • Like RobertHarvey pointed out, it really depends on your use case. If you tell us more about your application, we may be able to provide better answers. Commented Sep 7, 2016 at 0:18
  • My main requirement is for ease of use and least brittle for the end user. I don't mind implementing something robust. Performance isn't a major consideration at this point. Commented Sep 7, 2016 at 9:08
  • so there is a user? Some person physically clicking links? Commented Sep 7, 2016 at 9:09
  • Yes. A tool that will utilise this framework. Commented Sep 7, 2016 at 9:11
  • Okay. One more time. Please clarify. Will there be a user deciding on where to navigate to next, or is this some kind of automated job where the computer just does things without human intervention? Commented Sep 7, 2016 at 11:00

1 Answer


If you wanted to return data from deep in the graph, would you navigate the URL structure like an end user would and make multiple requests as you go or would you bake the URLs into the code and request them directly?

That will depend on whether your emphasis is on flexibility or on performance and simplicity.

If you want to fully embrace the HATEOAS model, you will navigate the tree every time. Potentially, your software might even adjust automatically to a change in the API's structure, if it is smart enough to perform that discovery itself. It might also cache the URLs it discovers for later use.
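A minimal sketch of that navigate-and-cache approach, assuming (GitHub-style) that each response embeds the next hop under a `<rel>_url` key; the fetcher is injected so the example stays independent of any real API:

```python
def make_navigator(get_json, root_url):
    """Build a HATEOAS navigator over get_json(url) -> dict responses."""
    cache = {}  # rel-path tuple -> discovered URL

    def follow(*rels):
        key = tuple(rels)
        if key in cache:               # skip re-discovery on repeat calls
            return get_json(cache[key])
        url = root_url
        doc = get_json(url)
        for rel in rels:
            url = doc[rel + "_url"]    # read the next URL from the response
            doc = get_json(url)
        cache[key] = url               # remember the discovered URL
        return doc

    return follow
```

On the first call, `follow("user", "repos")` walks root → user → repos; subsequent calls hit the cached URL with a single request.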

If you want an emphasis on performance and simplicity, you'll create an Adapter that accesses the URLs of interest directly. If a change occurs in the API, you use HATEOAS to discover the change, and bake in new URLs.
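A sketch of that adapter approach, with the URL baked in for direct access and a rediscovery hook that re-walks the links when the structure changes (all endpoint names and paths here are hypothetical):

```python
class RepoAdapter:
    """Adapter that hits a baked-in URL directly, with HATEOAS fallback."""

    def __init__(self, get_json, root_url):
        self._get = get_json
        self._root = root_url
        # Baked-in URL, assumed stable until rediscovery says otherwise.
        self._repos_url = root_url + "user/repos"

    def repos(self):
        return self._get(self._repos_url)  # one request, no traversal

    def rediscover(self):
        """Re-walk the HATEOAS links from the root and re-bake the URL."""
        user = self._get(self._get(self._root)["user_url"])
        self._repos_url = user["repos_url"]
```

The happy path costs one request; only when the API moves things around do you pay the traversal cost again, once, inside `rediscover`.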

answered Sep 6, 2016 at 15:55
