
Multi-dimensional support - a spec for overviews? #1097

scottstanie started this conversation in General

Hi,

I'm trying to understand the current state and plans for multidimensional support for titiler.

Specifically, I'm wondering about the standardization of overviews/quicklooks/other names for pre-computed, smaller versions of the data. It looks like discussion along those lines is happening in #1071, but I couldn't tell whether that has a specific use case in mind, or is trying to generally specify the format for holding overview layers for Zarr (which, if well specified, also seems useful for HDF5/NetCDF/other multi-dimensional raster formats).

My use case has been to take a time series of geospatial rasters (from InSAR data) and make a little viewer on top of titiler to explore different pixels' deformation patterns, using the masking suggestion you gave:
[Animated GIF demo: bowser-js-demo-20250207-small]
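For context, the per-pixel lookup behind the viewer is roughly the following: a minimal sketch assuming one COG per acquisition date and a stock titiler deployment exposing the standard `/cog/point` endpoint. The base URL, dates, and COG paths are made-up placeholders, not the actual product layout.

```python
# Sketch: fetch a deformation time series for one clicked pixel by hitting
# titiler's /cog/point endpoint once per date. All URLs/dates are examples.
import requests

TITILER_URL = "https://titiler.example.com"  # hypothetical deployment
COGS_BY_DATE = {  # hypothetical: one displacement COG per acquisition date
    "2024-01-01": "s3://my-bucket/displacement_20240101.tif",
    "2024-01-13": "s3://my-bucket/displacement_20240113.tif",
}

def point_timeseries(lon: float, lat: float) -> dict:
    """Return {date: displacement value} for a single pixel."""
    series = {}
    for date, cog_url in COGS_BY_DATE.items():
        resp = requests.get(
            f"{TITILER_URL}/cog/point/{lon},{lat}", params={"url": cog_url}
        )
        resp.raise_for_status()
        # titiler returns the sampled band values in a "values" array
        series[date] = resp.json()["values"][0]
    return series
```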

I brought up the other non-Zarr multidimensional formats since that's what new data is going to be produced in, but I'd understand if you were focusing efforts on a single file format.


Replies: 1 comment 4 replies

---

Hi @scottstanie

> It looks like discussion along those lines is happening in #1071, but I couldn't tell whether that has a specific use case in mind, or is trying to generally specify the format for holding overview layers for Zarr

Yeah, we are looking specifically at the GeoZarr specification.

> (which, if well specified, also seems useful for HDF5/NetCDF/other multi-dimensional raster formats)

Sadly, it seems there is no multiscale/overview specification for those other raster formats 🤷 so it will be hard to support them.

> I brought up the other non-Zarr multidimensional formats since that's what new data is going to be produced in, but I'd understand if you were focusing efforts on a single file format.

What is the file format you're going to use?

4 replies
---

The InSAR Displacement product that's coming out soon is using HDF5 with the "cloud optimized" aggregated metadata. But since it's pretty far along and close to production, it would be hard to add extra quicklook datasets to it anyway.
I'm mostly tracking this discussion to look for improvements to the output format of the underlying processing library. Since the GeoZarr spec hasn't solidified yet, I haven't tried to add that as an output option; I've just output a bunch of COGs to use with titiler.
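A minimal sketch of that interim COG step, using rio-cogeo; the filenames here are placeholders, not the actual product names:

```python
# Sketch: convert one per-date GeoTIFF into a COG with internal overviews,
# so titiler can serve it efficiently. Filenames are hypothetical.
from rio_cogeo.cogeo import cog_translate
from rio_cogeo.profiles import cog_profiles

profile = cog_profiles.get("deflate")  # DEFLATE-compressed COG profile
cog_translate(
    "displacement_20240101.tif",      # source raster (assumed name)
    "displacement_20240101_cog.tif",  # output COG, overviews built internally
    profile,
)
```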

---

GeoZarr is at 0.4, and we are looking at using it in production for some projects 🤷

Right now it is missing tools and example datasets, but we're working with @maxrjones to fix this 😄

---

I've had this idea that we should be able to combine virtualizarr, for accessing the raw data in HDF5, with downsampled versions stored as native Zarr. One could then use titiler with the xarray extension to access both the raw data and the overviews. As in the current version of GeoZarr, the overviews would follow https://docs.ogc.org/is/17-083r4/17-083r4.html (a rough sketch of this layout follows below).

@vincentsarago raised some valid concerns that simple downsampled versions would be preferable to common TMSs in many cases, but we hope to demonstrate that custom TMS grids can both be GeoZarr compliant and contain simple downscaled versions. All of the necessary pieces are under rapid development; my hope is that we'll have this demoed by EGU (late April) or SciPy (July).

@scottstanie what's your timeline for finalizing the approach for dynamic tiling of OPERA products?
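Roughly, under today's (still-moving) APIs: virtual references for the full-resolution HDF5 data, plus coarsened copies written as native Zarr next to them. The paths, dimension names, and level count below are assumptions for illustration, not a settled convention.

```python
# Sketch: level 0 = virtual references into the original HDF5 (no bytes
# copied); levels 1..3 = real Zarr stores holding downsampled copies.
# virtualizarr's API is still evolving, so treat the details loosely.
import xarray as xr
from virtualizarr import open_virtual_dataset

# Level 0: reference the raw HDF5 chunks virtually
vds = open_virtual_dataset("displacement.h5")
vds.virtualize.to_kerchunk("overviews/0_refs.json", format="json")

# Levels 1..3: progressively coarsened copies stored as native Zarr
ds = xr.open_dataset("displacement.h5", engine="h5netcdf")
for level in (1, 2, 3):
    factor = 2**level
    downsampled = ds.coarsen(y=factor, x=factor, boundary="trim").mean()
    downsampled.to_zarr(f"overviews/{level}.zarr", mode="w")
```

titiler's xarray extension could then be pointed at whichever level best matches the requested zoom.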

---

Awesome, thanks for the background.

I don't know if there's a firm timeline for adding easy-visualization sidecar files, actually. The main products should start coming out soon (within a month-ish), but the setup we have is that JPL runs the science algorithms to make the main, full-resolution products (30 meters, in this case), and these get sent to the ASF DAAC for archiving.
ASF has done a lot of work to make accessible visualizations on top of that, and they're already using kerchunk for the first version of the upcoming visualization portal (roughly the step sketched below). Right now they are doing static tiling (as it's part of the current stack for https://search.asf.alaska.edu), but having separate down-sampled versions could be interesting both for visualization and for certain kinds of analysis, so that people can download less data if they need only ~100s of meters of resolution.
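For reference, that kerchunk step presumably looks something like this: a minimal sketch that turns one HDF5 product into a reference file readable as Zarr over HTTP. The product URL is hypothetical.

```python
# Sketch: generate kerchunk references for a single HDF5 product so it can
# be opened as Zarr without reformatting. The S3 URL is a placeholder.
import json

import fsspec
from kerchunk.hdf import SingleHdf5ToZarr

url = "s3://asf-bucket/OPERA_DISP_example.h5"  # hypothetical product location
with fsspec.open(url, "rb") as f:
    refs = SingleHdf5ToZarr(f, url).translate()

with open("disp_refs.json", "w") as out:
    json.dump(refs, out)
```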
I'll bring this up to them at our next meeting.
