To date I have created many workflows that programmatically deploy new data to AGOL Feature Services using the ArcGIS API and Python. The inherent issue with this is that any given update has the potential to fail partway through.
My current general workflow is either to grab the feature service and overwrite the dataset with the new one, or, where the data has not changed substantially, to update attribute data on existing features. While this works 99% of the time, there are special cases where data connections break or data formats are not matched appropriately during the update process, and the update fails altogether.
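For reference, this is roughly what those two paths look like in my scripts (the item ID, file path, and field name are placeholders, not my actual setup):

```python
from arcgis.gis import GIS
from arcgis.features import FeatureLayerCollection

gis = GIS("https://www.arcgis.com", "username", "password")
item = gis.content.get("<feature service item id>")

# Path 1: overwrite the entire dataset from a new source file.
# If the new file's schema or format doesn't match, this is
# where the update fails outright.
flc = FeatureLayerCollection.fromitem(item)
flc.manager.overwrite(r"path/to/new_data.zip")

# Path 2: update attributes in place when the data hasn't
# substantially changed ("STATUS" is a hypothetical field).
layer = item.layers[0]
features = layer.query(where="1=1").features
for f in features:
    f.attributes["STATUS"] = "Updated"
layer.edit_features(updates=features)
```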
Is there any means within AGOL (no Enterprise or Server solution) where data could be staged in some sort of replica before being merged with the main data source?
1 Answer
Not sure if this is a red herring for you, but are you aware of the new ArcGIS Online Data Pipelines app? It provides a sort of ModelBuilder/FME-like equivalent for preparing datasets, and it can be scheduled to run overnight. I've explored it but not used it in anger, so I'm not sure it offers the type of preparation you need, but I did find it quite intuitive and easy to use.
Unfortunately, having researched and explored the tool, it does not accomplish what I am looking for in my question. Data Pipelines is essentially a possible replacement for the scripts that I already have working, but does not really offer managed replicas. – Austin Averill, Oct 21, 2024 at 16:47
I think you can use Data Pipelines to connect to existing feature layers in AGOL. So would it work if you had one pipeline getting data into a 'staging' feature service, then another pipeline that copies from there into the master version? – SirLostalot, Oct 24, 2024 at 16:51
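For what it's worth, the copy step in that suggestion could also be scripted with the Python API rather than a second pipeline. A minimal sketch, assuming a staging layer and a master layer with matching schemas (item IDs are placeholders):

```python
from arcgis.gis import GIS

gis = GIS("https://www.arcgis.com", "username", "password")
staging_lyr = gis.content.get("<staging item id>").layers[0]
master_lyr = gis.content.get("<master item id>").layers[0]

# Pull everything from staging; only merge if the query succeeds
# and returned features, so a broken staging load never reaches
# the master layer.
staged = staging_lyr.query(where="1=1")
if len(staged.features) > 0:
    # Replace master contents with the validated staging snapshot
    master_lyr.delete_features(where="1=1")
    result = master_lyr.edit_features(adds=staged.features)
```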