Allow S3-Compatible Cloud Object Storage in CloudETL Example #1264
Conversation
Hi @davetroiano - you replied to me on #1262 last month, pointing to the PR for the CLI 4-compatible samples. Any interest in taking this on? It shows how you can send data to any S3-compatible cloud object store (Backblaze, MinIO, IBM Cloud Object Storage, etc.) by setting the endpoint and region in the AWS profile, which you've likely already done if you're using any of the AWS SDKs or the CLI with one of those providers.
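To illustrate "setting the endpoint and region in the AWS profile", here is a hypothetical AWS `config` file. The profile name `b2` is a placeholder; the endpoint shown is the Backblaze B2 us-west-004 endpoint mentioned elsewhere in this PR:

```ini
# ~/.aws/config -- hypothetical profile for an S3-compatible provider
[profile b2]
region = us-west-004
endpoint_url = https://s3.us-west-004.backblazeb2.com
```

With a profile like this in place, the AWS CLI and SDKs (and, with this change, the CloudETL scripts) pick up the custom endpoint without any code changes.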
Description
This change allows the use of an S3-compatible cloud object store such as Backblaze B2 with the CloudETL example as an alternative to Amazon S3.
With this change, the user may configure an AWS profile with `endpoint_url` set to a value such as `https://s3.us-west-004.backblazeb2.com` in the `config` file.
The `setup_s3_storage.sh` script reads this value (via `aws configure get endpoint_url --profile $S3_PROFILE`) into a new `S3_ENDPOINT_URL` environment variable, which is used as `store.url` for the connectors. If the endpoint URL is not set in the profile, then the Amazon S3 global default, `https://s3.amazonaws.com`, is used.
There is also a fix to `read-data.sh`: the `list-objects` call was missing `--profile $S3_PROFILE`, so it incorrectly used the default profile.
Author Validation
[x] cloud-etl
Reviewer Tasks
Describe the tasks/validation that the PR submitter is requesting to be done by the reviewer.
[ ] cloud-etl
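The endpoint lookup and fallback described in the PR description can be sketched as a small shell fragment. The variable names (`S3_PROFILE`, `S3_ENDPOINT_URL`) follow the description above; treat this as a sketch of the logic, not the actual contents of `setup_s3_storage.sh`:

```shell
#!/bin/sh
# Sketch of the endpoint fallback described in this PR (assumed
# variable names; not the actual setup_s3_storage.sh source).
S3_PROFILE="${S3_PROFILE:-default}"

# Read endpoint_url from the AWS profile; this prints nothing if the
# key is unset in the profile (or if the aws CLI is unavailable).
S3_ENDPOINT_URL="$(aws configure get endpoint_url --profile "$S3_PROFILE" 2>/dev/null)"

# Fall back to the Amazon S3 global default when no endpoint is configured.
S3_ENDPOINT_URL="${S3_ENDPOINT_URL:-https://s3.amazonaws.com}"

echo "Using store.url=$S3_ENDPOINT_URL"
```

The `${VAR:-default}` parameter expansion keeps the fallback in one place, so the connectors always receive a non-empty `store.url`.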