I want to build a process that copies my Snowflake table data to S3.
I have a control table that captures which tables' data I need to copy to S3, based on a copy_flag column.
Based on that flag, I have to move the data to an S3 location.
Could you please help me with that?
Thanks in advance.
The recommended way to sync data from Snowflake to S3 in DSS would be to use a visual Sync recipe. This ensures DSS uses the fast path to unload data from Snowflake directly to S3.
If you need the dataset to depend dynamically on the value of the column, you could create a view in Snowflake, expose it as a database table dataset in DSS, and then sync that dataset to S3.
Hope this helps.
Yes, a Sync recipe can only have one dataset as input, so you would need to create a high number of datasets. Please note that you can create datasets through the API,
and then create the Sync recipes through the API as well.
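As a rough sketch of that API route (not verified against a live DSS instance): the helper below assumes a sync-type recipe builder obtained via `project.new_recipe("sync")` and an S3 connection named `"s3_conn"` — both assumptions you should check against the `dataikuapi` documentation for your DSS version. `project` would come from `dataikuapi.DSSClient(host, api_key).get_project(project_key)`.

```python
# Hypothetical sketch: for each flagged Snowflake table dataset, create a
# managed S3 output dataset plus a Sync recipe writing into it, using the
# DSS public Python API (dataikuapi). Names below are assumptions.

def create_sync_pipeline(project, table_datasets, s3_connection="s3_conn"):
    """Create one S3 output dataset and one Sync recipe per input dataset name.

    `project` is a dataikuapi DSSProject; `s3_connection` is the name of
    your S3 connection in DSS (an assumption -- replace with your own).
    Returns the names of the created output datasets.
    """
    created = []
    for name in table_datasets:
        out_name = f"{name}_s3"
        builder = project.new_recipe("sync")              # recipe creator/builder
        builder.with_input(name)                          # existing Snowflake dataset
        builder.with_new_output(out_name, s3_connection)  # new managed S3 dataset
        builder.create()
        created.append(out_name)
    return created
```

You could drive this from the rows of your control table (the ones where copy_flag is set), so the list of flagged tables stays the single source of truth.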
Is there any way to do this through Python code?
Just creating a cursor and then executing the unload command.
That would help in defining the path during execution, and also in tracking failed executions.
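A minimal sketch of that cursor-based approach, under some loudly labeled assumptions: a control table named `control_table` with `table_name` and `copy_flag` columns, and a named external stage `@my_stage` pointing at your S3 bucket (all hypothetical names — substitute your own). You would obtain `conn` with `snowflake.connector.connect(...)` from the snowflake-connector-python package.

```python
# Hypothetical sketch: unload every table flagged in a control table to S3
# via COPY INTO <location>, collecting failures so they can be tracked.
# control_table, copy_flag and @my_stage are assumed names, not real ones.

def build_unload_sql(table_name, s3_path):
    """Build a COPY INTO <location> statement unloading a table to S3.

    Uses a named external stage (@my_stage) -- an assumption; adjust to
    your own stage, or to a direct s3:// URL with credentials.
    """
    return (
        f"COPY INTO @my_stage/{s3_path}/ "
        f"FROM {table_name} "
        "FILE_FORMAT = (TYPE = PARQUET) "
        "OVERWRITE = TRUE"
    )


def unload_flagged_tables(conn):
    """Read the control table and unload each flagged table; return failures."""
    failed = []
    cur = conn.cursor()
    try:
        cur.execute(
            "SELECT table_name FROM control_table WHERE copy_flag = 'Y'"
        )
        for (table_name,) in cur.fetchall():
            try:
                # Path is derived per table at execution time, as you wanted.
                cur.execute(build_unload_sql(table_name, table_name.lower()))
            except Exception as exc:  # record and continue with the next table
                failed.append((table_name, str(exc)))
    finally:
        cur.close()
    return failed
```

One caveat: running this from a Python recipe bypasses the DSS Sync fast path mentioned above, so the resulting S3 files are not tracked as a DSS dataset unless you declare one over the same path.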