My flow reads 5 input files from an S3 bucket based on a trigger file.
Sometimes not all 5 files are delivered, but the trigger file is still placed. My flow works well when all files are present, but it fails when one or more files are missing and the scenario is started.
Is there any way to skip the part of the flow for which we don't have a file? Or any other solution to overcome this issue?
My current design is:
Folder -> Create dataset (for all 5 files) -> Sync -> Stack recipe to combine the data -> Final calculation
Just to add a note:
From the trigger file, I can find out which files were placed in the input path.
So I am writing Python code based on the trigger file content, but I am unable to read Excel files from the path. This is what I tried:
with handle1.get_download_stream('/Sites.xlsx') as f:
    data = f.read()  # raw bytes, not a DataFrame
Is there a way to read an Excel file specifically from S3 and load it into a DataFrame?
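For the "skip the missing files" part of the question, one option is to compute the list of files to process from the trigger file before running the rest of the flow. This is only a minimal sketch, assuming the trigger file lists the delivered filenames one per line (the file names other than Sites.xlsx are hypothetical):

```python
def files_to_process(trigger_contents: str, expected: set) -> set:
    """Return only the expected input files that the trigger file says were delivered."""
    delivered = {line.strip() for line in trigger_contents.splitlines() if line.strip()}
    return expected & delivered

# Example: trigger file says two of the three expected files arrived.
present = files_to_process(
    "Sites.xlsx\nAssets.xlsx\n",
    {"Sites.xlsx", "Assets.xlsx", "Meters.xlsx"},
)
print(sorted(present))  # ['Assets.xlsx', 'Sites.xlsx']
```

A scenario step (or the recipe itself) could then loop over `present` and skip everything else, instead of assuming all 5 files exist.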
Not sure if you're still looking for this; I came across this thread while searching for a solution to something similar. For me, pandas' read_excel worked.
In Dataiku, I created a folder pointing at my S3 bucket, which has some Excel files in it, and created a Python recipe to read the Excel files from that S3 folder:
import dataiku
import pandas as pd

s3_folder = dataiku.Folder('vx343....')
df = pd.read_excel(s3_folder.get_download_stream('my_excel.xlsx'), sheet_name='mydata')
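The reason this works is that read_excel accepts any binary file-like object, not just a path. Here is a self-contained sketch of that behavior using an in-memory buffer in place of the S3 download stream (the column names and sheet name are made up for the example; writing/reading .xlsx assumes openpyxl, pandas' default xlsx engine, is installed):

```python
import io
import pandas as pd

# Build an in-memory .xlsx file to stand in for the S3 download stream.
buf = io.BytesIO()
pd.DataFrame({"site": ["A", "B"], "kwh": [10, 20]}).to_excel(
    buf, index=False, sheet_name="mydata"
)
buf.seek(0)

# read_excel takes the file-like object directly, exactly as it takes
# the stream returned by folder.get_download_stream(...) in a recipe.
df = pd.read_excel(buf, sheet_name="mydata")
print(df.shape)  # (2, 2)
```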
Hope this is helpful for you or others who come across this thread.