
Exporting a Dataset into Multiple .CSV files

Solved!
tgb417
Neuron

I have a dataset that will have to be imported into another system as a series of .CSV files.

Is there a way to have DSS write several files into a folder, based on selections from several columns of the dataset?

For example, if I had a vehicle dataset with Cars and Trucks, with 4-, 6-, and 8-cylinder engines, I'd like all of the 4-cylinder Cars in an appropriately named .csv file, likewise the 6-cylinder Cars in a separate .csv file, the 6-cylinder Trucks in a separate .csv file, and so forth.

I could do this kind of thing with the Split recipe. However, I'd have to write out a formula for each of the datasets, and I've got about 22 datasets. Although I could do this with the visual Split recipe, it would be tedious at best. Thoughts for an #EasyButton?

For this project I'm using the community edition with a nonprofit, so methods using partitions will not work out.


Operating system used: Ubuntu 18.04 LTS (running under WSL2 on Windows 11)

--Tom
2 Solutions
Ignacio_Toledo

Hi @tgb417,

I guess you'd like to avoid using a python recipe? That is what I'd do. I can't think about a way of doing this using no-code recipes, without being tedious, as you say.

Cheers!
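To make the Python-recipe idea concrete, here is a minimal sketch of the split using a pandas groupby. The function and column names are illustrative, not from the thread, and it writes to a local directory so it can run outside DSS; inside a DSS recipe the `to_csv` file write would instead become `output_folder.upload_data(file_name, group_df.to_csv(index=False).encode("utf-8"))` against a managed folder, as in the accepted answer below.

```python
import os
import pandas as pd

def split_to_csvs(df, by, out_dir):
    """Write one CSV per unique combination of the `by` columns.

    File names are built from the group key values, e.g. 'Car 4.csv'.
    Returns the list of file names written.
    """
    os.makedirs(out_dir, exist_ok=True)
    written = []
    for key, group_df in df.groupby(by):
        # groupby yields a scalar key for one column, a tuple for several
        parts = key if isinstance(key, tuple) else (key,)
        file_name = " ".join(str(p) for p in parts) + ".csv"
        group_df.to_csv(os.path.join(out_dir, file_name), index=False)
        written.append(file_name)
    return written

# Vehicle example from the question: one file per (type, cylinders) pair
vehicles = pd.DataFrame(
    {"type": ["Car", "Car", "Truck"], "cylinders": [4, 6, 6]}
)
# split_to_csvs(vehicles, ["type", "cylinders"], "output")
```

Because groupby enumerates the distinct value combinations for you, the same recipe covers any number of datasets without writing a formula per split.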


tgb417
Neuron
Author

Here is what I ended up with. Not elegant, but quick, and it seems to work OK.

import dataiku
import pandas as pd

# Get the output folder
output_folder = dataiku.Folder("<the identifier of the managed folder>")

# Read recipe inputs
Book_Data_For_Import = dataiku.Dataset("Book_Data_For_Import")
Book_Data_For_Import_df = Book_Data_For_Import.get_dataframe()

# Get the list of unique books
Books = Book_Data_For_Import_df['sp_act_desc'].unique()

# Loop through the books and write the eligible and not-eligible rows
for Book in Books:
    Book_df = Book_Data_For_Import_df.loc[Book_Data_For_Import_df['sp_act_desc'] == Book]

    file_name = Book + ' True.csv'
    print('Writing :', file_name)
    Book_Eligible_df = Book_df.loc[Book_df['Eligible']]
    output_folder.upload_data(file_name, Book_Eligible_df.to_csv(index=False).encode("utf-8"))
    print()

    file_name = Book + ' False.csv'
    print('Writing :', file_name)
    Book_Not_Eligible_df = Book_df.loc[~Book_df['Eligible']]
    output_folder.upload_data(file_name, Book_Not_Eligible_df.to_csv(index=False).encode("utf-8"))
    print()

 

--Tom


5 Replies

tgb417
Neuron
Author

@Ignacio_Toledo ,

Ok,  I'm writing up something right now.

Thanks. 

--Tom
Ignacio_Toledo

It looks good! One could perhaps save some lines at the end, but that is just being pedantic 😁

Have a great weekend Tom!

tgb417
Neuron
Author

@Ignacio_Toledo 

Agreed. The reason for the lack of a loop through the true and false variants is that, in the moment, I could not see a quick way to iterate over them, because of the cool ~ negation operator I was using to select from the Boolean column in the data.

I guess I could have nested some for loops: the first iterating over the book type, and the second pass iterating through the true and false values of the second column. That would cut some redundant code and be a tad more elegant. 😎 … Maybe …

However, that is a project for some other day.  This was about getting something done.
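For reference, that nested version might look like the following sketch. Column names are taken from the posted recipe; `folder_upload` is a hypothetical stand-in for `output_folder.upload_data` so the loop logic can be read (and tested) on its own.

```python
import pandas as pd

def split_books(df, folder_upload):
    """Write one CSV per (book, eligibility) pair via nested loops.

    `folder_upload(name, data)` stands in for
    dataiku.Folder.upload_data(name, data).
    """
    for book in df["sp_act_desc"].unique():
        book_df = df.loc[df["sp_act_desc"] == book]
        # Second pass: iterate over the Boolean column's two values,
        # replacing the duplicated True/False blocks in the recipe above
        for flag in (True, False):
            subset = book_df.loc[book_df["Eligible"] == flag]
            file_name = f"{book} {flag}.csv"
            folder_upload(file_name, subset.to_csv(index=False).encode("utf-8"))
```

Comparing `Eligible == flag` sidesteps the ~ negation entirely, since the False case falls out of the loop variable.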

--Tom