
PySpark Notebook - Insert column with the Partition ID

Level 1

I am working with a PySpark Notebook.

I have a partitioned Dataset and I would like to add a column to it containing the partition ID value.

The result I want is the same dataset, no longer partitioned, but with an "id_partition" column; this column does not appear when I import the Dataset in the Notebook.

The goal is to do this only in the Notebook, without modifying the Flow.

Thanks in advance!

3 Replies

Maybe this part of the documentation can help:

But I think this option is only available when you connect to a dataset with dataiku.Dataset. If you are using dataiku.spark.get_dataframe(sqlContext, dataset), I'm not sure what the solution could be.
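The dataiku.Dataset route would roughly mean listing the dataset's partitions, reading each one separately, and tagging its rows with the partition ID before concatenating. A minimal sketch of that loop, with a plain dict and a `read_partition` helper standing in for the real DSS calls (both made up here, since `dataiku` is only available inside DSS):

```python
# Pretend each partition id maps to the rows stored in that partition.
# In DSS, the partition ids would come from something like
# dataiku.Dataset("my_dataset").list_partitions().
partitions = {
    "2024-01-01": [{"value": 1}, {"value": 2}],
    "2024-01-02": [{"value": 3}],
}

def read_partition(pid):
    # Hypothetical stand-in for reading a single partition in DSS
    # (e.g. restricting the read to one partition, then get_dataframe()).
    return partitions[pid]

rows = []
for pid in partitions:
    for row in read_partition(pid):
        # Tag each row with the partition it came from.
        rows.append(dict(row, id_partition=pid))

print(rows)
```

The result is one flat row list where every row carries its originating partition ID, which is exactly the unpartitioned-dataset-plus-"id_partition"-column shape the question asks for.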

Hope this helps a bit.

Level 1

Thanks for your answer. I am fairly sure I have to use the DSS Dataset library rather than a Spark function.

However, I am really struggling to use this library to go from:

row1

row2

row3

to:

row1 | partition_name 1

row2 | partition_name 1

row3 | partition_name 2
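That target layout can be produced by tagging each partition's rows with the partition name and concatenating. On the Spark side, the tagging step would be `df.withColumn("id_partition", F.lit(pid))` followed by a union of the per-partition DataFrames; here is the same idea sketched with plain lists (the partition names and rows are taken from the example above):

```python
# Per-partition row lists, mirroring the example in the post.
partitioned = {
    "partition_name 1": ["row1", "row2"],
    "partition_name 2": ["row3"],
}

# Pair each row with the name of the partition it belongs to,
# flattening everything into a single table.
table = [
    f"{row} | {pid}"
    for pid, rows in partitioned.items()
    for row in rows
]

for line in table:
    print(line)
# row1 | partition_name 1
# row2 | partition_name 1
# row3 | partition_name 2
```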


Maybe it is something you can do before starting to work in the PySpark notebook. There is a thread where you can "enrich" your partitioned dataset with the partition ID or name:

In my case, I created a dataset using a connection to HDFS, partitioned by 'day', and this is reflected in the path of the data: /home/data/day=Y-M-D/data.csv
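With that path layout, the partition value can also be recovered directly from the file path with a small regex. A sketch, assuming the `/day=.../` segment shown above (the concrete date in the example is made up; `day_from_path` is just an illustrative helper name):

```python
import re

def day_from_path(path):
    # Pull the value of the day=... segment out of an HDFS-style path,
    # e.g. /home/data/day=2023-05-01/data.csv -> "2023-05-01".
    m = re.search(r"/day=([^/]+)/", path)
    return m.group(1) if m else None

print(day_from_path("/home/data/day=2023-05-01/data.csv"))  # 2023-05-01
```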

When creating the dataset, I didn't get a column with the 'day', so I used the "prepare recipe" as recommended in that ticket:


After running the recipe, I had the data columns, plus a column with the day:



If your case is similar, that might help. I couldn't find a solution using the PySpark or Dataiku APIs in Python.
