How to read parquet file from GCS using pyspark?
Hi,
My parquet files, stored in GCS, were written with a parquet version too recent to be read by DSS through a GCS managed dataset.
So I am trying to read them via Spark and save them to another dataset.
Doing it with local files is very easy, but how do I do it with files stored in GCS?
import dataiku

# This works for a local (filesystem) managed folder:
folder = dataiku.Folder("SpTdwpr2")
path = folder.get_path()
df = sqlContext.read.parquet(f'{path}/test_parquet.parquet')
With many thanks for your help.
C.
Best Answer
Sarina (Dataiker)
Hi @Chiktika,
I'll walk through a setup that worked for me, and hopefully that will help.
Here's a bucket I have in GCS that contains a parquet file:
I created a managed folder that points to this bucket with the following settings:
Here are a couple of options for using sqlContext.read.parquet to read the parquet files in this folder. The first option is to get the filenames from the bucket one by one; each filename can then be used directly, like so: sqlContext.read.parquet('gs://sarina-bucket/dataiku/DKU_HAIKU_STARTER/gcp_parquet_file/part-r-00000.snappy.parquet'). The second option is to read the whole directory with a wildcard: sqlContext.read.parquet('gs://sarina-bucket/dataiku/*/*/*.parquet'), where the path can also be generated from the folder's get_info() and list_paths_in_partition() functions.
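As a minimal sketch of both options in a PySpark recipe: the folder name "gcp_parquet_folder" is an assumption, and it assumes get_info() exposes the backing bucket and root path under accessInfo, as it does for cloud-storage managed folders; adjust the names to your own setup.

import dataiku
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)

# Hypothetical folder name -- replace with your own GCS managed folder
folder = dataiku.Folder("gcp_parquet_folder")
info = folder.get_info()
bucket = info["accessInfo"]["bucket"]  # bucket backing the managed folder (assumed key)
root = info["accessInfo"]["root"]      # path inside the bucket (assumed key)

# Option 1: read the parquet files one by one
for path in folder.list_paths_in_partition():
    if path.endswith(".parquet"):
        df = sqlContext.read.parquet(f"gs://{bucket}{root}{path}")

# Option 2: read everything under the folder in one go with a wildcard
df = sqlContext.read.parquet(f"gs://{bucket}{root}/*.parquet")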
And then to write this to a dataset:
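As a sketch of that write step, assuming an output dataset named "parquet_output" already exists in the flow:

import dataiku
import dataiku.spark as dkuspark

# Hypothetical output dataset name -- replace with your own
output_dataset = dataiku.Dataset("parquet_output")

# Write the Spark dataframe to the dataset, letting DSS set the schema from the dataframe
dkuspark.write_with_schema(output_dataset, df)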
I'm not sure if this addresses your use case, so please feel free to add any details if it does not.
Thanks,
Sarina
Answers
In fact I did not do exactly the same thing: I did not create a managed folder, but instead read directly from my GCS bucket.
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.lookup_bucket(bucket_name)
blobs = bucket.list_blobs(prefix=bucket_path)
for blob in blobs:
    df = sqlContext.read.parquet(f"gs://{bucket_name}/{blob.name}")