In Dataiku's default PySpark recipes, dataiku.spark's get_dataframe takes a SQLContext and returns a Spark DataFrame. SQLContext has been a legacy entry point since Spark 2.0, when SparkSession became the recommended entry point for SQL operations; the two have a few subtle but important differences. While one can create a SparkSession manually, it doesn't appear to work with Dataiku's DataFrame API.
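For reference, this is roughly what the default recipe boilerplate looks like, with the legacy SQLContext in play (the dataset name is a placeholder):

```python
import dataiku
from dataiku import spark as dkuspark
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)  # legacy entry point, superseded in Spark 2.0

# Read a Dataiku dataset as a Spark DataFrame via the legacy SQLContext
my_dataset = dataiku.Dataset("my_dataset")  # placeholder dataset name
df = dkuspark.get_dataframe(sqlContext, my_dataset)
```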
Please see here for specifics:
https://spark.apache.org/docs/2.3.2/api/python/pyspark.sql.html#pyspark.sql.SQLContext
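For what it's worth, in Spark 2.x a SQLContext is a thin wrapper around a SparkSession, so one can reach the underlying session from the legacy context rather than building a second one. A sketch, assuming the default recipe's sqlContext is already in scope:

```python
from pyspark.sql import SparkSession

# Get the SparkSession backing the legacy SQLContext, so both objects
# share the same underlying state
spark = sqlContext.sparkSession

# Session-style operations can then run alongside
# dkuspark.get_dataframe(sqlContext, ...) calls
spark.sql("SELECT 1").show()
```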