Dear all, When I let my recipe export/store the output in S3, it creates a file with this name: out-s0.csv.gz. Is there a way to change the name of the output file? Kind regards TonyR
Hello, I am using Dataiku in AWS - how can I add Python 3.7 to the environment? On my local machine I have added it to PATH, and all works well; however, I am not sure how to proceed with adding it in AWS …
Hello, Does anyone have any suggestions for how to get a dashboard that I have set as public to display on the homepage in the Dashboards section? I have made the dashboard public, and yet I even as owner o…
While working with a team partner, we started to see an unexpected behavior with a join recipe. We are trying to join 2 partitioned datasets, both at the day level, and we are using this dependence de…
Hi Team, I have a use case where I need to create a new column based on 3 columns X, Y and Z. If the value of X is within the range defined by columns Y and Z, then the new column has the value X, else t…
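A minimal pandas sketch of that between-range logic, assuming the data is already in a DataFrame (in DSS it would typically come from dataiku.Dataset(...).get_dataframe()); since the "else" value is truncated above, Y is used here purely as a placeholder fallback:

```python
import numpy as np
import pandas as pd

# Hypothetical sample data standing in for the real dataset
df = pd.DataFrame({
    "X": [5, 12, 7],
    "Y": [1, 10, 8],
    "Z": [6, 20, 9],
})

# New column: X when Y <= X <= Z, otherwise a fallback value
# (Y is only a placeholder here, the original question is truncated)
in_range = (df["X"] >= df["Y"]) & (df["X"] <= df["Z"])
df["new_col"] = np.where(in_range, df["X"], df["Y"])
```

The same condition can also be expressed in a DSS Prepare recipe formula, but the exact expression depends on the fallback value that the truncated post does not show.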
Hi there, I think it must be a very simple task, but I can't find any recipe or formula to count the number of sales per customer (I have a column "orderTransaction" and a column "IDbuyer"). T…
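This is typically done with a Group recipe aggregating a count per IDbuyer; the pandas equivalent, as a rough sketch using the two column names from the question (the output column name nb_sales is just an assumption), would be:

```python
import pandas as pd

# Hypothetical sample mirroring the two columns mentioned in the question
df = pd.DataFrame({
    "IDbuyer": ["A", "A", "B", "C", "C", "C"],
    "orderTransaction": ["t1", "t2", "t3", "t4", "t5", "t6"],
})

# Number of transactions per customer
sales_per_customer = (
    df.groupby("IDbuyer")["orderTransaction"]
      .count()
      .reset_index(name="nb_sales")
)
print(sales_per_customer)
```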
Where can I find a comprehensive list of charsets supported by the S3 connector (utf-8, windows-1252, etc.)? I'm looking for a specific charset, but I don't know if it's unsupported or if DSS doesn't r…
Is there a way to use an equivalent of dtype (from pd.read_table()) inside dataiku.Dataset() or dataiku.Dataset.get_dataframe()? my_file = pd.read_table("input_file", dtype={'field1': str, 'field2'…
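There is no dtype argument on get_dataframe() as far as I know; a common workaround, sketched below under the assumption of a dataset named "input_file" and a second column "field2" that should also be a string, is to turn off pandas type inference so the DSS schema drives the types, or to cast after loading:

```python
import dataiku

# Hypothetical dataset name; replace with the actual input dataset
ds = dataiku.Dataset("input_file")

# Option 1: let the DSS schema (not pandas inference) decide column types
df = ds.get_dataframe(infer_with_pandas=False)

# Option 2: cast after loading, mimicking pd.read_table(..., dtype={...})
# (field2's target type is truncated in the question, str is assumed here)
df = ds.get_dataframe()
df = df.astype({"field1": str, "field2": str})
```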
Hi, I installed DSS with Hadoop integration. Hive and SparkSQL recipes work well. Hive, SparkR, PySpark, Python and R notebooks also work. But Python and R recipes don't work (the buttons are grayed out). Does someo…
I have several Redshift datasets that have already been built in a project, and I was trying to deploy some very basic recipes. For instance, on one dataset - let's call it dataset X - I was simply tryi…