-
Change Auto-Typing to an off or on option with default “Off”
I would like Auto-Typing to be an option that can be turned off and on, with the default being “Off”. This feature is changing my unit serial numbers (230836735F) to a float (2.30836735E8), which causes me to lose records when joining on the unit serial number field in a following step. This will cause my…
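A common workaround when reading the data back in a Python recipe is to keep the DSS schema types instead of letting pandas re-guess them. A minimal sketch, with a hypothetical dataset and column name:

```python
import dataiku

# Hypothetical dataset name. With infer_with_pandas=False, the
# dataframe uses the storage types declared in the DSS schema,
# so a serial number stored as a string stays a string instead
# of being re-inferred as a float like 2.30836735E8.
units = dataiku.Dataset("units").get_dataframe(infer_with_pandas=False)
print(units["serial_number"].dtype)  # expect object (string), not float64
```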
-
Renaming a dataset using Python API
Dear Community, I am trying to rename a dataset in a project via the Python API, using the rename method from the dataikuapi.dss.dataset.DSSDataset class (https://developer.dataiku.com/latest/api-reference/python/datasets.html#dataikuapi.dss.dataset.DSSDataset.rename), but I get an AttributeError: 'DSSDataset' object has…
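For reference, a minimal sketch of the call the linked docs describe, assuming a dataikuapi version recent enough to ship DSSDataset.rename (host, API key, project and dataset names are placeholders); one possible cause of an AttributeError like this is an installed dataikuapi package older than the documented method:

```python
import dataikuapi

# Placeholder host, API key, project key and dataset names.
client = dataikuapi.DSSClient("https://dss.example.com:11200", "my_api_key")
project = client.get_project("MYPROJECT")
dataset = project.get_dataset("my_dataset")

# rename() only exists in dataikuapi versions that ship it; on older
# packages this is where the AttributeError would be raised.
dataset.rename("my_dataset_v2")
```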
-
Longer Connection text box on New Snowflake dataset page as needed
Request to lengthen the Connection text box on the New Snowflake dataset page so that it fits the full connection name when that name is longer than the current box. Our organization has a standard prefix for connections based on division/team/project, so I have multiple connections with the same prefix…
-
Exception: Unable to fetch schema for PROJECT.dataset: b'Ticket not given or unrecognized'
Hi there, I am suddenly unable to load datasets into a Jupyter Notebook. Changing the environment/kernel doesn't help. A system reboot doesn't help. Force reloading doesn't help either. Nothing was changed in the code. The Flow still runs, so it works as a recipe but not when trying to work in the…
-
Perform quick SQL query on SQL dataset from UI
For my workflow it would be very helpful to have the option to perform a quick SQL query on a (SQL) dataset in the Flow from the UI, for example by right-clicking: things like counting the distinct values of a specific column, etc. Right now, I go to my separate SQL client to perform these quick checks, but that requires tool…
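As a stopgap without leaving DSS, such quick checks can be run from a notebook with SQLExecutor2. A minimal sketch; the dataset, table and column names are placeholders:

```python
import dataiku
from dataiku import SQLExecutor2

# Placeholder dataset/table/column names. The query executes on the
# dataset's own SQL connection, so no data leaves the database.
executor = SQLExecutor2(dataset=dataiku.Dataset("my_sql_dataset"))
df = executor.query_to_df(
    'SELECT COUNT(DISTINCT "my_column") AS distinct_values FROM "my_table"'
)
print(df)
```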
-
Setting up Stages in Snowflake to work with Dataiku
In Dataiku DSS, when working with Snowflake there is an option to use a stage. This apparently improves performance by broadening the types of processing that can run inside Snowflake without shipping data back to the DSS server. Are folks using this feature? What has your experience…
-
Refresh partitions in DSS via API
Hi, we have added a new dataset to the project via the Python API, pointing it at an existing HDFS location where partition folders are stored. (This location is managed by another DSS instance.) This kind of "import" of a read-only dataset works, but I did not find a way to "refresh" the list of partitions, i.e.…
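One thing worth trying is DSSDataset.list_partitions from the public API. A sketch with placeholder host, key and names; whether this re-scans storage or returns a cached listing may depend on the DSS version:

```python
import dataikuapi

# Placeholder host, API key, project key and dataset name.
client = dataikuapi.DSSClient("https://dss.example.com:11200", "my_api_key")
dataset = client.get_project("MYPROJECT").get_dataset("imported_hdfs_dataset")

# Lists the partition identifiers of the dataset; for a files-based
# dataset this enumerates the partition folders on the connection.
print(dataset.list_partitions())
```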
-
How to programmatically refresh input dataset partitions with Snowflake?
Hi, I’m working with a Snowflake-partitioned dataset that serves as an input in my project flow. I’d like to automate the refresh of the partition listing, which is normally done manually using the "REFRESH PARTITIONS" button in the Metrics tab. We previously managed to do this with S3 using the…
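In case it helps, a sketch of the same check from inside DSS (e.g. a scenario "Execute Python" step) with the internal API; the dataset name is a placeholder, and I cannot confirm this triggers the same re-detection as the "REFRESH PARTITIONS" button for Snowflake:

```python
import dataiku

# Placeholder dataset name; run e.g. in a scenario Python step.
ds = dataiku.Dataset("snowflake_partitioned_input")

# Lists the partition identifiers currently detected for the dataset.
partitions = ds.list_partitions(raise_if_empty=False)
print("%d partitions: %s" % (len(partitions), partitions))
```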
-
Recipe execution is taking a long time due to handling a large volume of data in Dataiku
We are experiencing long execution times for a recipe in Dataiku due to handling large datasets. While we have implemented partitioning using a filter on a specific column, it still takes 1.5-2 hours to partition 30M records. Is there a more efficient way to handle and process this data quickly and effectively, because…
-
How to execute a recipe after an empty dataset?
Is there any possible way of checking the readiness of a dataset? I have a dataset that might be empty after a Hive query; it shouldn't be a problem, but since it is (I cannot use it in a left join...), I decided to build another dataset that would contain either the result if it exists or a dummy line if it does not. All this…
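A sketch of that fallback-dataset idea as a Python recipe, with all names hypothetical: read at most one row to test emptiness cheaply, then write either the real result or a dummy line.

```python
import dataiku
import pandas as pd

# Hypothetical names: the possibly-empty Hive output and the
# dataset that the downstream left join will consume.
src = dataiku.Dataset("hive_query_output")
out = dataiku.Dataset("joinable_output")

# Fetch at most one row just to test emptiness.
probe = src.get_dataframe(limit=1)

if probe.empty:
    # Write a single dummy line so the downstream join has a row to work with.
    out.write_with_schema(pd.DataFrame([{"unit_id": "DUMMY"}]))
else:
    out.write_with_schema(src.get_dataframe())
```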