-
Integration for Software Composition Analysis (SCA) and Static Analysis (SA)
Allow the integration with tools like BlackDuck to perform a Software Composition Analysis of Code Environments, notebooks and webapps developed within Dataiku.
-
Override the sample size for Snowflake datasets
One of the challenges we face with the newer versions of Dataiku is the inability to override the sample size. By default, Dataiku chooses a sample size of 10k rows, so with Snowflake a query like 'select * from table limit 10000' gets fired. If the Snowflake table is large, with many columns, then generating…
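A minimal sketch of the kind of control this idea asks for. Note this is not an existing Dataiku API: `build_sample_query` is a hypothetical helper showing how a configurable row limit and an explicit column list would keep the sampling query cheap on a very wide Snowflake table.

```python
# Hypothetical helper (NOT a Dataiku API): build a leaner sampling query
# than the default "select * from table limit 10000".
def build_sample_query(table, columns=None, limit=1000):
    """Return a Snowflake sampling query with a configurable row limit.

    Passing an explicit column list avoids scanning every column of a
    very wide table; columns=None falls back to '*'.
    """
    col_list = ", ".join(columns) if columns else "*"
    return f"SELECT {col_list} FROM {table} LIMIT {limit}"
```

For example, `build_sample_query("SALES", ["ID", "AMOUNT"], limit=500)` yields `SELECT ID, AMOUNT FROM SALES LIMIT 500` instead of selecting every column at the fixed 10k default.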
-
Reordering Tables in Join Visual Recipes
User Story As an Analyst using Dataiku DSS with a very wide dataset made from more than 5-6 source tables. When specifying Joins and Selected columns, I'd like to be able to reorder the list of tables. This would allow better gross organization of resulting column names. COS: * This should work smoothly with the primary…
-
Document the JSON behind each visual step type in a visual recipe
User Story As an analyst that regularly uses visual recipes, I find from time to time that the user interface will not allow me to maintain the recipe as I would like. This usually happens with replace recipes that do not allow for reordering steps. In order to get around this problem I will copy the recipe step to my…
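The step-JSON schema is undocumented, which is exactly what this idea asks to fix, so the sketch below uses placeholder step objects (the `type` values and `disabled` flag are assumptions, not the real format) purely to illustrate the reorder-by-editing-JSON workaround:

```python
# Assumed payload shape for illustration only; the real step JSON is
# undocumented (the point of this idea).
recipe_json = {
    "steps": [
        {"type": "ColumnRenamer", "disabled": False},
        {"type": "FindReplace", "disabled": False},
        {"type": "ColumnsSelector", "disabled": False},
    ]
}

def move_step(steps, from_idx, to_idx):
    """Reorder a step list the way the UI currently won't let you."""
    steps = list(steps)  # copy so the original payload is untouched
    steps.insert(to_idx, steps.pop(from_idx))
    return steps

reordered = move_step(recipe_json["steps"], 2, 0)
```

With documented JSON, this kind of edit (then pasting the payload back into the recipe) would be safe rather than guesswork.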
-
Labeling > Annotate > Option to Zoom In/Out
The new labeling and image annotation feature works great for normal-sized images. But for big images where you need to label smaller sections, a zoom-in function could make the labeling process easier and faster.
-
Bulk type change from prepare recipe
Currently, it's possible to bulk change the meaning of many columns using the prepare recipe's column view. However, it's not possible to change the types the same way from this interface. While the schema can be changed from the downstream dataset, there's no way to change the target schema of the recipe without manually…
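A sketch of the scripted workaround the UI lacks, assuming the usual Dataiku convention that a schema's columns are a list of `{"name": ..., "type": ...}` dicts; writing the result back (e.g. via the dataset's schema) is left out:

```python
# Illustrative sketch: bulk type override on a schema's column list.
def bulk_set_types(columns, type_map):
    """Return a copy of `columns` with types overridden per `type_map`."""
    return [
        {**col, "type": type_map.get(col["name"], col["type"])}
        for col in columns
    ]

columns = [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "string"},
]
# Change several column types in one pass, as the column view allows
# for meanings but not for storage types.
updated = bulk_set_types(columns, {"order_id": "bigint", "amount": "double"})
```

The same one-pass mapping is what a "bulk type change" control in the prepare recipe's column view would do for you.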
-
partition dependency function - allow custom testing
When establishing a dependency function, the predefined test offered by Dataiku is not always relevant. This is especially true when working with time-based partitions, as the default test may extend back as far as 2013. It would be highly beneficial to empower users to customize or override Dataiku's default test.…
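A sketch of the kind of custom test this idea asks for. The interface is assumed (there is no such hook today): instead of probing every daily partition back to 2013, a user-supplied function could bound the candidate window:

```python
from datetime import date, timedelta

# Assumed user-defined test: only probe a bounded window of daily
# partitions, rather than Dataiku's default open-ended lookback.
def candidate_partitions(today, window_days=30):
    """Yield daily partition ids (YYYY-MM-DD) for the last `window_days` days."""
    return [(today - timedelta(days=d)).isoformat() for d in range(window_days)]
```

For example, with `window_days=3` and today = 2024-01-31, only the three most recent daily partitions would be tested.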
-
partition custom dependency function - make the dataiku library available
When creating a custom dependency function, having access to Dataiku's API can be highly beneficial. In my experience, there was a frequent requirement to parse input partitions. Unfortunately, this was not possible because the Dataiku library was not accessible in the current namespace. Attempting to import the Dataiku…
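Until the `dataiku` library is importable in that namespace, partition identifiers can be parsed by hand. The sketch below assumes the `|`-separated format Dataiku uses for multi-dimension partition ids; the dimension names are the caller's:

```python
# Workaround sketch: parse a partition id without the dataiku library,
# which is not importable inside the custom dependency function.
def parse_partition_id(partition_id, dimensions):
    """Split '2023-05-01|US' into {'date': '2023-05-01', 'country': 'US'}."""
    values = partition_id.split("|")
    if len(values) != len(dimensions):
        raise ValueError(
            f"expected {len(dimensions)} dimensions, got {len(values)}"
        )
    return dict(zip(dimensions, values))
```

Having the real Dataiku API available would make this hand-rolled parsing unnecessary, which is the point of the idea.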
-
Control over scenario catch-up functionality
Turns out there is a "hidden" and undocumented scenario catch-up functionality built into Dataiku. You can read about it here. This idea is to have control over this hidden scenario catch-up functionality. In particular, we would like to disable the scenario catch-up functionality so that Dataiku scenarios do not…
-
Instance-wide Scenario On/Off Switch
User Story: As a Dataiku DSS instance Administrator who occasionally has to "turn off" triggered scenarios while working on a Dataiku DSS instance, I would like a master triggered-scenario "off switch". This would deactivate triggered scenarios across all projects within a single DSS instance. This would mean that I can…