-
Window recipe not producing expected results when using DSS engine
Hi there, The issue I am having is that the DSS engine is producing a completely different result than when I use the SQL engine. Has anyone faced a similar issue? I would appreciate some insight on this. Basically, all I want to do is produce a column with the MAX() value computed from another column. No partitions, no…
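For reference, the expected result of a window MAX() with no partitions and no ordering is the global maximum broadcast onto every row. A minimal pandas sketch of that semantics (the column names `value` and `max_value` are hypothetical):

```python
import pandas as pd

# Hypothetical input: one numeric column "value"
df = pd.DataFrame({"value": [3, 1, 7, 5]})

# A window MAX() with no partition and no ordering is the global max,
# repeated on every row -- equivalent to SQL: MAX(value) OVER ()
df["max_value"] = df["value"].max()

print(df)
#    value  max_value
# 0      3          7
# 1      1          7
# 2      7          7
# 3      5          7
```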
-
Create SQL table for Dataset using python API
Using the Python API, I can create an SQL Dataset, or clear it using the DSSDataset.clear() method, but afterwards I have to manually click the "Create Table Now" button in the settings tab of the dataset before using it in recipes. Is there a way to achieve the same effect as clicking the button using the Python API? I checked the…
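One commonly suggested workaround, sketched below under the assumption that the dataset's schema is already configured: writing an empty dataframe with write_with_schema() from inside DSS forces the underlying table to be created. The dataset name is hypothetical.

```python
import dataiku
import pandas as pd

# Hypothetical SQL dataset that already exists in the project
ds = dataiku.Dataset("my_sql_dataset")

# Build an empty dataframe matching the dataset's configured schema
schema = ds.read_schema()
empty_df = pd.DataFrame({col["name"]: pd.Series(dtype="object") for col in schema})

# Writing the (empty) dataframe creates the underlying SQL table,
# similar in effect to clicking "Create Table Now" in the UI
ds.write_with_schema(empty_df)
```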
-
remapping connections for API services
Good day! In the API Designer, we can define connections to use with SQL query endpoints. How do we remap these connections based on deployments to different API nodes? (i.e. use a different connection for deployments to a production API node vs. deployments to an acceptance API node.) I don't see any option in the deployer UI…
-
variable expansion in SQL query endpoint
Good day, is it possible to use variables (instance-level, project-level, or otherwise) in the SQL statements of a SQL query endpoint in an API service? And if so, can they be used as database object identifiers (schema name, table name, etc.)? Kind regards
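I can't confirm variable expansion inside SQL query endpoints themselves, but for recipes and notebooks a fallback sketch is below: dataiku.get_custom_variables() returns the resolved variables, so an identifier can be substituted into the SQL string before the query is sent. The variable name, table name, and connection name are hypothetical.

```python
import dataiku
from dataiku import SQLExecutor2

# Resolved instance/project variables, as a dict of strings
variables = dataiku.get_custom_variables()

# Substitute a variable as a database object identifier (schema name);
# this works because the string is built before the query is executed
schema_name = variables["target_schema"]  # hypothetical variable
query = f'SELECT COUNT(*) AS n FROM "{schema_name}"."my_table"'

executor = SQLExecutor2(connection="my_sql_connection")  # hypothetical connection
df = executor.query_to_df(query)
print(df)
```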
-
Load from Oracle to Vertica
Hi everyone, How can I load data from an Oracle database to Vertica without dropping the destination table each time I run the process? I tried to use the sync recipe, but each time I run the flow, Dataiku recreates the table in Vertica instead of appending the new rows. I already tried the configurations of free…
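A minimal Python-recipe sketch of the append pattern, assuming the output Vertica dataset has "Append instead of overwrite" enabled in the recipe's inputs/outputs settings; both dataset names are hypothetical:

```python
import dataiku

# Hypothetical input (Oracle) and output (Vertica) datasets of a Python recipe
oracle_ds = dataiku.Dataset("oracle_source")
vertica_ds = dataiku.Dataset("vertica_target")

df = oracle_ds.get_dataframe()

# With "Append instead of overwrite" enabled on the output dataset,
# these rows are added to the existing table rather than the table
# being dropped and recreated on each run
vertica_ds.write_with_schema(df)
```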
-
Using date in DataIKU
Hi, Despite going through the documentation multiple times, I still don't really understand how dates work in DSS. I'm importing a dataset from a connection. Without turning on any of the options in Date & Time handling, this is how the data looks: It says that the data type is string, while in the database itself it is, in…
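For context, DSS only assigns the "date" type to fully parsed timestamps; until then, database dates surface as strings. A minimal pandas sketch of the parse step one would do in a Python recipe (the column name and format are hypothetical; in a visual flow the equivalent is a Prepare recipe "Parse date" step):

```python
import pandas as pd

# Hypothetical string column as it arrives from the connection
df = pd.DataFrame({"order_date": ["2023-01-15", "2023-02-03"]})

# Parse the string into an actual datetime
df["order_date"] = pd.to_datetime(df["order_date"], format="%Y-%m-%d")

print(df.dtypes)  # order_date is now datetime64[ns]
```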
-
DSS visual recipes defaulting to max column length with Redshift tables
Hi everyone, When working with Redshift tables in DSS visual recipes, we noticed that the table creation settings sometimes default to setting certain column lengths to the Redshift max (65,535). In many cases this becomes excessive. For example, in the screenshot below the "brand" column has a length of 65k but most of…
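A hedged sketch of trimming those lengths programmatically via the public API, assuming string columns in the dataset schema carry a maxLength property; the host, API key, project key, dataset name, and the 256 cap are all hypothetical:

```python
import dataikuapi

client = dataikuapi.DSSClient("https://dss.example.com", "API_KEY")  # hypothetical
project = client.get_project("MY_PROJECT")
dataset = project.get_dataset("my_redshift_dataset")

definition = dataset.get_definition()
for column in definition["schema"]["columns"]:
    # Cap string columns at a saner length than the Redshift max
    if column["type"] == "string" and column.get("maxLength", 0) > 256:
        column["maxLength"] = 256

dataset.set_definition(definition)
```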
-
Monthly Partitioning changes partition column value
I am trying to set up monthly partitioning on a date column in my Snowflake database. I have the source table and output dataset set to monthly partitioning. In the middle I have a prepare recipe where I use the time range to get a month (screenshot below); the output of the posting_date field changes from an actual date,…
-
OAUTH authentication possibilities for Python library
Hi, We have an internal library that queries Snowflake. In JupyterLab, users are authenticated using an external browser. Is this possible in Dataiku? If not possible, is there a way for our Python code library to pass the username to the Dataiku Snowflake connection and get back an access token to run queries? Can we access…
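For reference, this is what the JupyterLab flow relies on; a minimal snowflake-connector sketch (the account and user are hypothetical). Note that externalbrowser authentication needs a browser on the machine running the code, which a headless DSS server typically doesn't have:

```python
import snowflake.connector

# "externalbrowser" pops a browser window for SSO -- works on a laptop,
# usually not on a headless DSS server
conn = snowflake.connector.connect(
    account="my_account",         # hypothetical
    user="jane.doe@example.com",  # hypothetical
    authenticator="externalbrowser",
)

cur = conn.cursor()
cur.execute("SELECT CURRENT_USER()")
print(cur.fetchone())
```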
-
Unable to see connections while creating new snowflake dataset
Hello everyone, I am new to Dataiku and need your help ASAP. I have a set of Snowflake connections already created in Dataiku. When I create a new Snowflake dataset in a new project, I am able to see only some of the connections; the required connection is missing, even though I can see it in the Credentials tab of my profile settings.…