Not a question but an answer, as I couldn't find any relevant posts. I solved this problem using SQLExecutor2 in a Python recipe: from dataiku import SQLExecutor2 executor = SQLExecutor2(connection="c…
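For reference, a minimal runnable sketch of the pattern that snippet describes (the connection name and query are placeholder assumptions, since the original code is truncated):

    # Minimal sketch, assuming a Python recipe context;
    # connection name and query are hypothetical placeholders.
    from dataiku import SQLExecutor2

    executor = SQLExecutor2(connection="my_sql_connection")
    # query_to_df runs the statement on that connection and returns a pandas DataFrame
    df = executor.query_to_df("SELECT * FROM my_table")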
Hello, I am using SQLExecutor2 to read a temporary table and write to a Snowflake dataset in a Python recipe. Here is the column data type: {"type":"ARRAY","length":16777216,"byteLength":16777216,…
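For readers hitting the same ARRAY-column issue, a hedged sketch of one possible workaround, assuming the ARRAY values come back as Python lists that can be serialized to JSON strings before the write (connection, table, column, and dataset names are all hypothetical):

    import json
    import dataiku
    from dataiku import SQLExecutor2

    executor = SQLExecutor2(connection="my_snowflake_connection")  # hypothetical connection
    df = executor.query_to_df("SELECT * FROM my_temp_table")       # hypothetical temp table

    # Serialize list values to JSON strings so the column has a uniform string type
    df["array_col"] = df["array_col"].apply(
        lambda v: json.dumps(v) if isinstance(v, list) else v
    )

    output = dataiku.Dataset("my_output_dataset")  # hypothetical output dataset
    output.write_with_schema(df)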
Hi there, the issue I am having is that the DSS engine produces a completely different result than the SQL engine does. Has anyone faced a similar issue? I would appreciate some insight on t…
Using the Python API, I can create an SQL dataset or clear it using the DSSDataset.clear() method, but afterwards I have to manually click "Create Table Now" in the Settings tab of the dataset before…
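For context, the scripted part of that workflow might look like the following sketch with the public dataikuapi client (host, API key, project, and dataset names are placeholder assumptions); the "Create Table Now" click is the step that still seems to require the UI:

    import dataikuapi

    # All identifiers here are hypothetical placeholders
    client = dataikuapi.DSSClient("https://dss.example.com", "MY_API_KEY")
    project = client.get_project("MY_PROJECT")
    dataset = project.get_dataset("my_sql_dataset")

    dataset.clear()  # empties the dataset, as described above
    # At this point the poster reports still having to click "Create Table Now"
    # in the dataset's Settings tab before the table exists again.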
Good day! In the API Designer, we can define connections to use with SQL query endpoints. How do we remap these connections based on deployments to different API nodes (i.e., use a different connection fo…
Good day, is it possible to use variables (instance-level, project-level, or otherwise) in the SQL statements of SQL query endpoints in an API service? And if so, can they be used as database object identifiers?…
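For comparison, inside a Python recipe DSS does expose instance- and project-level variables through dataiku.get_custom_variables(); a hedged sketch of substituting one into a query as an identifier follows (variable and connection names are hypothetical, and whether the API node performs a similar expansion in endpoint SQL is exactly the open question here):

    import dataiku
    from dataiku import SQLExecutor2

    # Recipe/notebook context; variable and connection names are hypothetical
    variables = dataiku.get_custom_variables()   # dict of instance + project variables
    schema = variables["target_schema"]          # hypothetical variable used as an identifier

    executor = SQLExecutor2(connection="my_connection")
    df = executor.query_to_df("SELECT * FROM %s.my_table" % schema)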
Hi everyone, how can I load data from an Oracle database to Vertica without dropping the destination table each time I run the process? I tried to use the sync recipe, but each time I run the f…
Hi, despite going through the documentation multiple times, I still don't really understand how dates work in DSS. I'm importing a dataset from a connection. Without turning on any of the options in Date & …
Hi everyone, when working with Redshift tables in DSS visual recipes, we noticed that the table creation settings sometimes default to setting certain column lengths to the Redshift max (65,000). In m…
I am trying to set up monthly partitioning on a date column in my Snowflake database. I have the source table and output dataset set to monthly partitioning. In the middle I have a prepare recipe where…