-
Automated table definition not creating long-enough varchar fields
Does anyone have an idea why my tables are being created with only varchar(18) when the query produces text longer than that? I'm getting this error: ERROR: Value too long for character type Detail: ----------------------------------------------- error: Value too long for character type code: 8001 context: Value too long for type…
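One common cause is that the output table's varchar lengths are inferred from a sample of the data, so longer values that arrive later overflow the column. A minimal sketch (with hypothetical example values) of measuring how long the column actually needs to be, so you can set an explicit length in the output dataset's schema — note that Redshift counts varchar length in bytes, not characters:

```python
# Hypothetical example values for the problem column
rows = ["short", "a value that is much longer than eighteen chars"]

def needed_varchar_length(values):
    # Redshift varchar(n) is measured in bytes, so measure UTF-8 bytes
    return max(len(v.encode("utf-8")) for v in values)

print(needed_varchar_length(rows))  # longest value, in bytes
```

Multi-byte characters are the usual trap here: a string that looks 18 characters long can need more than 18 bytes.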
-
Change Email to Work Email
Hi, I need to change my email to my work email so the certification will be valid at work. But I signed up with Google sign-in, so I can't change it in Edit Profile. What should I do?
-
How do I reach Sales to ask a pricing question
I've submitted the "Contact Us" form twice over the last two weeks and still haven't heard from anyone. Does the Sales department still exist at Dataiku?
-
install-graphics-export air-gapped
I want to install the graphics export component (install-graphics-export) so I can export my flow or a dashboard to PDF. I'm in an air-gapped environment. When I start the installation, I get the following error message: ./dssadmin install-graphics-export [+] Saving installation log to /data/dataiku/data/run/install.log [+] Checking dependencies +…
-
Visual recipe
Hi, is there a way to exclude the column used to split the dataset in the Split recipe, without using a Prepare step or a precomputed column?
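If the visual Split recipe can't drop the key column on its own, a code-recipe fallback is straightforward. A minimal sketch (the `segment` column name and row values are hypothetical) that splits rows on a column and removes that column from each output:

```python
from collections import defaultdict

# Hypothetical input rows; "segment" is the split column
rows = [
    {"segment": "train", "x": 1},
    {"segment": "test", "x": 2},
    {"segment": "train", "x": 3},
]

splits = defaultdict(list)
for row in rows:
    key = row["segment"]
    # keep every field except the split column
    splits[key].append({k: v for k, v in row.items() if k != "segment"})

print(splits["train"])  # [{'x': 1}, {'x': 3}]
```

Each entry in `splits` would then be written to its own output dataset.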
-
Does a column description written on a dataset contribute to an AI agent's interpretation?
Since column metadata can also be entered in the SQL query tool's Data context information, I was wondering whether, if the descriptions inside the dataset are sufficient on their own, it is fine to leave the SQL query tool's Data context information empty. Also, is there a way to check the queries that the SQL query tool executed?
-
DSS code-env package extraction
Hi all, I am trying to extract the list of all packages installed in each and every code env. Does anyone have an idea how to do this? Basically my objective is to resolve a list of identified vulnerable packages, so I need a way to find or search for a certain package within DSS. Can someone suggest…
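Once you have a pip-freeze-style listing for each code env (for example by running `pip freeze` with that env's Python), matching it against a vulnerability list is easy to script. A minimal sketch, assuming the freeze text and the vulnerable-package names shown are just examples:

```python
def parse_freeze(text):
    # turn "name==version" lines into a {name: version} dict
    pkgs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pkgs[name.lower()] = version
    return pkgs

def find_vulnerable(pkgs, vulnerable_names):
    # case-insensitive lookup of flagged package names
    wanted = (v.lower() for v in vulnerable_names)
    return {n: pkgs[n] for n in wanted if n in pkgs}

freeze = "requests==2.19.0\nnumpy==1.24.0\n"
print(find_vulnerable(parse_freeze(freeze), ["requests"]))  # {'requests': '2.19.0'}
```

Running this per code env gives a per-env report of which flagged packages are present and at what version.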
-
STACK recipe changes the datatype
Hi team, the Stack recipe is behaving strangely: it changes the datatype of the output dataset's columns based on the column values. How can we retain the datatypes of the input datasets?
-
Use first row as column headers/column names
I have an Excel file whose format is somewhat different: I need to skip the first 9 lines and use the 10th line as the column names. I have tried using "Parse next line as column headers", but it is not working for me. Has anyone faced this type of issue? Please let me know how to resolve it. I am pasting the input file format below:
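The skip-then-header logic itself can be checked in a few lines of code. A minimal sketch with the stdlib `csv` module (the file content below is hypothetical; for the actual Excel file the same idea applies, e.g. `pandas.read_excel(path, skiprows=9)` takes the next row as the header):

```python
import csv
import io

# Hypothetical file: 9 preamble lines, then the header, then data
raw = "\n".join(
    [f"junk line {i}" for i in range(1, 10)]
    + ["name,age", "alice,30", "bob,25"]
)

reader = csv.reader(io.StringIO(raw))
for _ in range(9):        # discard the first 9 lines
    next(reader)
header = next(reader)     # the 10th line becomes the column names
records = [dict(zip(header, row)) for row in reader]

print(header)      # ['name', 'age']
print(records[0])  # {'name': 'alice', 'age': '30'}
```

In the visual dataset settings, the equivalent is skipping the leading rows before asking DSS to parse the next line as headers.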
-
Issue with get_connection in Python Endpoint for Snowflake – API Node
Hi all, I'm currently working on a Python endpoint on the Dataiku API Node, where I need to dynamically generate and execute SQL queries against a Snowflake connection. These SQL queries are generated via OpenAI, so I cannot use a predefined SQL endpoint (those require fixed queries). I followed the official Dataiku…
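If the endpoint's managed connection can't be used, one fallback is to connect directly with the `snowflake-connector-python` package and, since the SQL comes from an LLM, gate what is allowed to run. A minimal sketch under those assumptions (connection parameters and the allow-only-SELECT rule are illustrative, not Dataiku's API):

```python
def is_safe_select(sql):
    # allow only a single read-only statement
    stripped = sql.strip().rstrip(";")
    return stripped.lower().startswith("select") and ";" not in stripped

def run_query(sql, conn_params):
    if not is_safe_select(sql):
        raise ValueError("only single SELECT statements are allowed")
    # pip install snowflake-connector-python
    import snowflake.connector
    with snowflake.connector.connect(**conn_params) as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
            return cur.fetchall()

print(is_safe_select("SELECT 1"))      # True
print(is_safe_select("DROP TABLE t"))  # False
```

The validation step matters more than usual here because the queries are generated rather than reviewed; a real deployment would likely also restrict the role and warehouse in `conn_params`.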