I want to clarify the difference between global and local variables. I shared the definition from here. So, what's the point of having local variables if in production I'm only going to use global variables? Aren't local variables redundant?
Hi, I'm using a Dataiku LLM connection to Azure OpenAI and I would like to use OpenAI's built-in web_search capability (the same one you get when calling the Responses API with tools=[{"type": "web_search"}]). I created a code agent but have been unable to use the LangChain framework to invoke web search with OpenAI…
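For reference, a minimal sketch of the request body the OpenAI Responses API expects when the built-in web_search tool is enabled. The model name is a placeholder, and how this call is routed through Dataiku's LLM connection (rather than the raw OpenAI client) is exactly the open question in the post:

```python
# Sketch only: a Responses API request body with the built-in
# web_search tool enabled. "gpt-4.1" is a placeholder model name.
payload = {
    "model": "gpt-4.1",
    "input": "What is new in the latest Dataiku release?",
    "tools": [{"type": "web_search"}],
}

# With the openai SDK this would be passed as keyword arguments to
# client.responses.create(**payload); the unresolved part is invoking
# the same tool through LangChain against a Dataiku LLM connection.
```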
Hi Dataiku Community, we are working on integrating Dataiku DSS (hosted on EC2) with AWS services and would appreciate guidance or documentation references on the recommended approach and authentication practices. From what we know: Dataiku DSS is currently deployed on an EC2 instance (private network); DSS is in a different…
import re
import dataiku
import urllib3
import datetime

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def get_project(client, project_key):
    return client.get_project(project_key)

def get_datasets(client, project_key):
    project = get_project(client, project_key)
    return project.list_datasets()

def…
Hi, I'm unable to get SQL autocomplete to work in Dataiku's SQL Notebook when connected to BigQuery.
Hello, how can I call the Agent Hub / LLM Mesh from outside Dataiku, through the Postman app or Python code? Is there a way to expose it as an API Designer service? I've read about the REST API but don't understand how to configure it for the Agent Hub and how it would be made accessible to external systems. Thanks, Dataiku…
How can I perform pattern matching similar to a SQL join with a LIKE operator in Dataiku, as below:

SELECT Data1.*, Data2.NewName
FROM Data1
INNER JOIN Data2
ON Data1.Cust_Name LIKE CONCAT('%', Data2.Contains, '%')
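If the join has to happen in a Python recipe rather than in SQL, one workaround is a cross join followed by a substring filter. This is a sketch only, assuming both datasets fit in memory as pandas DataFrames; the sample data below is hypothetical:

```python
import pandas as pd

# Hypothetical sample data mirroring Data1 and Data2 from the question.
data1 = pd.DataFrame({"Cust_Name": ["Acme Corp", "Globex Inc"]})
data2 = pd.DataFrame({"Contains": ["Acme", "Globex"],
                      "NewName": ["ACME", "GLOBEX"]})

# Cross join every row of data1 with every row of data2, then keep rows
# where Cust_Name contains the pattern, emulating
# ON Data1.Cust_Name LIKE CONCAT('%', Data2.Contains, '%').
joined = data1.merge(data2, how="cross")
joined = joined[
    joined.apply(lambda r: r["Contains"] in r["Cust_Name"], axis=1)
].drop(columns=["Contains"])
```

Note that a cross join materializes len(data1) * len(data2) rows before filtering, so for large datasets pushing the LIKE join down to the SQL engine is usually preferable.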
Job Summary: We are seeking an experienced Dataiku Platform Architect with strong expertise in managing and supporting the Dataiku ecosystem across multiple environments. The ideal candidate will have hands-on experience in setting up and architecting various Dataiku nodes (Design, Automate, Fleet, Deployer, Govern, API)…
We are looking into how to manage resources on Dataiku Cloud. Is there a way to retrieve resource usage per configuration created under Container execution configurations and visualize it in a dashboard? According to the page below, usage logs seem to be available only at the Kubernetes pod level. Reference | Managing elastic AI compute capacity - Dataiku Knowledge Base
Hello. We have built a Dataiku environment on AWS, with a cluster deployed on EKS. In this environment we are trying to write code in notebooks and PySpark recipes that runs processing on the EKS cluster using PySpark. Within this code we want to start a SparkSession with additional libraries (JAR files) loaded into Spark, but we are stuck on how the JAR files should be placed on the cluster. If anyone knows a good way to deploy JAR files to the cluster, we would appreciate your advice. Thank you. Dataiku version used:…
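For context on the JAR question above, a small sketch of the standard Spark configuration for loading extra libraries. The assumption is that the JARs are staged somewhere reachable by both the driver and the executors (for example an S3 path); the bucket and file name below are hypothetical placeholders:

```python
# Standard Spark configuration key for extra JARs; "spark.jars" takes a
# comma-separated list of JAR locations that Spark distributes to the
# executors. The S3 path is a hypothetical placeholder.
extra_jar_conf = {
    "spark.jars": "s3a://my-bucket/libs/extra-lib.jar",
}

# In a PySpark recipe these options would be applied while building the
# session, before getOrCreate() is called, e.g.:
#   builder = SparkSession.builder
#   for key, value in extra_jar_conf.items():
#       builder = builder.config(key, value)
#   spark = builder.getOrCreate()
```

The key detail is that Spark ships the listed JARs to executors itself, so they do not need to be pre-installed on every EKS node image, as long as the path is readable from the driver.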