If I use Gemini 1.5, the option to upload user files is visible, as you can see below. But there's an error when using it, with or without uploaded files:
File "/data/dataiku/dss_data/web_apps/DS1948/V9u5gZh/python-lib/backend/routes/ws/answer_streaming.py", line 173, in process_and_stream_answer
    for chunk in…
Hi Team, I'm reading data from SharePoint, and the file is named Cost Center_06092024.xlsx. Since the file name contains a date, I partitioned the read as /Cost Center_%M%D%Y.xlsx and, in my prepare recipe, set the option to "Last available" so that the flow only picks up the latest file. I'm trying to create a…
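As a side note on the pattern itself: assuming the date embedded in the file name is month-day-year, the %M%D%Y partition pattern corresponds to `strptime`'s `"%m%d%Y"` format. A minimal sketch (only the file name comes from the question; the parsing logic is illustrative):

```python
from datetime import datetime

# Assumption: the 8 digits in the file name are MMDDYYYY.
filename = "Cost Center_06092024.xlsx"
date_part = filename.split("_")[1].split(".")[0]  # "06092024"
d = datetime.strptime(date_part, "%m%d%Y")
print(d.date())  # 2024-06-09
```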
Hello, I declare a variable List_x in a Dataiku project as a tuple of 3 elements A, B and C: { "List_x" : "('A', 'B', 'C')"} With a SQL query, how could I extract each element of this variable? The goal is to use "List_x" in a CASE WHEN expression, like this: SELECT *, CASE WHEN X = List_x(0) THEN 1 WHEN X = List_x(1) THEN 2 WHEN…
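One possible workaround, sketched in Python rather than SQL: if the variable were stored as a JSON array (e.g. `'["A", "B", "C"]'`) instead of a Python-tuple string, it could be parsed and expanded into the CASE WHEN clause before the query is sent to the database. The variable name comes from the question; the table name and generated-column name are made up for illustration:

```python
import json

# Assumption: List_x is stored as a JSON array string, not "('A', 'B', 'C')".
variables = {"List_x": '["A", "B", "C"]'}
values = json.loads(variables["List_x"])

# Build one WHEN clause per element, numbered from 1.
when_clauses = "\n".join(
    f"WHEN X = '{v}' THEN {i + 1}" for i, v in enumerate(values)
)
query = f"SELECT *,\nCASE\n{when_clauses}\nEND AS x_code\nFROM my_table"
print(query)
```

This sidesteps indexing into the variable from SQL entirely: the list is expanded client-side, and the database only ever sees literal values.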
From what I could see, both seem to have the same functionality when I used them outside the Dataiku web interface. I see documentation for both packages, but they seem to conflict. For example, in this page section, the illustrative code imports dataiku, but just below it, it is written that a…
I have a connection / output recipe that writes to a table in Snowflake. I don't see any option here to choose between overwriting and appending data; where can I set that up? Does the option pop up after I press Run, or is it something I can set up beforehand? Operating system used: Windows
I am using the Pivot Table functionality in the Dashboard. My rows are hospital names, and I am showing counts of some other variables for each hospital. There are 93 distinct hospitals in total. When I put the hospital name under the row section of the pivot table, it shows the names of only 20 hospitals and clubs the…
I have a .txt file in an S3 connection being parsed in Excel style. There are 7 columns. DSS is creating new columns from the actual data rather than the column names, because some entries contain commas. This is odd because I have other data where this is NOT happening. How can I fix this?
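To illustrate why this happens: in a comma-delimited file, a field that itself contains a comma must be quoted, otherwise the parser splits it into extra columns. A minimal sketch with Python's csv module (the column names and sample row are made up; the actual file may use a different quoting convention or separator):

```python
import csv
import io

# A quoted field containing a comma stays in one column;
# unquoted, "Acme, Inc." would split into two.
raw = 'id,name,amount\n1,"Acme, Inc.",100\n'
rows = list(csv.reader(io.StringIO(raw)))
print(rows[1])  # ['1', 'Acme, Inc.', '100']
```

If the other, correctly parsed files either quote such fields or simply never contain embedded commas, that would explain the inconsistent behavior.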
Hi, I'm looking for the way to use "Update local variables on automation project". When I try to use this option with, for example: { "BON": "JOUR" } on the deployer node: Deployments / $projecKey-on-dataiku-automation-2/settings/ I get this error: Invalid format: Cannot read properties of undefined (reading…
import dataiku
import pandas as pd, numpy as np
from dataiku import pandasutils as pdu
import time

def write_random(name: str):
    sample = dataiku.Dataset(name)
    sample.write_schema([{"name": "data", "type": "double"}])
    with sample.get_continuous_writer("source-id-string-dummy") as sample_writer:
        while True:
            val = np.random.rand()…
Hello, I am using Dataiku 12.5.2 and Spark 3.2.2. I am utilizing PySpark in Dataiku. Upon checking the log, I see that it is initialized in the form of "Init: running in flow, JEK port=32929" from dku.spark.context. Is there a way to set the JEK port mentioned here to a fixed port or to allocate it within a specific range?…