-
Want to Create an Application as Recipe for all Zones with Datasets
I have looked up that you need to create a scenario before creating the application as a recipe. However, I am looking to create the entire project as a recipe for an application. Besides needing to adjust columns every now and again, the majority of the schema and recipes will not change for the duration of the project and…
-
Cannot Duplicate a Project - Have a Lot of Zones and Need to Use All Recipes for a New Project
I have a project that has a framework designed as a setup to re-use for other months. However, besides needing to make sure the June input datasets are replaced with July ones, there are also other new datasets being added that were not there in June (example: there are now two additional…
-
Who should be able to deploy bundles into Production Automation node?
A user must be Admin on the Automation node project in order to be able to deploy a bundle there. But Admin permissions in Production seem like too much for an ordinary user to have, given the risk of impacts to production assets. Should deployment then be restricted to a smaller set of users, such as project leads or a…
-
Allow post-join computed columns for columns that begin with underscores
When a column starts with an underscore, it cannot be used in a post-join computed column. For example, this column is defined for every record in the table: However, the preview fails when it is used in a post-join computed column formula: Other columns beginning with underscores that I know are fully defined have the…
-
How can I add data to an existing dataset using a Dataiku DSS API?
Operating system used: Dataiku
-
Same input and output external database table in python recipe
Hi there, I am using an external Snowflake table to capture metrics, say table ABC. During training/retraining, I would like the flow to read the same table ABC before updating the model for predictions, and write the new metrics and the path to the model into the same table ABC. I have written the Python recipe to do this…
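A possible workaround sketch, since DSS does not normally allow the same dataset as both input and output of one recipe: read the table outside the flow dependency system with `ignore_flow=True`, then write back through the declared output. Everything except the table name ABC (the column names, the metric values) is hypothetical here.

```python
import dataiku
import pandas as pd

# Read ABC's current contents outside the flow dependency system,
# because ABC is also this recipe's declared output.
existing = dataiku.Dataset("ABC", ignore_flow=True).get_dataframe()

# ... train/retrain here; the columns below are hypothetical ...
new_row = pd.DataFrame([{"model_path": "/models/v2", "metric": 0.91}])

# Write everything back in one shot, after the read has fully completed.
dataiku.Dataset("ABC").write_with_schema(
    pd.concat([existing, new_row], ignore_index=True)
)
```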
-
Oops: an unexpected error occurred java.lang.IllegalStateException: Expected a double but was BEGIN
Hi, I am working on the project "Advanced Designer Assessment". After modifying the Prepare recipe to add the column "qualifies", when I open the dataset "Online_Retail_Prepared", I get this error message: Oops: an unexpected error occurred java.lang.IllegalStateException: Expected a double but was BEGIN_ARRAY at line 377 column 21 path…
-
Integration with Microsoft Fabric and its OneLake
Hi, Couldn't find anything on an integration with Microsoft Fabric through OneLake yet in the Dataiku docs/release notes. Is this coming soon? As I read the Microsoft docs I understand we can't connect directly via ADLS, only via APIs or SDKs. Thanks in advance, Jonathyan Operating system used: RHEL 8
-
How to overwrite 'standard project variables' via 'Edit project variables' tile in Application?
Already resolved. Operating system used: Windows
-
Dark Mode
Every developer needs a dark mode. A dark theme for the flow, datasets, and recipe configs would go a long way toward making Dataiku fit into workflows that involve many other dark mode tools. Dataiku is definitely very bright when swapping from other tools which operate in dark mode. Extensions like Dark Reader do a pretty…
-
Python recipe random connection aborted errors to managed folder
We have an on-prem DSS installation. I have an SFTP-type folder to read from a remote server. My Python recipe uses this as input. I'm using get_download_stream() to read the files like so: handle = dataiku.Folder('my_folder'); handle.get_download_stream('path/to/file'). Now this is looping through several hundred files and…
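For transient "connection aborted" errors over SFTP, one option is a small retry wrapper around each stream read; a minimal sketch, reusing the folder id from the post, with an arbitrary retry policy that should be tuned:

```python
import time
import dataiku

folder = dataiku.Folder("my_folder")

def read_with_retry(path, attempts=3, backoff=5):
    """Retry transient connection aborts while streaming one file."""
    for i in range(attempts):
        try:
            with folder.get_download_stream(path) as stream:
                return stream.read()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(backoff * (i + 1))  # simple linear backoff

for path in folder.list_paths_in_partition():
    data = read_with_retry(path)
    # ... process data ...
```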
-
[Webapps] Shiny for Python ?
Are there any plans to add Shiny for Python to web application development frameworks? Same question for Django and Flask. Thank you, Alexis
-
Dataiku development settings: CI, VSCode, Remote repository
Hi all, I've read the guide to connect Dataiku DSS to a GitHub remote repository to manage branch creation through duplication of projects, but I have some doubts about its usage: it seems to me that to use it I still need a master project, plus one copy per branch. And this number…
-
How to rate limit number of records processed per second
Hello, I'm new to Dataiku and I have a scenario where I have to send emails. I am using the send_emails plugin to send the emails, but my SMTP server has a limit of 40 mails/second, and I want to ensure that only 30 mails are sent per second. Is there anything that can limit only 30 records being…
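If the plugin itself exposes no throttle, a Python recipe that sends the mails directly can pace itself; a minimal sketch, where `send_one` stands in for whatever actually sends a single mail (e.g. smtplib):

```python
import time

MAX_PER_SECOND = 30          # stay below the 40/s server cap
INTERVAL = 1.0 / MAX_PER_SECOND

def send_throttled(messages, send_one):
    """Send messages at a rate of at most MAX_PER_SECOND per second."""
    for msg in messages:
        start = time.monotonic()
        send_one(msg)        # your actual SMTP send
        elapsed = time.monotonic() - start
        if elapsed < INTERVAL:
            time.sleep(INTERVAL - elapsed)
```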
-
DSS Memory Management Webapp consumption buildup
Hi, we have quite a number of webapp backends running on our production server and this leads to out-of-memory situations. We would like to address this by implementing a backend cleanup routine for long-running webapps. Is there a way to get a timestamp for webapp start? Or is there a better way to go about this…
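Short of a start timestamp, one pragmatic cleanup route is stopping webapp backends wholesale from a scheduled admin script; a sketch against the public API, with a hypothetical host and key, and assuming (worth verifying against your dataikuapi version) that each list_webapps() entry exposes the webapp id:

```python
import dataikuapi

# Hypothetical host and key; run from a scheduled admin scenario or cron.
client = dataikuapi.DSSClient("https://dss.example.com", "ADMIN_API_KEY")

for project_key in client.list_project_keys():
    project = client.get_project(project_key)
    for item in project.list_webapps():
        # Assumption: each entry carries the webapp id as item["id"].
        webapp = project.get_webapp(item["id"])
        webapp.stop_backend()  # frees the backend's memory
```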
-
How Can I Integrate Speakatoo’s Text-to-Speech API with Dataiku for Audio Data Insights?
I’m looking to enhance my Dataiku workflows by integrating Speakatoo’s text-to-speech (TTS) API to turn data insights or alerts into audio. Has anyone tried using a TTS service like Speakatoo within Dataiku for this purpose? I think it could help make data monitoring or reporting more accessible. What challenges should I…
-
Connecting to a sharepoint list and appending new rows
I am trying to connect to a particular SharePoint list of our team, so we can automatically append new entries. However, I am having a very hard time establishing the connection. I am doing this with a manual Python recipe. Can someone help me? Operating system used: Windows
-
How to optimize dataset batch writer for long-running tasks?
We have a compute and memory-intensive Python recipe that we are building. The recipe both reads and writes from/to Snowflake-based datasets. We are running into an issue where the task fails at the end of the code when the context manager closes and the dataset writer attempts to commit the transaction. Solution 1: The…
-
How to generate rows in Dataiku
Hi Team, I want to generate some date rows based on a criterion in Dataiku, like the attached screenshot from Alteryx. This will generate a new column 'Date'. I came across the link below from the community, but no proper solution is given there. Any help would be greatly appreciated. Thanks & Regards, Dil. Operating…
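For reference, the Alteryx Generate Rows pattern can be reproduced in a Python recipe with pandas; a sketch assuming hypothetical start/end date columns, since the screenshot's criterion isn't visible here:

```python
import pandas as pd

# Hypothetical input: one row per record with a start and an end date.
df = pd.DataFrame({"id": [1], "start": ["2024-01-01"], "end": ["2024-01-05"]})

# Emit one row per day between start and end (inclusive).
rows = []
for _, r in df.iterrows():
    for d in pd.date_range(r["start"], r["end"], freq="D"):
        rows.append({"id": r["id"], "Date": d})
out = pd.DataFrame(rows)
```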
-
Is it possible to download a trained model as Python file?
I've gotten confused by the documentation provided. I'm currently working with the trial edition and so far haven't been able to get my trained model as a .py file. I'd be glad if someone helped me.
-
Creating a QABot about Dataiku using "Dataiku Answers"
Hello, I thought it would be very helpful for all users if we could create a QABot about Dataiku using "Dataiku Answers." However, obtaining text data related to documentation or knowledge is not easy. According to Dataiku's website terms of service, web scraping is prohibited, so I gave up on preparing data through…
-
Properly implement support for Building Flow Zones in Scenarios and the Dataiku API
In Dataiku v12.0.0 a new feature was added that allows users to build flow zones from the flow UI: https://knowledge.dataiku.com/latest/data-preparation/pipelines/tutorial-build-modes.html#build-a-flow-zone This works well; however, this capability was never properly added to Scenarios or to the Dataiku API. In 12.1.0…
-
HTTP status codes returned by DSS API node/service
Two questions: 1. Will an exception, raised during execution of API service code, always result in a response with a 500 HTTP status code? 2. Aside from that, errors can occur outside of API service code execution (e.g. service is unavailable). Is there an overview available of which situations map to which HTTP status…
-
Write the schema of a pandas dataframe using the "dataikuapi" variant
Hello, Dataiku team. How can I write the schema of a pandas dataframe using the "dataikuapi" variant? I'm doing it this way because I want to send information from the API node to a table in my project in the layout node. I want to write the schema like when using the dataiku library: df_final.write_with_schema(df). But,…
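A sketch of one way this might look with dataikuapi, mapping pandas dtypes to DSS storage types by hand; the host, key, project and dataset names are hypothetical, and df_final stands in for the dataframe from the post:

```python
import dataikuapi
import pandas as pd

client = dataikuapi.DSSClient("https://dss.example.com", "API_KEY")  # hypothetical
dataset = client.get_project("MY_PROJECT").get_dataset("my_output")

df_final = pd.DataFrame({"name": ["a"], "score": [0.5]})  # stand-in dataframe

# Minimal pandas-dtype -> DSS-storage-type mapping; extend as needed.
TYPE_MAP = {"int64": "bigint", "float64": "double", "bool": "boolean",
            "datetime64[ns]": "date", "object": "string"}

schema = {"columns": [{"name": col, "type": TYPE_MAP.get(str(dtype), "string")}
                      for col, dtype in df_final.dtypes.items()]}
dataset.set_schema(schema)
# Note: set_schema only writes the schema; the rows themselves still
# have to be loaded separately (e.g. via a SQL executor or file upload).
```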
-
How can I override k8s resource requests/limits on a recipe-by-recipe basis?
We have several containerized execution configurations for a variety of workload types. There have been instances where we would like to override the resource limits/requests at a recipe-level. I have so far attempted to set the memory limit via a property (recipe settings → advanced → variables & properties → properties)…
-
Remote references can't be fetched
Hi all, Getting the following error when specifying the Git remote in the libraries (Import from Git). Please assist me with this matter. Remote references can't be fetched Branches and tags could not be fetched from remote Git, caused by: IOException: Process failure, caused by: IOException: Process execution failed…
-
Renaming a saved model using Dataiku Python API
How do I rename a saved model using the API?
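A heavily hedged sketch: assuming DSSSavedModel.get_settings() exposes the raw model definition in your dataikuapi version (worth verifying), the displayed name may be editable like this; host, key, project and model ids are hypothetical:

```python
import dataikuapi

client = dataikuapi.DSSClient("https://dss.example.com", "API_KEY")  # hypothetical
sm = client.get_project("MY_PROJECT").get_saved_model("my_saved_model_id")

# Assumption: the raw settings dict carries the display name under "name".
settings = sm.get_settings()
settings.get_raw()["name"] = "My renamed model"
settings.save()
```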
-
How to create sync recipe using create_recipe or new_recipe?
I want to create a sync recipe using a Python script. I found I can use the create_recipe or new_recipe functions, but I am facing this error. The output dataset should be in MS SQL Server. How do I do this? Operating system used: Windows
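A sketch of the builder pattern behind new_recipe, under the assumption that the MS SQL Server connection is named "mssql_conn" on your instance (older dataikuapi versions may expose build() instead of create()); host, key and dataset names are hypothetical:

```python
import dataikuapi

client = dataikuapi.DSSClient("https://dss.example.com", "API_KEY")  # hypothetical
project = client.get_project("MY_PROJECT")

# new_recipe returns a creator; with_new_output creates the output
# dataset on the target connection at the same time as the recipe.
builder = project.new_recipe("sync")
builder.with_input("input_dataset")
builder.with_new_output("output_dataset", "mssql_conn")
recipe = builder.create()
```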
-
What is the certification cost for Code Designer, Developer and ML Practitioner?
-
Data behavior when importing data from a DB connected to Dataiku
When Snowflake and Dataiku are connected and data is imported, does Dataiku physically import the data? Thank you in advance.
-
Remapping Connection (under Activation Settings) in Bundle Management using Dataiku Python API
I am trying to remap/update an already existing Blob container connection name. I have created a Bundle on the Design node and imported & activated it on the Automation node. Both nodes point to separate Azure Blob Containers. Unable to figure out if the connection settings have to be updated before the creation of the bundle OR…
-
Possible lazy loading bug
I'm running version 13.0.3. I've been mass importing some tables and at one point you get to a screen where the names of the datasets are shown. I then want to change those one by one and I navigate by using tab. I've noticed that the navigation goes wrong after some tabs and I have to scroll to where I was and place my…
-
Possible flow copy bug
I've been copying a few flows within a project, reusing them of sorts. When you copy to the same project you get a screen where you can rename the datasets (or it will add the _1 suffix). I've noticed some recipes seem to be missing in these lists. I haven't tested extensively but I've seen missing join recipes, filter…
-
How to do Nested IF conditions in Dataiku
Hi Team, I have a requirement like the one below to do a nested IF condition. Let me know how to achieve this in Dataiku; I searched a few options but couldn't fully resolve it. IF [Created Month] = [Rprt_Month] then 60 + (if [No. of buyer]>=6 then ([No. of buyer] -5) else 0 endif)*15 else (IF [No. of buyer changed]>0…
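If the Dataiku formula language gets unwieldy, the same logic can live in a Python recipe; a sketch with numpy, reusing the column names from the formula (the outer else-branch is truncated in the post, so it is left as a placeholder):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Created Month": [1], "Rprt_Month": [1],
                   "No. of buyer": [8], "No. of buyer changed": [0]})

# Inner IF: each buyer beyond the 5th adds 15.
extra = np.where(df["No. of buyer"] >= 6, df["No. of buyer"] - 5, 0)
df["result"] = np.where(
    df["Created Month"] == df["Rprt_Month"],
    60 + extra * 15,
    np.nan,  # outer else-branch is truncated in the post; fill it in here
)
```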
-
Can a data quality soft check affect a scenario status?
I would like to alert stakeholders when a soft data quality check has failed. The solution I'm currently developing involves updating the scenario status to a warning state when the soft check has failed. However, it does not seem like a failed soft check affects the greater scenario status. Is there a way to do this that…
-
Kernel not starting (Jupyter Notebook Python)
I see the above 'kernel starting' message when I open my Jupyter notebook in the Python code recipe, but the kernel isn't starting. After a while, I get the following error message: I tried interrupting, restarting, reconnecting, changing the kernel etc. but it doesn't solve the issue. I have opened other code recipes in the…
-
How to append dataframe in existing output dataset
Hello experts, In Dataiku v12.3.0, I was trying to append a dataframe using write_dataframe() to an existing dataset (with the same schema). But it always overwrites with the last dataframe, even though the dataset spec is configured like: dataset.spec_item["appendMode"] = True. The dataset is classified as output so it doesn't let me…
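Two avenues worth trying: the recipe's Inputs/Outputs tab has an "Append instead of overwrite" option for the output, and failing that, an explicit read-concat-write can emulate appending. A sketch of the latter, with a hypothetical dataset name:

```python
import dataiku
import pandas as pd

new_df = pd.DataFrame({"col": [1]})  # stand-in for this run's rows

# Read the current rows outside the flow dependency system, then write
# the concatenation back through the normal output handle.
current = dataiku.Dataset("my_output", ignore_flow=True).get_dataframe()
dataiku.Dataset("my_output").write_with_schema(
    pd.concat([current, new_df], ignore_index=True)
)
```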
-
Trigger a Recipe and its Downstream Recipes/Datasets using Python APIs
I require help in triggering a recipe and its downstream recipes & datasets until the end of the flow.
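DSS builds are pull-based, so rather than pushing from a recipe you build the flow's terminal dataset with a recursive job type, and DSS runs every upstream recipe it needs, including the one you want to trigger. A sketch with hypothetical host, key and dataset names:

```python
import dataikuapi

client = dataikuapi.DSSClient("https://dss.example.com", "API_KEY")  # hypothetical
project = client.get_project("MY_PROJECT")

# Recursively (re)build everything upstream of the flow's last dataset,
# which covers the target recipe and the whole chain after it.
job = project.get_dataset("final_dataset").build(
    job_type="RECURSIVE_FORCED_BUILD"
)
print(job.get_status())
```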
-
Add/Delete rows button for VisualEdit Plugin
Hi, I am using the visual edit plugin as a solution for editable datasets. Lots of our projects at Convex are using editable datasets in our production environment, and we have been looking for a solution for a while now that allows users to edit datasets without needing write permissions. The visual edit plugin is really…
-
Remapping Connection under Activation Settings of Bundle Management using Dataiku Python API
I am trying to remap/update an already existing Blob container connection name (update Design node Blob name → Automation node Blob name). I have created a Bundle on the Design node and imported & activated it on the Automation node. Both nodes point to separate Azure Blob Containers. Unable to figure out if the settings have to be…
-
Govern node jointly used with API node and/or Automation node
Hi everyone, My setup is the following: * Design Node * Deployer Node * Automation Node * API Node * Govern Node. Each node is a separate instance (e.g. the Deployer node is remote to the Design node). I recently set up a Govern node with which I track all bundles deployed on the Automation node, as well as all APIs deployed…
-
User based data filtering in Dash webapp
In a Dash webapp that requires Dataiku authentication, we would like to filter displayed data based on the webapp user. It is possible to retrieve the authenticated user in the webapp: dataiku.api_client().get_auth_info_from_browser_headers(request_headers). My question is how to best propagate the user ID to the backend so it…
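One pattern worth testing: Dash callbacks run inside a Flask request, so the headers can be read per-interaction rather than once at backend startup. A sketch assuming the standard DSS Dash template (where `app` is predefined) and hypothetical component ids, dataset and column names:

```python
import dataiku
from dash import Input, Output
from flask import request

# `app` comes from the DSS Dash webapp template.
@app.callback(Output("table", "data"), Input("refresh", "n_clicks"))
def load_user_rows(_):
    # Resolve the caller on every interaction, not once at startup.
    auth = dataiku.api_client().get_auth_info_from_browser_headers(request.headers)
    user = auth["authIdentifier"]
    df = dataiku.Dataset("my_rows").get_dataframe()
    return df[df["owner"] == user].to_dict("records")
```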
-
change model version
Hello, I have trained and created a model in the flow. I have since trained new versions of the model, but I can't find a way to change the version in the flow (the active model is always the first one trained). Thanks. Operating system used: Linux
-
Github Configuration within Dataiku
Hi Team, Do you have any recommendations on how to clone our Git repo within Dataiku? I tried cloning within our project and had trouble retrieving that code. Is it necessary to have the GitHub plugin installed by default? Thanks!
-
Using darts python library to create a custom Saved Model
I've followed the tutorial here: Importing serialized scikit-learn pipelines as Saved Models for MLOps - Dataiku Developer Guide and I've been able to develop a model using the darts==0.30.0 library, having wrapped it in the standard scikit-learn pipeline. My issue is with the very last part of step 3 of this tutorial…
-
Where is the add output option when using the filter tool by formula?
-
Jupyter notebook kernel pip
Hello Dataiku, I am trying to install packages on a new kernel that I have built using the "python env" code environment option on the administration page. However, when I try to install the package, it still clashes with the machine's default Python packages. This is also evident when I execute `!pip list` and check…
-
Weekly update of an uploaded CSV file
Hello, every week I have a scenario that takes into account an uploaded CSV file, which is also uploaded weekly. So I have an uploaded dataset called contrat_fraude_verif, on which a flow starts (see below). What is the best practice to keep the number of manual operations to a minimum when integrating…
-
DSS engine error : while joining Database Engine and Spark Configuration Issue
When I attempt to join datasets using the DSS engine, I encounter an error stating that the recipe cannot utilize the in-database engine and will default to the slower DSS engine instead. Additionally, it warns that the 'national_joined' dataset is not a SQL table dataset. If I switch to the Spark engine, I receive a…
-
Is there any limit to the number of projects within a subscription?
Is there any upper limit to the following within a subscription? Number of projects; number of workspaces.
-
Why are quick model options not appearing in my instance?
-
How can I use the as_langchain_chat_model() method to get the langchain compatible llm?
Hello, I am a new user of LLM Mesh and I am experimenting with creating a multi-agent workflow within Dataiku. The documentation specifies that there is a way to get a langchain-compatible llm by using the as_langchain_chat_model() method (https://developer.dataiku.com/latest/api-reference/python/llm-mesh.html). However,…
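For reference, the documented pattern looks roughly like this sketch; the LLM id string is hypothetical and should be copied from your LLM Mesh connection settings:

```python
import dataiku

LLM_ID = "openai:my-connection:gpt-4o"  # hypothetical; copy from LLM Mesh

project = dataiku.api_client().get_default_project()
llm = project.get_llm(LLM_ID)

# as_langchain_chat_model() returns a langchain chat model, usable
# anywhere langchain expects one (agents, chains, ...).
chat = llm.as_langchain_chat_model()
response = chat.invoke("Summarize LLM Mesh in one sentence.")
print(response.content)
```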
-
Can we obfuscate plugin code?
Obfuscating plugin code written in Python; the code should still run on the Dataiku platform. Operating system used: all
-
Dataiku failed to load data from Snowflake (large dataset)
Exception: Reading dataset failed: failed to read data from table, caused by: SnowflakeSQLLoggedException: JDBC driver internal error: Timeout waiting for the download of #chunk0 (Total chunks: 79)
-
git - clear notebooks before commit
Several times we've encountered projects that could not be exported due to "git saturation." From memory, I believe the export limit for a project occurs when version control exceeds 2 GB (but I think this limit has recently been raised). After investigating, we found that this issue was caused by committing notebooks,…
-
Save ML Lib pipeline model in pyspark recipe to hdfs managed folder without using local file system
I can't use a Dataiku Lab feature to train our model for various reasons, and I need to do it in a pyspark recipe (spark submit). I am training an ML Lib GBTRegressor. Once the pipeline model is trained, I would like to save it. I have no access to the local filesystem (our IT policies). I also don't have access to hdfs…
-
I'm somewhat embarrassed as this should be basic and incredibly simple but ...
I'm working with CSV files and developed a couple of recipes with one dataset until they did what I wanted. I assumed that replacing the development dataset would be dead simple: I would just click to replace the old dataset, specify the new CSV file, and run the recipes with the new data using the UI. The…
-
Create dataset based on criteria
I have a folder of 40k+ images, and I have created a separate dataset from multiple sources that resulted in a list of image names that I want to keep, plus some key attributes. How can I create a new image folder from my big image folder that only contains the images with names that match a column in one of my prepared…
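A sketch of one way to do this in a Python recipe, streaming matching files between two managed folders; the folder ids, dataset name, and exact name-matching rule are hypothetical:

```python
import dataiku

# Names to keep, from the prepared dataset (hypothetical names).
keep = set(dataiku.Dataset("images_to_keep").get_dataframe()["image_name"])

src = dataiku.Folder("all_images")
dst = dataiku.Folder("filtered_images")

for path in src.list_paths_in_partition():
    # Paths start with "/"; adjust the matching to your naming scheme.
    if path.lstrip("/") in keep:
        with src.get_download_stream(path) as stream:
            dst.upload_stream(path, stream)
```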
-
Improve Git configurations in general.
Hello, I believe it could be interesting to improve the interface for configuring the security of Dataiku nodes, both on the visual side and to have more config parameters. A few examples: at the interface level, being able to choose groups from the list available on the instance; being able to manage segregation methods…
-
Maintain case of SQL table name when creating SQL datasets
Currently, when a SQL dataset is created, the name of the associated SQL table is set to PROJECTKEY_tablename regardless of the case of the SQL dataset name. It would be great if either the case of the dataset name was maintained in the SQL table name (so dataset ABC would result in a SQL table name of PROJECTKEY_ABC…
-
Test Amazon S3 connection
I set up an Amazon S3 connection to a MinIO server using an HTTP endpoint and the path-style option. When I click on the TEST button, the following error appears: Connection failed: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 17EC370F655CA6AA; S3 Extended Request ID:…
-
Question about the install path of Dataiku
Due to the space in my laptop username (Zhao Guanghao), I cannot run Dataiku correctly. Meanwhile, the folder name under 'Users' on the C drive can't be modified; once you rename it directly, the system cannot identify the account and it will break. Besides, Dataiku can only be installed in the path 'C…
-
Seeking Examples for Sending Emails via Shell Channel in DSS Settings
Hi everyone. I need a way to send emails using the 'Shell channel' under 'Channel' in 'Notification emails' in DSS settings. In a closed network environment, it is difficult to use notification emails or scenario reports because we use our own mail system rather than SMTP. Currently, notification emails appear to be available…
-
Dataiku flow documentation
From the Flow documentation, there are values for Datasets (total size and record count). How do I add more datasets metrics such as Min, Max, Std Dev, etc into the documentation? Hope that someone can share the specific steps. If there are samples, it will be much appreciated.
-
AWS EC2 instance change inquiry
Hello, I am currently using Dataiku on an m5.4xlarge instance on EC2 on AWS. I need to train a deep learning model (YOLOv8n), so I'm going to use a g4dn instance. Is it possible to use a g4dn instance only during the training process and use an m5.4xlarge instance for other tasks such as preprocessing or model evaluation?…
-
dataiku internal client not compatible with Python 3.12
Hi Dataiku Guru, We use the dataiku internal client outside of Dataiku in a Python script, but in Python 3.12, importing the dataiku module raises an error due to the removal of the imp module in Python 3.12 (https://github.com/python/cpython/issues/98040). dataiku/base/utils.py still uses imp. When will the…
-
Creating a data extract with a SQL and passing a filter parameter
I am trying to create a data extract from Snowflake. I have selected the connection and written my SQL. I want this SQL to be available to multiple users so that they can apply filters to the dataset to retrieve information. For example, in my dataset I have a "COUNTRY" column. I want a way for users to use the…
-
MongoDB connection
I was wondering if anyone could help me with the process of connecting a MongoDB database inside a Dataiku project? Operating system used: Windows
-
Calculate a single metric/check via the Dataiku API
Hi, It is currently not possible to calculate a single metric or check via the Dataiku API while this is possible via the GUI. The following APIs exists: dataset.compute_metrics() dataset.run_checks() but they will calculate all enabled metrics/checks which may take a lot of time. So this idea is to provide an API to allow…
-
I am not able to install dataiku on my MacOS
Please look into this and let me know how I can help this person with the installation of Dataiku. Operating system used: Mac OS Version 10.12.6
-
Dataiku DSS start failed : The Dataiku DSS installer closed with code 1
Hi everyone, I just noticed there was an update for the latest Dataiku release a couple of days back, so I updated through the launcher. But once it finished, it keeps giving me the following error: Dataiku DSS start failed: The Dataiku DSS installer closed with code 1. Can anyone point out how to roll back to the…
-
How can I replace a dataset created from a csv?
I have uploaded a CSV and stored it in the filesystem_folders. I have built several recipes from this dataset. I have now received an updated version of the CSV, but cannot figure out how to upload it and overwrite the original dataset. It seems to require I create a new dataset. If I do create a new dataset, there doesn't…
-
Data Masking
Hi, I have a general question about data masking in Dataiku. How well can data masking be implemented, and are there tools for this out of the box? Especially static masking as well as deterministic masking. Thank you. Best regards, Mic
-
How to monitor automatically the status of the API services deployed on one node ?
Today, we have several API services deployed in a production environment used by different IT applications. We would like to monitor the state of these services: are they active and OK? Do you know a way to automate this check and be notified in case of issue? If nothing has been designed for this purpose, I was…
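One automatable approach is a scheduled script that fires a known-good test record at each endpoint and alerts on failure; a sketch using the API node's predict URL pattern, with hypothetical host, service/endpoint ids and payload:

```python
import requests

BASE = "https://apinode.example.com:12000"  # hypothetical API node URL
SERVICES = {
    "churn_service": ("churn_endpoint", {"features": {"age": 42}}),
}

for service, (endpoint, record) in SERVICES.items():
    url = f"{BASE}/public/api/v1/{service}/{endpoint}/predict"
    try:
        ok = requests.post(url, json=record, timeout=10).status_code == 200
    except requests.RequestException:
        ok = False
    if not ok:
        pass  # notify here: mail, Slack webhook, scenario reporter, ...
```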
-
Any way for Streamlit Code Studios to share code for multi-person development?
Hi, We have a project with multiple developers that want to write a Streamlit web-app using code studios. However, it seems that only the person who created the code studio can view the code. Is there any way around this behavior? If this is how Code Studios work, I fail to see how they are a good solution, for frankly…
-
Schedule export functionality in flow
I am exporting a dataset to Power BI and would like to schedule the export on a daily basis such that the latest data gets updated in Power BI. How can I achieve that?
-
Support image segmentation in labelling tasks and Visual ML
I have a couple of use cases where I need to train image (instance) segmentation models (as opposed to predicting bounding boxes in object detection). I'd love for the ML labelling to support image segmentation approaches. For example using SAM (Segment Anything Model) to pre-segment images which can then be annotated by…
-
How to stop a failing scenario
Hello, not finding a solution, I turn to you. I have two scenarios: the first one builds a flow (no problem here); the second one consists of two steps. The first step increments "selected_MOIS" (example: 202201), then the second step checks the following condition in order to launch the…
-
Concatenating numbers as strings in a formula
I am trying to merge year and month as follows: Year: 2023, Month: 12, Result: '202312'. Year and month are currently numbers. When I use concat I get '2023.012.0'; when I try strval I get "". Below is some testing I performed with the year variable. Any advice? Thanks!
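The '.0' comes from the values being stored as doubles. In a Python recipe the cast and zero-padding are explicit; a pandas sketch (column names assumed from the post):

```python
import pandas as pd

df = pd.DataFrame({"Year": [2023.0], "Month": [12.0]})

# Cast doubles to int first (2023.0 -> "2023"), then zero-pad the
# month so June becomes "06" rather than "6".
df["period"] = (
    df["Year"].astype(int).astype(str)
    + df["Month"].astype(int).astype(str).str.zfill(2)
)
# -> "202312"
```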
-
RAG LLM for multiple datasets
Greetings, While working with the embedding recipe, we faced a limitation: we have two datasets we want to apply RAG on. How can we apply the knowledge bank to them specifically? Regards
-
Inclusion rules of files in path
Hello, how do we define the inclusion rules for files with the 'Glob' and 'Path in dataset' options? For example, I want all the paths including the string 'ODNA'. Thanks
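Assuming the Glob option follows standard glob semantics (worth verifying on a small folder first), a pattern like `*ODNA*` matches paths at the root containing the string, and `**/*ODNA*` additionally descends into subdirectories.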
-
Disabling input fields in plugin UI using parameters
I do see "visibilityCondition" to show/hide a parameter depending on a condition, but is there an option to enable or disable input fields or buttons based on user interactions, without using a custom UI?
-
Application as Recipe Inputs are Broken (or insanely obtuse to use)
I have a project that I built out to be an Application-As-A-Recipe to upload a Dataiku dataset as a file to our API. I will refer to my Application-as-a-Recipe as my "child process" for brevity's sake. Another Project calls this recipe within its flow. The child process has a scenario to build out all datasets and…
-
How to get newly appended rows in sync recipe ?
Hello, this is not a huge problem but I'm sure there is a way to do this properly. I'm syncing rows that I append to an SQL table. The "id" column of this table is auto-incremented, and thus before syncing, the corresponding row is set to null (in order for it to have a correct id once in the SQL table). In the case "id"…
-
code environment for python helper code in dataiku application
I'm using a Dataiku application. I want to know if there is a way to set the code environment for Python helper code. I'm trying to import a library for a functionality in the app but it gives "ModuleNotFoundError". So I was wondering if there is a way I can use a code environment for Python helper code to use libraries of…
-
Displaying decimal as percentage
I am summarizing the results of testing for a disease. I would like to show the positivity rate by year. In order to display the positivity rate as a percentage, I need to define it as text and format it as a percentage, adding % etc. However, if I use this approach I cannot plot the positivity rate by year as it is a text…
-
Dash Web App callback issue
I am currently building a web app that uses Dash to allow users to input data and store it in tables. The app works, but when I add a callback for a specific use case the web app just shows a blank page; no errors or logs are output, the page is just white. The purpose of the two callbacks is to populate one…
-
Support for 2-way partial dependence plots
I'd love to see support for 2-way partial dependence plots in model summary reports to get insights into the interaction of 2 features on their model impact. This would give some deeper insight into feature behavior in the model at hand. See here under 4.1.1 for the sklearn implementation 4.1. Partial Dependence and…
-
Architecture diagram
Team, Can someone please provide a block diagram of the Dataiku architecture? Thanks, Bhushan
-
Creating a Web App with React Frontend
Hi Community, Does anyone know if it is possible or even recommended to use React for the frontend of a web app in DSS? If so, can you please share how you accomplished setting this up with the Flask backend? Thank You!
-
Weird results when I run a scenario ascending or descending, when looking back
I have data for Oct, Nov and Dec. I have to look back at the prior month to get the beginning-of-period values. I have the scenario set up so that it builds the inputs for the current and prior months. (If I run December it builds December and November.) I have Python code that looks back and gets the BOP and EOP values…
-
Scenario fails when using Application as Recipe in a project
I created an “Application as a Recipe” in which the flow reads a user input dataset and filters the data accordingly. I have written a Python script to handle this task. While the workflow runs as expected and produces the required output file, I’m encountering an issue when using this application as a recipe in a project…
-
How can I save a fine-tuned HuggingFace model to a Dataiku Managed Folder?
Hello, I am having an issue with saving a HuggingFace model that I am fine-tuning. The default way of saving HF models is to use: model.save_pretrained("/path/to/directory"). However, as far as I can tell, this does not work in Dataiku. Even after I use the following code to get the path: folder =…
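One workaround sketch: save_pretrained() needs a real local filesystem path, so write to a temporary directory first and then stream each file into the managed folder. The folder id and target subpath are hypothetical; `model` and `tokenizer` are your fine-tuned HF objects:

```python
import os
import tempfile
import dataiku

folder = dataiku.Folder("models")  # hypothetical managed folder id

with tempfile.TemporaryDirectory() as tmp:
    model.save_pretrained(tmp)       # your fine-tuned HF model
    tokenizer.save_pretrained(tmp)   # if applicable

    # Upload every produced file, preserving the relative layout.
    for root, _, files in os.walk(tmp):
        for name in files:
            local = os.path.join(root, name)
            rel = os.path.relpath(local, tmp)
            folder.upload_file("/fine_tuned_model/" + rel, local)
```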
-
Is there a way to check if Python code is run in a recipe vs a notebook?
I recently bumped into an issue where my Python code was not executed the same way whether it was run from a notebook or its corresponding recipe*. I eventually used the following function, but I was wondering if there is a native function in Dataiku to detect the running environment? def in_ipynb(): try: get_ipython()…
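For reference, the usual idiom behind that truncated snippet relies on get_ipython() only being defined under an IPython kernel; a minimal, self-contained version:

```python
def in_ipynb():
    """True under a Jupyter/IPython kernel, False in a plain recipe run."""
    try:
        get_ipython()  # only defined when an IPython kernel is active
        return True
    except NameError:
        return False
```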
-
Recommended usage of Schema Description
Hello, I recently found out you can make a schema description to note some details about specific columns. This seems very useful but I find it hard to create legible descriptions as they only show what fits in the column width: This makes it hard to read unless you go into the settings to see what exactly is written there…
-
Copy the zone recipes into another zone with new datasets
Hello, I have a zone with recipes I want to copy into a new zone. The new zone has the same format but a different dataset (February, whereas the original zone was for January). I attempted to copy the zone, but all it did was copy the zone into the same zone with the same dataset. I…
-
Error during Dataiku DSS installation
The error appears in the screenshot attached below. I have also provided the error log. Operating system used: Windows 10 Pro
-
changing python code env
I am unable to change my Python code env. I was creating a plugin and am unable to test it because every time I run it, DSS uses the default environment even if I explicitly select my environment which has Biopython installed. Any help? [2024/08/16-01:57:09.508] [ActivityExecutor-39] [INFO] [dku] running…
-
Get the Index ID based on Max Value
Hi Team, I'm trying to calculate the index of the maximum value from a list of columns. Attached is the file for reference. As an example, for the 1st record, Col_5 has the maximum value and I want to assign that index number. Note: This is something similar to the MaxIDX function in Alteryx. Operating system used: Windows
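A pandas equivalent of Alteryx's MaxIDX for a Python recipe; the column names are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"Col_1": [1, 9], "Col_2": [5, 2], "Col_5": [7, 0]})
value_cols = ["Col_1", "Col_2", "Col_5"]

# idxmax(axis=1) gives the column label of each row's maximum;
# map labels to 1-based positions for a numeric index.
df["max_col"] = df[value_cols].idxmax(axis=1)
df["max_idx"] = df["max_col"].map({c: i + 1 for i, c in enumerate(value_cols)})
```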
-
Python script to test an API Service Endpoint by launching a remote test query
Is it possible to call API service endpoint queries on a Design node instance with the Python API client? I'd like to be able to run a Python script when evaluating API services that would briefly execute an adapted test query to check the health of the service before pushing it to the Deployer. Best, Randy