I have Dataiku set up in a virtual machine using KubeVirt in Kubernetes. This means I am unable to open all ports between 1024-65536, since the KubeVirt VM is behind a Kubernetes Service, which doesn't allow port ranges. Does anybody have an idea how I can use a small interval of ports (around 10) for container ←> DSS…
Hi, We are developing using a single Design node, but we want to select two or more Remote Deployers to which we will deploy our projects and APIs. The reason is that one is a Deployer in the same subnet as the Design, and the other or more are Deployers in different VPCs. We want to treat the local Deployer as for…
Hi, We would like to create technical users for each production use-case. These users should only have permission to run jobs related to their respective use-case, meaning we would require one technical user per use-case. However, we have dozens of use-cases in production, and creating a separate user for each one would…
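Creating dozens of service accounts by hand is exactly what the public API client avoids. A minimal sketch, assuming a reachable DSS host and an admin API key (both hypothetical here), that derives one technical user per use-case; the actual job permissions would then be granted to the per-use-case group, not the user:

```python
try:
    import dataikuapi  # pip install dataiku-api-client; optional at import time
except ImportError:
    dataikuapi = None


def tech_user_login(use_case: str) -> str:
    """Derive a deterministic service-account login from a use-case name."""
    return "svc_" + use_case.strip().lower().replace(" ", "_")


def create_tech_user(client, use_case: str, group: str):
    """Create one technical user belonging only to its use-case group.

    create_user() is part of dataikuapi's DSSClient; in real code the
    password should come from a secret store, not a literal.
    """
    return client.create_user(
        tech_user_login(use_case),
        password="CHANGE_ME",
        display_name="Service account for " + use_case,
        groups=[group],
    )


if __name__ == "__main__" and dataikuapi is not None:
    # Hypothetical host and API key -- replace with your own.
    client = dataikuapi.DSSClient("https://dss.example.com:11200", "ADMIN_API_KEY")
    create_tech_user(client, "Churn Scoring", "grp_churn_scoring")
```

Looping `create_tech_user` over a list of use-cases makes the dozens-of-users problem a few lines of configuration rather than manual work.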
I have a shared drive where my input files are located. I want to extract data from those files dynamically, e.g. date-wise. I need help making a connection.
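Once the share is reachable (e.g. mounted and exposed through a files-based connection), the date-wise selection itself is easy to script. A sketch, where the file layout and naming convention are assumptions:

```python
from pathlib import Path


def files_for_date(root, date_str: str, pattern: str = "*.csv"):
    """Return files under root whose name contains the given date string.

    Assumes names like sales_2024-05-01.csv; adjust `pattern` and the
    matching rule to your actual layout.
    """
    return sorted(p for p in Path(root).glob(pattern) if date_str in p.name)
```

In Dataiku, this kind of logic fits naturally inside a Python recipe that reads from the connection pointing at the share, with the date passed in as a project variable.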
Any roadmap on when we can leverage Snowflake containers to execute models, instead of running locally or in AKS clusters? Operating system used: Linux
Hello, We automated the process of setting up many connections in DSS using Dataiku's API. However, when upgrading to the latest DSS version, we noticed that some connections no longer work because their configuration has changed. So, as we are adapting our configuration as code, I am looking for a way to test a…
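One way to smoke-test connections-as-code after an upgrade is to pull each connection's live definition and diff it against the committed config. A sketch, assuming a hypothetical host, API key, and expected-config dict; `get_connection(...).get_definition()` is the dataikuapi call for reading a connection's stored definition:

```python
try:
    import dataikuapi  # pip install dataiku-api-client; optional at import time
except ImportError:
    dataikuapi = None


def diff_params(expected: dict, actual: dict) -> dict:
    """Keys whose values differ between committed config and live DSS."""
    keys = set(expected) | set(actual)
    return {k: (expected.get(k), actual.get(k))
            for k in keys if expected.get(k) != actual.get(k)}


def check_connections(client, expected_configs: dict) -> dict:
    """Compare each connection's live 'params' block against config-as-code.

    Returns {connection_name: {param: (expected, live)}} for drifted entries.
    """
    report = {}
    for name, expected in expected_configs.items():
        live = client.get_connection(name).get_definition().get("params", {})
        drift = diff_params(expected, live)
        if drift:
            report[name] = drift
    return report


if __name__ == "__main__" and dataikuapi is not None:
    # Hypothetical host, key, and expected config -- replace with your own.
    client = dataikuapi.DSSClient("https://dss.example.com:11200", "ADMIN_API_KEY")
    print(check_connections(client, {"my_s3": {"bucket": "prod-data"}}))
```

Running this in CI against a staging node after each upgrade surfaces the connections whose schema changed before they break production flows.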
Hello, I'm trying to automate the deployment of Dataiku, so I need to configure a few things to be set up automatically. I figured out how to do that for users and groups; for fine-grained permissions I use the Python API. I still need to figure out how to do this for creating containerized executions and code envs; I didn't…
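For code envs, the same client exposes `create_code_env`; containerized-execution configs live in the raw general settings, whose exact keys vary by version, so treat this as a sketch with hypothetical values (host, key, env name) and verify the `specPackageList` key against the definition your DSS actually returns:

```python
try:
    import dataikuapi  # pip install dataiku-api-client; optional at import time
except ImportError:
    dataikuapi = None


def requested_packages(packages) -> str:
    """Code env package specs are a newline-separated string in the definition."""
    return "\n".join(sorted(packages))


def create_python_env(client, name: str, packages):
    """Create a design-managed Python code env and request packages.

    create_code_env() is part of dataikuapi; 'specPackageList' is the key
    observed in the definition dict -- confirm it on your DSS version.
    """
    env = client.create_code_env(env_lang="PYTHON", env_name=name,
                                 deployment_mode="DESIGN_MANAGED")
    definition = env.get_definition()
    definition["specPackageList"] = requested_packages(packages)
    env.set_definition(definition)
    env.update_packages()
    return env


if __name__ == "__main__" and dataikuapi is not None:
    # Hypothetical host and API key -- replace with your own.
    client = dataikuapi.DSSClient("https://dss.example.com:11200", "ADMIN_API_KEY")
    create_python_env(client, "usecase_scoring", ["pandas==2.2.2", "scikit-learn"])
```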
OK, so without UIF there is no issue with K8S + Spark. I'm using just the library from the DSS download site. When I try to use UIF, non-Spark recipes run fine, but the Spark recipe no longer succeeds. The issue summary is simply: "Oops: an unexpected error occurred. The Spark process failed (exit code: 1)." More info might be…
Hi everyone, I'm facing an issue while writing a CSV file to S3 using a Sync Recipe in Dataiku. Even though the dataset looks correct inside Dataiku, when it gets saved to S3, all the data appears edited. Operating system used: os
We recently upgraded Dataiku to version 13.3. We have an Automation and a Design node, and when we try to run something on the Automation node or deploy a new bundle, it responds with the error: "User does not exist on node". Operating system used: RHEL