I would like to alert stakeholders when a soft data quality check fails. The solution I'm currently developing updates the scenario status to a warning state when the soft check has failed. However, a failed soft check does not seem to affect the overall scenario status. Is there a way to do this that…
I see the above 'kernel starting' message when I open my Jupyter notebook in the Python code recipe, but the kernel isn't starting. After a while, I get the following error message. I tried interrupting, restarting, reconnecting, changing the kernel, etc., but nothing solves the issue. I have opened other code recipes in the…
I need help triggering a recipe and its downstream recipes and datasets through to the end of the flow.
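One common pattern (a minimal sketch, assuming the Dataiku Python API; the helper name and dataset names are hypothetical) is to trigger a recursive forced build on the datasets at the *end* of the flow. DSS resolves dependencies backwards, so this re-runs the recipe in question and everything between it and the terminal outputs.

```python
# Sketch: rebuild a flow branch end-to-end via the Dataiku public API.
# `build_downstream` is a hypothetical helper; in a real setup the
# project handle would come from dataiku.api_client().get_project(...).

def build_downstream(project, terminal_datasets):
    """Force-rebuild each terminal dataset and everything upstream of it.

    Triggering a recursive forced build on the datasets at the END of
    the flow re-runs all upstream recipes and intermediate datasets,
    even if their outputs look up to date.
    """
    jobs = []
    for name in terminal_datasets:
        dataset = project.get_dataset(name)
        jobs.append(dataset.build(job_type="RECURSIVE_FORCED_BUILD"))
    return jobs
```

In practice you would list the final datasets of the branch you care about; a scenario "Build" step configured with a forced recursive build mode achieves the same thing without code.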
I am trying to remap/update the existing Blob container connection name (update DesignNode Blob Name → AutoNode Blob Name). I have created a bundle in the Design Node and imported and activated it in the Auto Node. Both nodes point to separate Azure Blob containers. I am unable to figure out whether the settings have to be…
In a Dash webapp that requires Dataiku authentication, we would like to filter the displayed data based on the webapp user. It is possible to retrieve the authenticated user in the webapp: dataiku.api_client().get_auth_info_from_browser_headers(request_headers) My question is how to best propagate the user ID to the backend so it…
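One approach (a sketch under assumptions, not a definitive answer): Dash callbacks run inside a Flask request context, so the browser headers are available on every callback invocation and the user can be resolved per request rather than stored globally. The `resolve_user` helper below is hypothetical; it only assumes the client object behaves like `dataiku.api_client()`.

```python
# Sketch: resolve the authenticated DSS user inside a Dash callback.
# The heavy lifting is delegated to a small function so it can be
# reused (and tested) independently of the Dash app itself.

def resolve_user(client, request_headers):
    """Return the DSS login behind the current browser request.

    `client` is expected to behave like dataiku.api_client(); the
    header-based lookup returns a dict containing `authIdentifier`.
    """
    auth_info = client.get_auth_info_from_browser_headers(request_headers)
    return auth_info["authIdentifier"]

# Inside a callback (assumption: standard Dash-on-Flask setup), the
# pattern would look like:
#
# from flask import request
#
# @app.callback(...)
# def update_table(...):
#     user = resolve_user(dataiku.api_client(), request.headers)
#     return full_df[full_df["owner"] == user]  # per-user filtering
```

Resolving the user inside each callback avoids caching one user's identity in a backend that is shared by all webapp sessions.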
Hello, I have trained and created a model in the flow. I have since trained new versions of the model, but I can't find a way to change the active version in the flow (the active model is always the first one trained). Thanks. Operating system used: Linux
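Besides switching the active version from the saved model's UI page, this can be scripted (a sketch, assuming the Dataiku public API's saved-model handle; the function name and the version-ordering assumption are mine):

```python
# Sketch: make a newer saved-model version the active one in the flow.
# The saved model id is visible in the model's URL or via
# project.list_saved_models(); `activate_latest_version` is hypothetical.

def activate_latest_version(project, saved_model_id):
    """Activate the most recently listed version of a saved model."""
    sm = project.get_saved_model(saved_model_id)
    versions = sm.list_versions()   # one dict per trained version
    latest = versions[-1]["id"]     # assumption: last entry is the newest
    sm.set_active_version(latest)   # the flow now uses this version
    return latest
```

If the ordering assumption does not hold on your instance, pick the version id explicitly from the dicts returned by `list_versions()` instead of taking the last one.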
I've followed the tutorial here: Importing serialized scikit-learn pipelines as Saved Models for MLOps - Dataiku Developer Guide and I've been able to develop a model using the darts==0.30.0 library, having wrapped it in the standard scikit-learn pipeline. My issue is with the very last part of step 3 of this tutorial…
Hello Dataiku, I am trying to install packages on a new kernel that I have built using the "python env" code environment option in the administration page. However, when I try to install a package, it still conflicts with the machine's default Python packages. This is also evident when I execute `!pip list` and check…
Hello, every week I have a scenario that processes a CSV file, which is also uploaded weekly. So I have an uploaded dataset called contrat_fraude_verif, which is the start of a flow (see below). What is the best practice to minimize the number of manual operations needed when integrating…
When I attempt to join datasets, I encounter a warning stating that the recipe cannot utilize the in-database engine and will fall back to the slower DSS engine instead. Additionally, it warns that the 'national_joined' dataset is not a SQL table dataset. If I switch to the Spark engine, I receive a…