Can you please elaborate on your use case? I want to understand the requirement behind running a Databricks notebook from Dataiku, and also what your storage layer is (S3, Snowflake, etc.).
In the meantime, here are some options based on common use case patterns:
1. Option 1: Run the Notebook on EKS (with Spark): Set up compute in Dataiku using an EKS cluster with Spark. Create a Spark recipe (copy-paste the notebook code) and run it on EKS (no code changes required). This option is recommended because you can containerize your code along with its associated libraries, which simplifies your Ops process.
2. Option 2: Keep it in Databricks: Convert your notebook into a Databricks Job, which exposes a REST endpoint. You then have two ways to integrate it into your Dataiku project: create a code recipe that calls that endpoint, or add a custom Python step to a Dataiku scenario that triggers the NLP job in Databricks.
Option 1 is recommended, but the right solution may change based on your actual use case.
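For Option 2, a minimal sketch of what the scenario step (or code recipe) could look like, using the Databricks Jobs 2.1 `run-now` REST endpoint. The host, token, and job ID below are placeholders you would replace with your own values (ideally pulling the token from a Dataiku project variable or secret rather than hardcoding it):

```python
import json
import urllib.request

# Placeholder values -- replace with your workspace URL, a personal
# access token, and the numeric ID of your Databricks job.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
DATABRICKS_TOKEN = "<personal-access-token>"
JOB_ID = 123

def build_run_now_payload(job_id, notebook_params=None):
    """Request body for the Databricks Jobs 2.1 run-now endpoint."""
    payload = {"job_id": job_id}
    if notebook_params:
        # Optional key/value parameters passed to the notebook task.
        payload["notebook_params"] = notebook_params
    return payload

def trigger_job(host, token, job_id, notebook_params=None):
    """POST to /api/2.1/jobs/run-now and return the new run_id."""
    req = urllib.request.Request(
        f"{host}/api/2.1/jobs/run-now",
        data=json.dumps(build_run_now_payload(job_id, notebook_params)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["run_id"]

# In a Dataiku scenario custom-Python step, the body would just be:
# run_id = trigger_job(DATABRICKS_HOST, DATABRICKS_TOKEN, JOB_ID)
```

If you need the scenario to wait for the job to finish, you can poll `/api/2.1/jobs/runs/get?run_id=...` with the same auth header until the run's life cycle state is terminal.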