The philosophy of running a flow of datasets, recipes and models in Dataiku revolves around the concepts of Jobs and Scenarios: in the API, you do not run a Recipe directly, but rather build its output, either through a Job or a Scenario.
If you plan on using different elements of the API to create a Dataiku project, test it, and automate it, I would advise:
1. Create the datasets, recipes and models using https://doc.dataiku.com/dss/latest/publicapi/client-python/datasets.html, https://doc.dataiku.com/dss/latest/publicapi/client-python/recipes.html and https://doc.dataiku.com/dss/latest/publicapi/client-python/ml.html
2. Build/train the datasets/models by launching Jobs that build the output(s) of each recipe: https://doc.dataiku.com/dss/latest/publicapi/client-python/jobs.html
3. Create a Scenario to automate the update of datasets and models: https://doc.dataiku.com/dss/latest/publicapi/client-python/scenarios.html
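The three steps above can be sketched roughly as follows with the `dataikuapi` package. This is a minimal sketch, not a full recipe-by-recipe example: the host, API key, and all project/dataset/scenario names (`MYPROJECT`, `mydataset`, `myscenario`) are placeholders, and the dataset `params` depend on your connection type — see the linked doc pages for the exact signatures.

```python
# Sketch of the create / build / automate steps with dataikuapi.
# Host, API key, and all project/dataset/scenario names are placeholders.

# Job definition payload accepted by DSSProject.start_job(): instead of
# "running a recipe", you ask DSS to (re)build the recipe's output dataset.
job_definition = {
    "type": "NON_RECURSIVE_FORCED_BUILD",
    "outputs": [{"id": "mydataset", "projectKey": "MYPROJECT"}],
}

def build_project_flow(host="http://localhost:11200", api_key="YOUR_API_KEY"):
    import dataikuapi  # imported lazily so the payload above can be inspected offline

    client = dataikuapi.DSSClient(host, api_key)
    project = client.get_project("MYPROJECT")

    # 1. Create a dataset (the type and params depend on your connection)
    project.create_dataset(
        "mydataset",
        "Filesystem",
        params={"connection": "filesystem_managed", "path": "MYPROJECT/mydataset"},
    )

    # 2. Build the recipe's output by launching a Job and waiting for it
    project.start_job_and_wait(job_definition)

    # 3. Run an existing Scenario (created via the API or the UI) and wait
    scenario = project.get_scenario("myscenario")
    scenario.run_and_wait()
```

The key point is the job definition: you name the *outputs* you want built (here with a non-recursive forced build), and DSS figures out which recipe to run.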
In general, it may be faster to initialize a "template project" (including its scenarios) through the interface, then copy this template several times with programmatic changes via the API.
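As a hypothetical sketch of that template approach: the `TEMPLATE` project key, the customer names, and the variable layout below are all made up, and `DSSProject.duplicate()` is assumed to be available on your DSS version (otherwise an export/import of the project achieves the same thing). Per-copy "programmatic changes" are expressed here as project variables.

```python
# Hypothetical sketch: copy a "template project" once per customer,
# overriding project variables in each copy. All names are placeholders.

def variables_for(customer):
    """Per-copy project variables derived from a customer name (illustrative layout)."""
    return {"standard": {"customer": customer, "input_path": f"/data/{customer}"}}

def clone_template(client, customers, template_key="TEMPLATE"):
    """client is a dataikuapi.DSSClient connected to your DSS instance."""
    template = client.get_project(template_key)
    for customer in customers:
        new_key = f"{template_key}_{customer.upper()}"
        # duplicate() is assumed available; export/import is the fallback
        template.duplicate(new_key, f"Template copy for {customer}")
        copy = client.get_project(new_key)
        copy.set_variables(variables_for(customer))
```

Each copy then shares the template's flow and scenarios, while the variables parametrize what differs between copies.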
Hope it helps,