
For monitoring, is it possible to show all project/scenario/trigger info in Python?

Solved!
AGdB
Level 1
For monitoring, is it possible to show all project/scenario/trigger info in Python?

Python code:

import dataiku

this_client = dataiku.api_client()
dss_projects = this_client.list_project_keys()

# print("projectKey, Scenario_Name, Active, Trigger_Type, Trigger_Name, Delay")
for this_project in dss_projects:
    curr_project = this_client.get_project(this_project)
    for scenario in curr_project.list_scenarios(as_type="objects"):
        curr_settings = scenario.get_settings()
        if curr_settings.active:
            print(curr_settings.get_raw())

 

Example output:

{'projectKey': 'AAAAAAAA', 'id': 'unloadnotebooks', 'type': 'step_based', 'name': 'unloadnotebooks', 'active': True, 'versionTag': {'versionNumber': 2, 'lastModifiedBy': {'login': 'admin'}, 'lastModifiedOn': 1608300844622}, 'checklists': {'checklists': ' '}, 'delayedTriggersBehavior': {'delayWhileRunning': True, 'squashDelayedTriggers': True, 'suppressTriggersWhileRunning': True}, 'tags': [], 'triggers': [{'id': 'IFACIlOZ', 'type': 'temporal', 'name': 'UNLOADNOTEBOOKS', 'delay': 5, 'active': True, 'params': {'frequency': 'Daily', 'daysOfWeek': ['Saturday'], 'dayOfMonth': 1, 'minute': 0, 'hour': 2, 'count': 5}}], 'reporters': [], 'params': {'steps': [{'id': 'runmacro_pyrunnable_builtin-macros_kill-jupyter-sessions', 'type': 'runnable', 'name': 'Step #1', 'runConditionType': 'RUN_IF_STATUS_MATCH', 'runConditionStatuses': ['SUCCESS', 'WARNING'], 'runConditionExpression': '', 'resetScenarioStatus': False, 'delayBetweenRetries': 10, 'maxRetriesOnFail': 0, 'params': {'runnableType': 'pyrunnable_builtin-macros_kill-jupyter-sessions', 'config': {'maxIdleTimeHours': 8, 'maxSessionAgeHours': 24, 'dontKillBusyKernels': True, 'dontKillConnectedKernels': True, 'simulate': False}, 'adminConfig': {}, 'proceedOnFailure': False}}]}, 'automationLocal': True, 'customFields': {}}

Is it possible to convert this Dataiku JSON to a CSV file?

Please help!

SarinaS
Dataiker

Hi @AGdB ,

I think it might work well for you to use the following method, writing all scenario setting objects to a folder, and then creating a dataset out of the folder.  Here's an example: 

I create a folder to hold all of my scenario settings output: 

[Screenshot: creating a managed folder to hold the scenario settings output]

This is my modified version of your Python code to now write to the Folder I just created: 

import dataiku

client = dataiku.api_client()
folder = dataiku.Folder('scenario_jsons')

for this_project in client.list_project_keys():
    project = client.get_project(this_project)
    scenarios = project.list_scenarios(as_type='objects')
    for curr_scenario in scenarios:
        settings = curr_scenario.get_settings()
        if settings.active:
            # write file in PROJECT_SCENARIO format
            filename = this_project + '_' + curr_scenario.id
            folder.write_json(filename, settings.get_raw())


Now my folder contains a bunch of json files: 

[Screenshot: the folder now contains one JSON file per active scenario]

I can select "Create dataset" from this folder to now create a single dataset from my JSON files:

[Screenshot: the "Create dataset" action on the folder]

And a snippet of my final dataset: 

[Screenshot: a snippet of the resulting dataset]

In addition to this method, I wanted to mention that there is an internal stats dataset for scenarios that will contain scenario run information, just in case it is sufficient for your use case.  You can create and view this by going to + Dataset > Internal > Internal Stats > select "Scenario runs" as your "type" and leave the project field blank for all projects. Here's an example of what it looks like:

[Screenshot: the internal stats "Scenario runs" dataset]

In addition to both of the above methods, you could also parse the scenario settings dictionary in Python and create a Pandas dataframe for the specific columns you care about, and then write that dataframe as your dataset.  The folder approach simply allows you to skip the parsing yourself.  I can paste an example of this if you would prefer that method though. 
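For reference, here is a minimal sketch of that parsing approach. The field names are taken from the `get_raw()` output shown earlier in the thread; the sample dict below is abbreviated and hypothetical, and in practice you would build `raw_settings` from the `settings.get_raw()` calls in the loop above:

```python
import pandas as pd

# Abbreviated stand-in for a list of settings.get_raw() dicts,
# one per active scenario (shape based on the example output above)
raw_settings = [
    {
        'projectKey': 'AAAAAAAA',
        'name': 'unloadnotebooks',
        'active': True,
        'triggers': [
            {'type': 'temporal', 'name': 'UNLOADNOTEBOOKS', 'delay': 5, 'active': True},
        ],
    },
]

# Flatten to one row per (scenario, trigger) pair, keeping only the columns of interest
rows = []
for raw in raw_settings:
    for trigger in raw.get('triggers', []):
        rows.append({
            'projectKey': raw['projectKey'],
            'scenario_name': raw['name'],
            'scenario_active': raw['active'],
            'trigger_type': trigger['type'],
            'trigger_name': trigger['name'],
            'trigger_delay': trigger.get('delay'),
        })

df = pd.DataFrame(rows)
df.to_csv('scenario_triggers.csv', index=False)
```

In a recipe you would typically write `df` to an output dataset instead of a local CSV file.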

I hope that's helpful.

Thanks,
Sarina 


2 Replies

AGdB
Level 1
Author

Thx perfect 😉 
