Added on February 7, 2021 5:20PM
I already posted this question earlier, but perhaps it wasn't clear, so I'll try to be more precise here.
A custom Python trigger calls t.fire() to trigger the scenario when its condition is met.
What is the equivalent of t.fire() in the SQL query change trigger?
Thank you
From the UI, Administration > Maintenance > Logs is the way. Otherwise, the logs are in the run/ folder of your DSS data directory.
Hi,
A "SQL query change" trigger initiates a run of the scenario when it detects a change in the data returned by the query, whether in the number of rows returned or in the values themselves. Typically, such a trigger is used with a query that aggregates a table, such as computing a row count or the max of a timestamp column.
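Conceptually, the change detection can be sketched in a few lines of Python. This is a hypothetical illustration of the mechanism, not Dataiku's actual implementation; the class name and polling interface are made up:

```python
import hashlib

class SqlQueryChangeTrigger:
    """Sketch of a "query change" trigger: poll the query, hash the
    result set, and fire when the hash differs from the previous poll."""

    def __init__(self):
        self.last_digest = None  # digest of the previous poll's result

    def poll(self, rows):
        # rows: the query result, e.g. [(42,)] for a COUNT(*) query
        digest = hashlib.sha256(repr(rows).encode()).hexdigest()
        changed = self.last_digest is not None and digest != self.last_digest
        self.last_digest = digest
        return changed  # True means "fire the scenario"


trigger = SqlQueryChangeTrigger()
trigger.poll([(42,)])  # first poll: records a baseline, does not fire
trigger.poll([(42,)])  # unchanged result: does not fire
trigger.poll([(43,)])  # row count changed: fires
```

This is why there is no t.fire() to write for a SQL query change trigger: the comparison and firing happen inside the trigger itself, and the scenario designer only supplies the query.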
@fchataigner2
Thank you for your reply. I don't like creating duplicate entries, but here is what I posted in the first one (link above):
I have two triggers: a SQL query change trigger and a Python custom trigger.
They both check the same table dataiku_poc.CMR_CAMP_COPY
Here is what I have in the SQL query change trigger
select count(*) from dataiku_poc.CMR_CAMP_COPY;
Here is what I have in the python Custom Trigger
import dataiku
from dataiku import pandasutils as pdu
import pandas as pd
from dataiku.scenario import Trigger

mydataset = dataiku.Dataset("CMR_CAMP_COPY")
mydataset_df = mydataset.get_dataframe()

p = dataiku.Project()
variables = p.get_variables()
CMR_count = int(variables["local"]["CMR_count"])

t = Trigger()
new_count = len(mydataset_df)
if new_count != CMR_count:
    variables["local"]["CMR_count"] = new_count
    p.set_variables(variables)
    t.fire()
Both triggers have Run every 10 seconds, Grace period 0 seconds.
Every time I delete or insert rows in the table, the Python custom trigger fires, but the SQL trigger never does.
What am I doing wrong? Is there a way to troubleshoot this?
Thank you
The setup of the SQL trigger looks fine, so maybe 1) check that the query runs in a notebook and returns results, and 2) check the instance's backend.log for exceptions arising in the context of ActiveTriggerLifecycleThread threads (there is no proper debugging for these triggers). You can also check the scenario's charts to see whether the only triggers firing are the Python ones.
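For step 2, backend.log can be scanned for trigger-related lines with a short Python helper. This is a sketch: the helper name is made up, and the log path in the usage comment is an assumption based on the run/ folder mentioned above; adjust it to your own DSS data directory.

```python
import re

def find_trigger_errors(lines):
    """Return log lines mentioning the trigger lifecycle threads
    or any exception, from an iterable of backend.log lines."""
    pattern = re.compile(r"ActiveTriggerLifecycleThread|Exception")
    return [line.rstrip("\n") for line in lines if pattern.search(line)]

# Usage (hypothetical path, adjust to your DSS data directory):
# with open("/path/to/DATA_DIR/run/backend.log") as f:
#     for hit in find_trigger_errors(f):
#         print(hit)
```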
I did make sure the SQL runs in the notebook, and I've been constantly checking Last Runs to see if the SQL trigger triggered.
Where do I find backend.log?