Some recipes take a very long time to execute, and while they run the target dataset is typically unavailable because it is dropped before data insertion begins. The current workaround is to add a sync recipe that copies the output to a second dataset, but this leaves duplicate tables sitting in the database and must be manually disabled for projects with pipelining enabled. A temporary table mode would solve this: the recipe's output could optionally be written to a temporary table until the recipe completes, then copied to the permanent target table, avoiding intermediate datasets and any special pipelining configuration. The temporary table could be deleted after the copy, so space is conserved and the flow remains uncluttered.
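To make the proposal concrete, here is a minimal sketch of the staging pattern being requested, using SQLite purely for illustration. The table names (`target`, `target_tmp`) and the single-connection setup are assumptions, not part of any existing product API: the recipe writes its full output to a staging table first, and the permanent target is only replaced once the write has finished, so readers never see a dropped or half-filled target mid-run.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Existing target table, currently serving reads while the recipe runs.
cur.execute("CREATE TABLE target (id INTEGER, value TEXT)")
cur.execute("INSERT INTO target VALUES (1, 'old')")

# Step 1: write the (long-running) recipe output to a temporary
# staging table instead of dropping the target up front.
cur.execute("CREATE TABLE target_tmp (id INTEGER, value TEXT)")
cur.executemany(
    "INSERT INTO target_tmp VALUES (?, ?)",
    [(1, "new"), (2, "new")],
)

# Step 2: once the write completes, swap the staging table in.
# Dropping the old target and renaming the staging table in one
# transaction keeps the window of unavailability near zero, and
# no duplicate table is left behind afterwards.
cur.execute("DROP TABLE target")
cur.execute("ALTER TABLE target_tmp RENAME TO target")
conn.commit()

rows = cur.execute("SELECT value FROM target ORDER BY id").fetchall()
print(rows)  # [('new',), ('new',)]
```

The drop-and-rename at the end is the cheap part; the expensive insert happens entirely in the staging table, which is the point of the feature request.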