Read and write to same dataset
Wuser92
Registered Posts: 20 ✭✭✭✭
What is the best way / workflow to read from and write to the same dataset without running into recursion issues?
For example, if you read from and write to the same dataset somewhere in your flow, you can no longer build the final dataset recursively.
Example use case: we want to read a dataset of URLs, scrape each URL (or make an API call for it), and then flag the URL so we avoid calling it again in the future (as we already have the results). A rough sketch of that loop is below.
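Something like this, as a sketch only (the `url` and `scraped` column names are placeholders, and the storage of results is elided):

```python
import pandas as pd
import requests

# df stands in for the URL dataset; "url" and "scraped" are placeholder columns
df = pd.DataFrame({"url": ["https://example.com"], "scraped": [False]})

for idx, row in df[~df["scraped"]].iterrows():
    try:
        response = requests.get(row["url"], timeout=10)
        # ... store response.text / API results somewhere ...
        df.at[idx, "scraped"] = True   # flag so we skip this URL next time
    except requests.RequestException:
        pass  # leave unflagged and retry on the next run

# df (with updated flags) then needs to be written back to the same dataset,
# which is exactly where the circular-dependency problem appears.
```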
Answers
An easy way to achieve this is to create two datasets (effectively, two JSON definitions) that point to the same location (a file on disk or a table in a database): one for reading, one for writing. DSS only looks at the metadata (the JSON definition) to determine dependencies, not at the actual data, so you avoid the circular dependency.
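In a Python recipe, this could look roughly like the sketch below. The dataset names `urls_read` and `urls_write` are placeholders for two DSS datasets you define on the same underlying table or file:

```python
import dataiku

# Two DSS datasets defined on the same underlying storage:
# "urls_read" as the recipe input, "urls_write" as the recipe output.
input_ds = dataiku.Dataset("urls_read")
output_ds = dataiku.Dataset("urls_write")

df = input_ds.get_dataframe()

# ... scrape the unflagged URLs and set their "scraped" flag here ...

# Writing through the separate output definition keeps the flow acyclic,
# even though the data lands in the same place it was read from.
output_ds.write_with_schema(df)
```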