In my opinion, (1) should be simple and straightforward.
Slight modification of 1: If you have access to advanced automation features, you could implement it as a Python scenario step, which executes your own code to tell the external application that the dataset is ready. This way, no dummy dataset is needed 🙂
Thanks Alexandre! The idea here is to prevent another application (external to DSS) from reading a data file on the DSS server while the dataset corresponding to this file is being built by DSS.
Our initial assumption was to use the API to tell DSS not to build the dataset until the application is done reading it, or to tell the application not to read the file while the dataset is being built. I had two ideas to work around this concurrency issue without a direct API call:
1) The dumb solution: have the scenario that builds the dataset write a line to another "lock" dataset at the beginning of the build and delete it at the end: the data file is accessible to the application only when the "lock" dataset is empty.
2) The neckbeard solution: creating system-wide lock files (https://stackoverflow.com/questions/6931342/system-wide-mutex-in-python-on-linux).
I was going for the second one, but other options are welcome!