I am developing a scenario that will monitor multiple Dataiku environments from a single instance. I am familiar with remotely connecting to different Dataiku environments.
So far this has been successful. However, I am now facing an issue when trying to read data from a mounted location on each environment: the scenario only reads from the mounted location of the environment I am running the monitoring from, not from the mounted locations of the remote environments.
There are specific File System connections in each environment that give me access to these individual locations.
Please can you tell me how I can read from these File System locations using the Python APIs and libraries?
Operating system used: Windows
Your question is not really clear, and I don't really understand your problem. All I understand is that you are mounting some file systems and you want to access them, but you don't say why you can't access them. What is the error? What are you trying to achieve?
In any case, this reply should help (I think):
If it doesn't help, then you should explain clearly what you are trying to do. Thanks.
I am trying to read a file from a mounted File System via Python within a Dataiku scenario.
This file system is also set up as a connection, so I was wondering if there is a library I could use to call the connection and read a file from it.
It's a similar concept to how SQL connections can be used via SQLExecutor2.
I hope this clears it up.
OK, so read my other post and follow the steps I mentioned to create a managed folder and a symlink to your mount. Then simply read the files in the managed folder using the Dataiku API. Here is a sample:
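A minimal sketch of reading files from a managed folder with the `dataiku` package (the folder name `my_mount_folder` is an assumption; substitute the name or ID of the managed folder you created over the symlink):

```python
import dataiku

# Open the managed folder by name or ID (placeholder name, adjust to yours)
folder = dataiku.Folder("my_mount_folder")

# List the files available in the folder (paths are relative to the folder root)
paths = folder.list_paths_in_partition()
print(paths)

# Stream the contents of the first file; get_download_stream works for both
# local and non-local (e.g. remote or cloud-backed) managed folders
if paths:
    with folder.get_download_stream(paths[0]) as stream:
        data = stream.read()
    print(len(data), "bytes read")
```

Using `get_download_stream` rather than building an OS path yourself keeps the code working regardless of where the folder's underlying connection actually stores the files.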
And one more thing: this should allow us to access each environment's File System connections while we remotely connect to them from one environment using the dataiku.set_remote_dss(env_link, env_api_key) API?
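For reference, the remote-access pattern being asked about might look like the following sketch; the URL, API key, project key, and folder name are all placeholders, and the API key must belong to a user with access to the remote project:

```python
import dataiku

# Point the dataiku package at the remote DSS instance
# (placeholder URL and key; use your environment's values)
dataiku.set_remote_dss("https://remote-dss.example.com", "REMOTE_ENV_API_KEY")

# Subsequent Folder calls now resolve against the remote instance;
# project_key is needed because we are outside that project's context
folder = dataiku.Folder("my_mount_folder", project_key="MONITORED_PROJECT")

# Read file listings from the remote environment's managed folder
for path in folder.list_paths_in_partition():
    print(path)
```

The key point is that after `set_remote_dss`, reads go through the remote instance's own connections, so each environment serves files from its own mount rather than from the monitoring environment's filesystem.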