Using Dataiku

61 - 70 of 877
  • I've made custom Python code that I need to deploy as an API endpoint on my company's Dataiku instance. I've tested it in the API Designer, though each time I've tested it with the sample code in the …
    Question
    Started by benh
    Most recent by Sarina
    Last answer by Sarina

    Hi @benh,

    Thank you for the details. Indeed, using a test deployment and running the sample code snippet from the "Sample code" section is a great way to check whether you are hitting the right endpoint.

    Sorry to hear that the issue still persists though! To start with, I think that we could help the most if you open a support ticket and attach a diagnostic of the instance that houses your API deployer along with the name of the API endpoint. Then we can go from there.

    Thanks,
    Sarina


  • It's probably rather a bug report than a discussion, but there are chances I'm doing something wrong. I'm calling trunc(MyDateColumn, 'd') on a column and for "2021-10-31T22:00:00.000Z" it returns "20…
    Question
    Started by NikolayK
    Most recent by NikolayK
    Last answer by NikolayK

    @CatalinaS, also, is this a bug that you plan to fix in future releases?

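    If the discrepancy in this thread is timezone-related (an assumption; the thread does not confirm the cause), the following standalone Python sketch shows how truncating the same instant to a day can yield different dates depending on whether the truncation happens in UTC or in a local UTC+2 zone:

    ```python
    from datetime import datetime, timezone, timedelta

    # The timestamp from the question, with the "Z" suffix meaning UTC.
    ts = datetime(2021, 10, 31, 22, 0, tzinfo=timezone.utc)

    # Truncating to the day in UTC keeps the date 2021-10-31 ...
    utc_day = ts.date()

    # ... but the same instant viewed in a UTC+2 zone is already November 1,
    # so a day-level trunc() applied in local time yields a different day.
    local_day = ts.astimezone(timezone(timedelta(hours=2))).date()

    print(utc_day, local_day)  # 2021-10-31 2021-11-01
    ```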

  • Hi! Some context: * I'm running DSS in VirtualBox. * I'm running PostgreSQL in WSL2 (Ubuntu for Windows). I was guiding myself to learn to make a connection to postgresql with this tutorial: https://a…
    Question
    Started by violepshe
    Most recent by Sergey
    Last answer by Sergey

    Hi @violepshe,

    "FATAL: password authentication failed for user" means that DSS was able to reach the PostgreSQL instance, so this is not a firewall issue. You will need to check pg_hba.conf on the Postgres side to see whether the correct auth method is used there for local connections.

    Please note this is not a DSS issue but rather a plain Postgres auth one.
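    As an illustration only (the database, user, and address below are placeholders, not values from this thread), a pg_hba.conf entry looks like this; the auth method in the last column is what typically needs adjusting:

    ```
    # TYPE  DATABASE  USER      ADDRESS        METHOD
    local   all       all                      peer
    host    all       dss_user  10.0.0.0/24    scram-sha-256
    ```

    Whether md5 or scram-sha-256 applies depends on your Postgres version and how the password was stored; after editing the file, reload the server configuration for the change to take effect.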

  • The link https://doc.dataiku.com/dss/latest/python-api/outside-usage.html suggests that we can use dataiku.clear_remote_dss() to clear remote configuration. However, I see the following error. What is…
    Answered ✓
    Started by yashpuranik
    Most recent by yashpuranik
    Solution by Shashank
    You don’t have to (and should not) do a set_remote_dss just in order to do a dataiku.api_client(). You should rather instantiate a client for your remote DSS this way:

        import dataiku
        import dataikuapi
        import pandas as pd  # needed for the DataFrame below

        host = "OTHER_NODE"
        apiKey = "KEY"
        client = dataikuapi.DSSClient(host, apiKey)

        # And then the code does not need to change at all:
        # Get the list of users on the other node
        users = client.list_users()
        users = [(user["displayName"], user["login"], user["email"], user["enabled"]) for user in users]
        users_df = pd.DataFrame(users, columns=["Display Name", "username", "email", "is_active"])

        # Write the list of users from the other node into a dataset on the current node
        users_dataset = dataiku.Dataset("users")
        users_dataset.write_with_schema(users_df)

    Much easier, simpler to read, and no worries about versions.
  • For each correlation, p-values are communicated in Dataiku. Are these p-values referring to the null hypothesis: correlation = 0?
    Question
    Started by franperic
  • Hi, I am going through Business Analyst quick start tutorial and I am unable to join 2 datasets. I have attached the screenshot.
    Answered ✓
    Started by Codefather007
    Most recent by Codefather007
    Solution by Emma
    Generally, this error happens when your profile does not have permission to use the default cluster. Even if the recipe you are creating doesn't explicitly select the cluster, permission to use it is necessary for Dataiku to check which compute engines are viable.
    There are two potential solutions. The first option is to navigate to your project Settings > Cluster Selection menu and unselect the Use DSS global cluster settings option, then save.
    The second option is to get someone with access to the Admin > Clusters menu to grant usage rights to the specific group of users that you are a part of or select 'Usable by all' (see screenshot). Either option should allow you to continue on without issue.
    Hope that helps,
    Emma
  • I want to get a connection to PostgreSQL, but there is an authentication error. I want to know how to make a superuser in PostgreSQL. Operating system used: linux
    Question
    Started by PreetiKhare
    Most recent by Alexandru
    Last answer by Alexandru

    Hi,

    Can you provide a bit more context here? What authentication error are you seeing?

    Where is Postgres installed?

    By default, if you use psql on the machine where Postgres is installed, the default pg_hba.conf should let you connect with SUPERUSER privileges (as the postgres user), and thus run the sample commands here if you are trying to set Postgres up as suggested in the DSS courses:

    https://knowledge.dataiku.com/latest/courses/sql-integration/tech-prerequisites/configure-database.html
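    For reference, and assuming a standard local install (the role name and password below are placeholders, not values from this thread), creating a superuser role is typically done from psql as the postgres OS user; these commands require a running Postgres server:

    ```
    # Connect as the built-in postgres superuser (peer auth on the local socket):
    sudo -u postgres psql

    -- Then, inside psql, create a superuser role:
    CREATE ROLE dss_user WITH LOGIN SUPERUSER PASSWORD 'change_me';
    ```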

  • Hi, I have a table with structure similar to this: IDdate startdate endcontract_nbtag_subscription12330/01/201530/02/20151first subscription12330/01/202030/06/20202re-subscription12330/01/202230/06/20…
    Answered ✓
    Started by lnguyen
    Most recent by lnguyen
    Solution by NN

    Hi @lnguyen,

    In the group recipe have you been able to look at the Computed Columns and Custom Aggregations options?

    In the Computed Columns section you can create a new column using a DSS formula:

    if(tag_subscription=='re-subscription',contract_nb,null)

    and in the next group step you can take a distinct of this new column that you created.

    As a second option, you can add a custom aggregation using a simple CASE WHEN expression in the Custom Aggregations step of the group recipe:

    count(distinct (case when "tag_subscription"='re-subscription' then "contract_nb" end))

    Hope I have understood your question correctly.
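    For illustration, here is a rough pandas equivalent of the custom aggregation above (the data and values are hypothetical, modeled on the column names used in this answer):

    ```python
    import pandas as pd

    # Hypothetical data matching the column names used in the formulas above.
    df = pd.DataFrame({
        "ID": [123, 123, 123],
        "tag_subscription": ["first subscription", "re-subscription", "re-subscription"],
        "contract_nb": [1, 2, 3],
    })

    # Equivalent of:
    #   count(distinct (case when "tag_subscription"='re-subscription' then "contract_nb" end))
    result = (
        df[df["tag_subscription"] == "re-subscription"]
        .groupby("ID")["contract_nb"]
        .nunique()
    )
    print(result.to_dict())  # {123: 2}
    ```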

  • Hi, I'm using Dataiku 10.0.5, Python 3.6 and BigQuery. I'm trying to fill empty cells of all my columns with 0, but it's not working correctly for some columns. The first column it's not working with …
    Question
    Started by UgoD
  • According to https://doc.dataiku.com/dss/latest/code_recipes/shell.html, input and output dataset names are shared as environment variable names to the shell recipe. Assume my input dataset name is "m…
    Question
    Started by yashpuranik
    Most recent by Jurre
    Last answer by Jurre

    Hi @yashpuranik,

    In the shell recipe, at the top left of your screen (see attached screenshot) you'll find "variables" next to "datasets". Select "variables"; in the list you will find "DKU_INPUT_0_DATASET_ID" or something like it. Click on it and it will be inserted into your recipe. When your output will be a new file, it might be wise to specify the output folder (also mentioned in the variables list), as that saves time searching for it.

    Hope this helps, have fun!

    variables.jpg
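    A minimal sketch of a shell recipe body using such a variable (the fallback value is purely illustrative, for running outside DSS; inside DSS the variable is set by the platform):

    ```shell
    #!/bin/sh
    # DKU_INPUT_0_DATASET_ID is exported by DSS when the recipe runs.
    # The default after ":-" is a hypothetical dataset name used only for this sketch.
    input_dataset="${DKU_INPUT_0_DATASET_ID:-MYPROJECT.my_input}"
    echo "Reading from dataset: ${input_dataset}"
    ```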
