Hello, I am working on an HDInsight cluster where I have Dataiku DSS, and whenever I try to run a Hive query I get this error:

"com.microsoft.azure.storage.blob.CloudBlob.startCopyFromBlob(Lcom/microsoft/azure/storage/blob/CloudBlob;Lcom/microsoft/azure/storage/AccessCondition;Lcom/microsoft/azure/storage/AccessCondition;Lcom/microsoft/azure/storage/blob/BlobRequestOptions;Lcom/microsoft/azure/storage/OperationContext;)Ljava/lang/String;"
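In case it helps, I tried to make sense of that string myself. It looks like a raw JVM method descriptor (the kind of thing a NoSuchMethodError prints), and decoding it with a quick script of my own (nothing from DSS or Hadoop, just a sketch) gives a readable signature:

```python
# My own sketch: decode the JVM method descriptor from the error message
# into a readable Java-like signature.

DESCRIPTOR = (
    "com.microsoft.azure.storage.blob.CloudBlob.startCopyFromBlob("
    "Lcom/microsoft/azure/storage/blob/CloudBlob;"
    "Lcom/microsoft/azure/storage/AccessCondition;"
    "Lcom/microsoft/azure/storage/AccessCondition;"
    "Lcom/microsoft/azure/storage/blob/BlobRequestOptions;"
    "Lcom/microsoft/azure/storage/OperationContext;"
    ")Ljava/lang/String;"
)

def parse_types(desc):
    """Parse a run of JVM field descriptors into simple class names."""
    primitives = {"B": "byte", "C": "char", "D": "double", "F": "float",
                  "I": "int", "J": "long", "S": "short", "Z": "boolean",
                  "V": "void"}
    types, i = [], 0
    while i < len(desc):
        c = desc[i]
        if c == "L":  # object type is written as L<slash/separated/class>;
            end = desc.index(";", i)
            types.append(desc[i + 1:end].split("/")[-1])
            i = end + 1
        elif c in primitives:
            types.append(primitives[c])
            i += 1
        else:  # skip anything else (e.g. array markers) for brevity
            i += 1
    return types

method, rest = DESCRIPTOR.split("(", 1)
params_desc, ret_desc = rest.rsplit(")", 1)
params = parse_types(params_desc)
ret = parse_types(ret_desc)[0]
print(f"{ret} {method.split('.')[-1]}({', '.join(params)})")
# prints: String startCopyFromBlob(CloudBlob, AccessCondition,
#         AccessCondition, BlobRequestOptions, OperationContext)
```

So something on the Hive side seems to be calling `CloudBlob.startCopyFromBlob(...)` and not finding that method at runtime, which makes me suspect a library/version issue rather than my query itself, though I am not sure.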
Spark runs very well. I am not sure, but the problem seems to come from within the DSS infrastructure, since I get the same error when I run a DSS visual, Python, or R recipe and tell it to write the new dataset to HDFS. With visual recipes the problem goes away when I switch the engine from DSS to Spark. For Python and R I have worked around it by telling DSS to write the new table to the filesystem instead, but with Hive I cannot do the same, and I cannot use Pig or Impala either. Do you have any idea why this is happening? Should I change a parameter or do something else?
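In case this is a JAR version mismatch (just a guess on my part), this is roughly how I would list the azure-storage JARs that Hive and DSS might be picking up. The directories are assumptions about a typical HDInsight + DSS layout, not something I have confirmed:

```shell
# Sketch: look for azure-storage JARs under a few candidate directories.
# The paths below are guesses for an HDInsight + DSS install; adjust as needed.
hits=""
for dir in /usr/hdp /usr/lib/hive /opt/dataiku; do
    [ -d "$dir" ] || continue
    hits="$hits$(find "$dir" -name 'azure-storage*.jar' 2>/dev/null)"
done
if [ -n "$hits" ]; then
    printf '%s\n' "$hits"
else
    echo "no azure-storage jars found under the paths checked"
fi
```

If two different versions show up, that could explain why the method is missing at runtime, but again, I am only guessing.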
Any help will be greatly appreciated.
P.S. If you need me to send logs or anything else, just let me know.