Dataiku Forcing Column Lengths Despite SQL-Based Table Being Dropped

wim-vde

Hello, 

I've encountered an issue with varchar lengths a couple of times when using the custom SQL recipe. If I create a varchar field with an implicit length of x and then change the SQL so the column needs a greater length (e.g. x+4), Dataiku throws an error saying the string is too long and would be truncated.

The curious point here is that the issue occurs even when I drop the table entirely from the (Snowflake) backend and recreate it using the new x+4 length script from within Dataiku. The new script runs fine on Snowflake itself, so the issue seems to be Dataiku-specific. Is Dataiku caching the schema despite the table being dropped? See below for an illustration.

Usually we get around this by just renaming the column (see the sketch after the illustration below), but for obvious reasons that is not an ideal solution when the recipe is upstream of other tools. Any ideas what is driving this issue?

Creating the table with a column SOME_TEXT as a VARCHAR(4):

SELECT 'FOUR' AS "SOME_TEXT"

*Dropping the table*

Recreating the table with the column SOME_TEXT as a VARCHAR(8):

SELECT 'FOURTEEN' AS "SOME_TEXT"

*Dataiku then throws an error: Job failed: String 'FOURTEEN' is too long and would be truncated*
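
A minimal sketch of the renaming workaround mentioned above, plus an explicit-cast variant that pins the length instead of relying on Snowflake's implicit inference (the VARCHAR(50) width is arbitrary, and I haven't verified whether the cast alone satisfies Dataiku's cached schema):

-- Workaround: alias the column to a new name so Dataiku treats it as a new column
SELECT 'FOURTEEN' AS "SOME_TEXT_V2"

-- Alternative sketch: declare the length explicitly rather than letting Snowflake infer it
SELECT CAST('FOURTEEN' AS VARCHAR(50)) AS "SOME_TEXT"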

Thanks!


Operating system used: Windows 10

  

Turribeach

"Is Dataiku caching the schema despite the table being dropped?"

Yes, every Dataiku dataset has its own schema definition, stored separately from the database. If you change the definition on the database side, you need to synchronise the changes in the dataset's Settings tab.
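
As a quick sanity check that the mismatch is on the Dataiku side, you can inspect the column definition Snowflake actually holds after the recreate (the table name MY_TABLE below is just a placeholder for your real table):

-- Show the column definitions on the Snowflake side
DESC TABLE MY_TABLE;

-- SOME_TEXT should report VARCHAR(8) here even while the Dataiku dataset still holds VARCHAR(4)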
