When creating a table, DSS (I believe) reads a sample of the data and builds the DDL on the fly. This causes a problem when the full dataset contains values longer than the maximum seen in the sample. Is there a way to change (increase) the column length and propagate the schema change downstream?
You can change the size of a column from the dataset's settings: go to Schema, select the column, switch to the JSON view, and increase its max length. This will propagate downstream to datasets created after the change. The "propagate schema" feature does not currently propagate changes to max length.
If the downstream datasets already exist, you can either edit them manually, update them with the API, or recreate the recipes/datasets.
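For the API route, a minimal sketch along these lines should work with the public Python client (`dataikuapi`), which exposes `get_schema()` / `set_schema()` on a dataset. The host, API key, project key, dataset and column names below are placeholders, and the exact schema fields may vary by DSS version:

```python
def widen_column(schema, column_name, new_max_length):
    """Raise maxLength for one column in a DSS-style schema dict (in place)."""
    for col in schema.get("columns", []):
        if col["name"] == column_name:
            # Only widen, never shrink, to avoid truncating existing data
            col["maxLength"] = max(col.get("maxLength", 0), new_max_length)
    return schema

def widen_on_server(host, api_key, project_key, dataset_name, column, length):
    """Apply the change on a live DSS instance (placeholder connection details)."""
    import dataikuapi  # pip install dataiku-api-client
    client = dataikuapi.DSSClient(host, api_key)
    dataset = client.get_project(project_key).get_dataset(dataset_name)
    dataset.set_schema(widen_column(dataset.get_schema(), column, length))

# Local demo on a dict shaped like what get_schema() returns
schema = {"columns": [{"name": "description", "type": "string", "maxLength": 200}]}
widen_column(schema, "description", 2000)
print(schema["columns"][0]["maxLength"])  # → 2000
```

You would still need to rebuild the downstream datasets after updating their schemas, since the change only affects the metadata, not data already written.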