The Parquet writer in Spark does not support special characters in column names at all; this is simply unsupported.
If you are in a code recipe, you'll need to rename your column in your code using select, alias or withColumnRenamed.
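As an illustration, here is a minimal PySpark sketch of both approaches. The input path and the column names ("order id (raw)", "amount total") are placeholders; in a code recipe your DataFrame would come from the recipe's input dataset rather than a direct read.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/path/to/input")  # placeholder input

# Option 1: rename a single offending column
df_renamed = df.withColumnRenamed("order id (raw)", "order_id")

# Option 2: rename while selecting, using alias
df_renamed = df.select(
    col("order id (raw)").alias("order_id"),
    col("amount total").alias("amount_total"),
)

# The renamed DataFrame can now be written as Parquet
df_renamed.write.parquet("/path/to/output")  # placeholder output
```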
If you are in a visual recipe, you'll need to rename your column prior to this recipe, for example with a prepare recipe.
Other options include using CSV instead of Parquet as the storage format.
Generally speaking, given the many idiosyncrasies and behavioral differences between engines, we strongly recommend that, as soon as your data enters the Hadoop/Spark world, you use only lowercase column names without any special characters, just_like_that.
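One way to apply this convention in bulk is to normalize every column name before writing. This is a sketch only; the sanitize helper below is a hypothetical name, not part of any library.

```python
import re

def sanitize(name):
    # Lowercase, then replace any run of characters that is not a letter,
    # digit, or underscore with a single underscore (hypothetical helper).
    name = name.strip().lower()
    name = re.sub(r"[^0-9a-z_]+", "_", name)
    return name.strip("_")

# Rename all columns at once on an existing DataFrame `df`
df_clean = df.toDF(*[sanitize(c) for c in df.columns])
```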