It would be helpful if I could protect a dataset so it cannot be overwritten without first unlocking it. This would let me lock a dataset while recipes are running against it, or signal to collaborators that its current state is important and should not be overwritten right now. It would also stop automations from overwriting a dataset when a team member needs to keep it static temporarily, without having to manually track down and disable each automation.

Implementation could be as simple as a lock button in the dataset flyout and, on unlock, a modal showing which user locked the dataset and when. Any operation that would modify a locked dataset would then fail with an error listing the locked datasets and telling the user to unlock them before retrying. Users could even be warned about locked datasets before executing a pipeline and given the option to build only from the locked dataset down, avoiding wasted time on lengthy builds.
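For illustration only, here is a minimal sketch of how the proposed behavior might work: a registry records who locked each dataset and when, and a write guard raises an error naming every locked target. All names here (LockRegistry, check_writable, DatasetLockedError) are hypothetical, not part of any existing product API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Lock:
    user: str
    locked_at: datetime

class DatasetLockedError(Exception):
    """Raised when an operation targets one or more locked datasets."""

class LockRegistry:
    """Tracks which datasets are locked, by whom, and when."""

    def __init__(self):
        self._locks = {}  # dataset_id -> Lock

    def lock(self, dataset_id, user):
        # Record the locking user and timestamp for the unlock modal.
        self._locks[dataset_id] = Lock(user, datetime.now(timezone.utc))

    def unlock(self, dataset_id):
        self._locks.pop(dataset_id, None)

    def check_writable(self, dataset_ids):
        """Raise before any modifying operation if a target is locked."""
        held = {d: self._locks[d] for d in dataset_ids if d in self._locks}
        if held:
            details = "; ".join(
                f"{d} (locked by {lk.user} at {lk.locked_at:%Y-%m-%d %H:%M} UTC)"
                for d, lk in held.items()
            )
            raise DatasetLockedError(
                f"Cannot modify locked dataset(s): {details}. "
                "Unlock them before trying again."
            )

registry = LockRegistry()
registry.lock("sales_2024", "alice")
try:
    registry.check_writable(["sales_2024", "inventory"])
except DatasetLockedError as err:
    print(err)  # names the locked dataset, the user, and the lock time
```

A pipeline runner could call the same check on every downstream output before starting a build, which is what would enable the "warn first, then build from the locked dataset down" option.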