Using swap memory on Design Node

tgb417

I'm working with a small DSS Design Node with 16 GB of RAM on AWS.

The models I'm trying to build are running out of available memory and crashing. I can cut the size of the sample I'm working with, but to get things to run I'm down to roughly 1/3 of 1% (about 0.33%) of the data in my model-building process (the datasets range from 300,000 to 1,000,000 records). As a workaround, I can build the model incrementally, feeding it a different sample of records on each run (see the sketch below), but it's clear to me that this is not typical best practice.
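
For context, the incremental approach I'm describing looks roughly like this (a minimal sketch, assuming a scikit-learn estimator that supports partial_fit; the file path and column names are placeholders, not my real schema):

```python
import pandas as pd
from sklearn.linear_model import SGDClassifier

# Placeholder path and column names -- assumptions for illustration only.
CSV_PATH = "records.csv"
FEATURES = ["feature_1", "feature_2", "feature_3"]
TARGET = "label"
CLASSES = [0, 1]  # partial_fit needs the full set of classes up front

model = SGDClassifier()

# Stream the file in chunks so only one chunk sits in RAM at a time,
# updating the same model with each new sample of records.
for chunk in pd.read_csv(CSV_PATH, chunksize=50_000):
    model.partial_fit(chunk[FEATURES], chunk[TARGET], classes=CLASSES)
```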

So I need more working memory. I'm working with a non-profit, so budget is a consideration.

Has anyone used swap memory for data science work with a Python model? Does anyone have good news stories, or horror stories, about cases where swap did or did not work?
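
For anyone who wants to reason about it: I can at least see whether a build is spilling into swap with a quick check like this (a sketch, assuming the psutil package is installed in the code environment; it isn't part of DSS itself):

```python
import psutil

# Snapshot system memory and swap; run this alongside a training job
# to see whether the model build is spilling into swap.
vm = psutil.virtual_memory()
sw = psutil.swap_memory()

print(f"RAM:  {vm.available / 2**30:.1f} GiB free of {vm.total / 2**30:.1f} GiB")
print(f"Swap: {sw.used / 2**30:.1f} GiB used of {sw.total / 2**30:.1f} GiB "
      f"({sw.percent:.0f}%)")
```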

Thanks for any insights you can share.

