Has anyone worked out how to get GPU-accelerated neural networks set up on a Macintosh? I'm seeing some discussions of PlaidML as a potential way to get this done. Maybe not for production, but at least as a way to prototype.
I'm afraid the GPUs in MacBooks are not really designed for heavy neural network training. While their AMD GPUs have substantial processing power, you will find little support for them in deep learning libraries (TensorFlow/PyTorch). I do use my MacBook for training shallow networks on small training sets, and it's good for small prototypes. I use the CPU release of TensorFlow.
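To give an idea of the kind of prototype that runs comfortably on a MacBook with CPU-only TensorFlow, here is a minimal sketch: a shallow network trained on a small synthetic dataset. The dataset shape, layer sizes, and epoch count are all illustrative choices, not recommendations.

```python
import numpy as np
import tensorflow as tf

# Small synthetic binary-classification set: 1,000 samples, 20 features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

# A shallow network: one hidden layer is plenty for a CPU prototype
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# A few epochs finish in seconds on a laptop CPU at this scale
history = model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(f"final training loss: {history.history['loss'][-1]:.4f}")
```

At this data and model size, the GPU question is moot; CPU training is more than fast enough for iterating on an idea.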
Note that this applies specifically to the MacBook product line. The iMac Pro and Mac Pro lines have very powerful GPU cards, but again, you'll run into the AMD support challenge.
In general, for large DL workloads, I advise using Linux cloud instances with Nvidia GPUs. In Dataiku, you can even spin up GPU-enabled Kubernetes clusters on demand, on the three main clouds (AWS, Azure, GCP).
Another option for recurring workloads is to use specific Deep Learning VM images, which are available from all cloud providers.
Hope it helps,
As I continue my research, I've learned that some people are using PlaidML on some of the MacBook GPUs. That sounded good to me for a while. However, I've learned that it requires OpenCL 1.2 or later, and when researching the new MacBook Pro 16, I discovered that Apple may have dropped support for OpenCL on this MacBook Pro. Ugh...
Has anyone out there created an AWS, Azure, or GCP instance for experimentation?
Which is the most economical as a personal out-of-pocket expense?
This depends on the characteristics of your deep learning workload and your appetite for DIY Linux operations. Let me highlight a few possible routes you can take.
Our Managed Kubernetes feature facilitates AWS/Azure/GCP experimentation for transient workloads. DSS can automatically start, stop, and manage multiple clusters. The setup of GPUs, drivers, etc. is greatly simplified; there is no need to install anything yourself directly on the machines.
The economics are determined by each cloud's pricing. Usually, the cost is a function of the region, the instance type (including GPU type), and the time spent processing. There is an economic trade-off between GPU type and processing time: some GPUs are more expensive per hour but compute much faster. You also need to take GPU memory into account, since some workloads (large images, 3D scans) require very large GPU memory.
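The trade-off between hourly price and processing time can be made concrete with a little arithmetic. The rates and speedup below are made-up numbers for illustration, not real cloud prices:

```python
def job_cost(hourly_rate, hours):
    """Total cost of a training job billed per hour."""
    return hourly_rate * hours

# Hypothetical job: 20 hours on a cheaper GPU, vs. a GPU that costs
# more per hour but (in this example) finishes the same job 5x faster.
cheap_gpu = job_cost(hourly_rate=0.90, hours=20)  # slower, cheaper per hour
fast_gpu = job_cost(hourly_rate=3.00, hours=4)    # faster, pricier per hour

print(f"cheap GPU total: ${cheap_gpu:.2f}")  # $18.00
print(f"fast GPU total:  ${fast_gpu:.2f}")   # $12.00
```

In this (invented) scenario the pricier GPU is cheaper overall, and it also returns results faster, which is worth something on its own when you are iterating.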
Otherwise, a good option for recurring workloads is to rent a dedicated GPU instance on the cloud and use a pre-built Deep Learning image.
Once the machine has spun up from the image, you can install DSS on it and start using your GPUs. Our Deep Learning visual workbench can use any Nvidia GPU with the CUDA/cuDNN drivers (up to CUDA 9 / cuDNN 7 as of DSS 7.0.1). You can also work in notebooks, with full flexibility and no limitations on the GPU type or driver versions.
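A quick sanity check once the instance is up (a sketch, assuming TensorFlow is installed on the machine): list the GPUs TensorFlow can actually see. On a correctly configured CUDA/cuDNN box this should show at least one device; on a CPU-only machine it simply returns an empty list.

```python
import tensorflow as tf

# Returns a list of PhysicalDevice objects; empty if no GPU is visible
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {gpus}")
```

If the list comes back empty on a GPU instance, the usual suspects are missing Nvidia drivers or a CUDA/cuDNN version mismatch with your TensorFlow build.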
Note that there is a large number of other cloud providers with GPU offerings: Alibaba Cloud, OVH, Linode, Lambda Labs, etc. It goes beyond the scope of Dataiku to list them all.
The economics of this option are a bit different; you can check each cloud provider's per-month pricing.
Yet another option, which may be the most fun (if you like DIY computing), would be to buy or repurpose an old gaming desktop with an Nvidia card. Install Linux on it, install the GPU drivers, install DSS, and have fun. You can even go the extra mile: install Docker on it and use it from your MacBook's DSS as a remote Docker daemon. That's a bit of a rabbit hole, but a great learning opportunity if you have the time.
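The remote-daemon setup can be sketched as a small config fragment. This assumes Docker 18.09+ on both machines and SSH access to the desktop; `user@gaming-box` and the CUDA image tag are placeholders to adapt to your setup:

```shell
# Point the local Docker client at the repurposed desktop over SSH
export DOCKER_HOST=ssh://user@gaming-box

# Run a CUDA container on the remote machine's Nvidia GPU
# (--gpus requires Docker 19.03+ with the nvidia-container-toolkit installed)
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

With `DOCKER_HOST` set, every `docker` command from the MacBook executes on the desktop, so the GPU workload runs there while you keep working locally.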
I hope that helped.