Hardware for Machine Learning
Hardware for Deep Learning
- Hardware for Deep Learning. Part 1: Introduction - by Grigory Sapunov - Intento
- Hardware for Deep Learning. Part 2: CPU - by Grigory Sapunov - Intento
- Hardware for Deep Learning. Part 3: GPU - by Grigory Sapunov - Intento
- Hardware for Deep Learning. Part 4: ASIC - by Grigory Sapunov - Jan, 2021 - Intento
TPU vs GPU
- Turning a TPU into a GPU-like accelerator: PyTorch/XLA compiles PyTorch programs to run on TPU hardware, so the TPU v3-8 can be used from PyTorch much like a regular GPU. The TPU v3-8 included in Google Colab's free tier offers compute power roughly equivalent to eight Tesla V100 GPUs, and possibly to six RTX 3090 GPUs. TPUs are ~5x as expensive as GPUs ($8.00/hr for a Google TPU v3 and $4.50/hr for a TPU v2 with "on-demand" access on GCP).
- CPUs are recommended for their versatility and large memory capacity; GPUs are a great alternative when you want to speed up a variety of data science workflows; and TPUs are best when you specifically want to train a machine learning model as fast as possible. In Google Colab, the CPU type varies between sessions (Intel Xeon chips of the Skylake, Broadwell, or Haswell generations). In one comparison, the GPU runtime provided an NVIDIA P100 with a 2-core Intel Xeon 2 GHz CPU and 13 GB RAM, and the TPU runtime provided a TPU v3 (8 cores) with a 4-core Intel Xeon 2 GHz CPU and 16 GB RAM.
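Since the choice between TPU, GPU, and CPU often comes down to what the current runtime actually provides, a minimal sketch of runtime device detection can help. This is an illustrative helper, not from the articles above: it assumes the standard package names `torch` and `torch_xla`, and only probes for their presence rather than initializing hardware.

```python
# Sketch: report which PyTorch accelerator backend the current runtime
# could use, preferring TPU (via PyTorch/XLA) over GPU over CPU.
# Hypothetical helper for illustration; package names are the standard
# torch / torch_xla distributions.
from importlib import util


def best_device() -> str:
    """Return 'xla' (TPU), 'cuda' (GPU), or 'cpu' by cheapest-first probing."""
    if util.find_spec("torch_xla") is not None:
        # PyTorch/XLA exposes TPU cores as XLA devices
        # (e.g. via torch_xla.core.xla_model.xla_device()).
        return "xla"
    if util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"


if __name__ == "__main__":
    print(f"Best available device type: {best_device()}")
```

On a Colab TPU runtime this would report `xla`; on a GPU runtime, `cuda`; and on a plain CPU machine, `cpu`, so the same notebook can adapt without manual configuration.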
Free TPU
- TensorFlow Research Cloud - Free TPU : Accelerate your cutting-edge machine learning research with free Cloud TPUs.
AI/ML Cloud Computing
- Types of Cloud Computing—an Extensive Guide on Cloud Solutions and Technologies in 2021
- What are the Benefits of Machine Learning in the Cloud? - Cloud Academy
- Cloud Platform Comparison