A study in Boston optimized a set of machine learning algorithms on a GPU. It compared the performance of two popular GPU integration tools for Python, namely Cython and PyCUDA. By exploiting the GPU's parallel-processing advantages, the authors reported speedups of 20 to 200 times over the multi-threaded, CPU-based implementations in Scikit-learn (a machine learning library for Python). The study also specifically addresses the need for GPUs given the growing sizes of emerging datasets:
Image by Tumisu (https://pixabay.com/users/tumisu-148124/) from Pixabay.com
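As a rough illustration (not taken from the paper), the speedups come from workloads that are data-parallel: the same arithmetic applied independently across many elements, which a PyCUDA kernel maps onto thousands of GPU threads. The sketch below uses NumPy on the CPU to show one such workload, a pairwise squared-distance computation of the kind used by algorithms such as k-nearest neighbors; the function name and array sizes are illustrative.

```python
import numpy as np

def pairwise_sq_dists(X, Y):
    """Squared Euclidean distances between rows of X and rows of Y.

    Uses the identity ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y,
    computed for all (row of X, row of Y) pairs at once. Each output
    cell is independent of the others -- exactly the structure a GPU
    kernel would parallelize across threads.
    """
    x2 = np.sum(X * X, axis=1)[:, None]   # shape (n, 1)
    y2 = np.sum(Y * Y, axis=1)[None, :]   # shape (1, m)
    return x2 + y2 - 2.0 * (X @ Y.T)      # shape (n, m)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))   # illustrative sizes only
Y = rng.standard_normal((5, 3))
D = pairwise_sq_dists(X, Y)
print(D.shape)  # (4, 5)
```

On the CPU this is limited by a handful of cores; on a GPU, each of the n*m output cells can be assigned to its own thread, which is where the order-of-magnitude gains reported in the study come from.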
For more information, refer to the research paper here:
P. Reilly, L. Yu, and D. Kaeli, "Accelerating Machine Learning Algorithms in Python," Boston Area Architecture Workshop, 2017:
www1.coe.neu.edu/~ylm/Files/ml_pycuda_barc2017...