The Beauty and the Beast: Python on GPUs
11-03, 11:00–11:40 (America/New_York), Winter Garden (Room 5412)

Python promises productivity and GPUs promise performance, but if you naively fire up a Python program on a GPU, you will often find that it runs slower than on a CPU. Over the last decade, the Python ecosystem has embraced GPUs through numerous libraries and techniques. We survey what works well on GPUs and some of the libraries you can use to accelerate a Python workflow on a GPU.


Python's origin story as a mere teaching language often gives folks the perception that high-performance computing is out of reach. This perception has been upended by the sheer number of ML frameworks that use Python as their main language. While large companies may have the funds to build and maintain such libraries, using a GPU to speed up your computing is within reach of any Python programmer.

In this talk we present the major libraries and techniques used by practitioners of GPU computing. These techniques include automated JIT compilation, the use of mathematics libraries, and the integration of low-level kernels. Newer techniques such as kernel fusion and tiling have recently been incorporated into numerous libraries.
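
To make these ideas concrete, here is a minimal sketch of two of the approaches mentioned above: automated JIT compilation with Numba and kernel fusion with CuPy. It assumes numba, cupy, and a CUDA-capable GPU are available, and it is illustrative rather than the talk's actual material.

    # Automated jitting with Numba: a Python function compiled to a CUDA kernel.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)              # global thread index
        if i < x.size:
            out[i] = a * x[i] + y[i]

    n = 1_000_000
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(x)
    threads = 256
    blocks = (n + threads - 1) // threads
    saxpy[blocks, threads](2.0, x, y, out)   # Numba handles host/device transfers

    # Kernel fusion with CuPy: several elementwise ops compiled into one GPU kernel.
    import cupy as cp

    @cp.fuse()
    def magnitude(a, b):
        return cp.sqrt(a * a + b * b)

    gm = magnitude(cp.asarray(x), cp.asarray(y))   # runs as a single fused kernel

Either way, the code stays in Python while the heavy lifting runs on the GPU.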

Come learn about how Python codebases can use GPUs to create some amazing applications.


Prior Knowledge Expected

No previous knowledge expected

I lead CUDA Python Product Management, working closely with RAPIDS, Omniverse, and Math Libraries to unify NVIDIA's foundational offering for Python developers and the Python community.

I received my Ph.D. from the University of Chicago in 2010, where I built domain-specific languages to generate high-performance code for physics simulations with the PETSc and FEniCS projects. After spending a brief time as a research professor at the University of Texas and the Texas Advanced Computing Center, I have been a serial startup executive, including serving as a founding team member of Anaconda.

I am a leader in the Python open data science community (PyData). A contributor to Python's scientific computing stack since 2006, I am most notably a co-creator of the popular Dask distributed computing framework, the Conda package manager, and the SymPy symbolic computing library. I was a founder of the NumFOCUS foundation, where I served as president and director, leading the development of programs supporting open-source projects such as Pandas, NumPy, and Jupyter.