Algorithms based on kernel methods play a central role in modern machine learning and nonparametric statistics. At the core of these algorithms lies a set of linear algebra operations on matrices of kernel functions evaluated at the training and testing data. These operations range from the simple matrix-vector product to more complex matrix decompositions and iterative formulations built on them. They typically scale quadratically in memory and quadratically or cubically in computation, so kernel methods handle growing data sizes poorly. In this paper, we address this lack of scalability by parallelizing kernel algorithms on a GPU.
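As a concrete illustration of the core operation involved, the following is a minimal sketch (not the paper's implementation) of a kernel matrix-vector product on a GPU: one CUDA thread per output entry evaluates row i of a Gaussian kernel matrix on the fly and accumulates its dot product with the input vector, exposing the O(n^2) work to massive parallelism. The kernel choice, data layout, and launch parameters here are illustrative assumptions.

```cuda
// Sketch: Gaussian kernel matrix-vector product y = K v, where
//   K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
// One thread computes one output entry y_i; K is never stored.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void gaussian_kernel_matvec(const float *x, const float *v,
                                       float *y, int n, int d, float sigma) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float acc = 0.0f;
    for (int j = 0; j < n; ++j) {            // row i of K, computed on the fly
        float dist2 = 0.0f;
        for (int k = 0; k < d; ++k) {
            float diff = x[i * d + k] - x[j * d + k];
            dist2 += diff * diff;
        }
        acc += expf(-dist2 / (2.0f * sigma * sigma)) * v[j];
    }
    y[i] = acc;
}

int main() {
    const int n = 1024, d = 8;               // illustrative problem size
    const float sigma = 1.0f;
    size_t xb = n * d * sizeof(float), vb = n * sizeof(float);

    // Host data: deterministic fill so the example is self-contained.
    float *hx = (float *)malloc(xb), *hv = (float *)malloc(vb),
          *hy = (float *)malloc(vb);
    for (int i = 0; i < n * d; ++i) hx[i] = (float)(i % 7) * 0.1f;
    for (int i = 0; i < n; ++i)     hv[i] = 1.0f;

    float *dx, *dv, *dy;
    cudaMalloc(&dx, xb); cudaMalloc(&dv, vb); cudaMalloc(&dy, vb);
    cudaMemcpy(dx, hx, xb, cudaMemcpyHostToDevice);
    cudaMemcpy(dv, hv, vb, cudaMemcpyHostToDevice);

    int threads = 256, blocks = (n + threads - 1) / threads;
    gaussian_kernel_matvec<<<blocks, threads>>>(dx, dv, dy, n, d, sigma);
    cudaMemcpy(hy, dy, vb, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);
    cudaFree(dx); cudaFree(dv); cudaFree(dy);
    free(hx); free(hv); free(hy);
    return 0;
}
```

Even this naive formulation avoids the O(n^2) memory footprint of materializing K, which is one reason the matrix-vector product is a natural first target for GPU parallelization.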