The fields of kernel approximation and Gaussian processes have evolved, mostly independently, over the past few decades into mature subjects in numerical analysis and statistics/applied probability, respectively. The two fields have natural affinities, both relying on the common mathematical machinery of reproducing kernel Hilbert spaces and positive definite functions, and both treating a diverse assortment of scientific applications. A hallmark problem for kernels is the modeling of irregularly sampled, deterministic data, but kernels are also heavily employed in simulating a range of phenomena from atmospheric flows to financial forecasting; a key feature of kernel approximation is its ability to deliver solutions to computational problems without requiring costly underlying geometric structures such as meshes. Gaussian processes have traditionally focused on regression, classification, and learning the underlying probability distribution from random samples, although recent activity has turned to interpretable and explainable machine learning, inverse problems, and partial differential equations with stochastic parameters. Despite their different languages and different objects of interest, the fields share a similar collection of important, challenging theoretical and computational problems.
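The shared machinery alluded to above can be made concrete in a few lines: given noise-free data at irregularly sampled sites, the kernel interpolant of numerical analysis and the Gaussian-process posterior mean are the same linear-algebra expression, built from the same kernel matrices. The sketch below is illustrative only; the choice of a Gaussian (RBF) kernel, the sample sites, the target function, and the jitter term are all assumptions made for the example, not details from the text.

```python
import numpy as np

# Illustrative sketch (assumed setup): a Gaussian (RBF) kernel on
# irregularly spaced 1-D sample sites, with noise-free data.
def rbf(x, y, ell=1.0):
    """Gaussian kernel matrix k(x_i, y_j) = exp(-(x_i - y_j)^2 / (2 ell^2))."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * ell**2))

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0.0, 2 * np.pi, 8))  # irregular sample sites
y_train = np.sin(x_train)                          # deterministic data

# Small jitter on the diagonal for numerical stability of the solve.
K = rbf(x_train, x_train) + 1e-10 * np.eye(len(x_train))
coeffs = np.linalg.solve(K, y_train)               # interpolant weights K^{-1} y

x_test = np.array([1.0, 3.0])
# Kernel interpolant s(x) = k(x, X) K^{-1} y -- identical, term by term,
# to the Gaussian-process posterior mean under the same kernel.
pred = rbf(x_test, x_train) @ coeffs

# The GP viewpoint adds a posterior variance from the same kernel matrices:
# k(x, x) - k(x, X) K^{-1} k(X, x).
var = rbf(x_test, x_test).diagonal() - np.einsum(
    "ij,ji->i", rbf(x_test, x_train), np.linalg.solve(K, rbf(x_train, x_test))
)
```

The interpolant reproduces the data exactly at the sample sites, while the variance quantifies uncertainty away from them; the two fields thus read the same computation as approximation and as inference, respectively.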