Demands of resolution and fidelity have driven the performance of simulations of first-principles mathematical models in science and engineering from MegaFlop/s to ExaFlop/s, and their datasets from MegaBytes to ExaBytes (12 orders of magnitude in each), over the past four decades. Over the most recent decade, the application of machine learning to science and engineering systems has created a similar demand for performance and associated storage. The lower levels of the software stack created for simulation have proved immediately useful for machine learning at scale. However, the higher levels of the simulation software stack have not yet fulfilled their potential to lift today's dominant algorithms for learning and inference above relatively “brute force” implementations, resulting in massive costs for facilities and energy that slow progress and restrict access to the research frontier for many. At the same time, hardware optimized for machine learning applications possesses as yet unrealized potential for traditional simulation. Increasingly, the same science or engineering application is more effectively addressed by simulation and learning working in tandem than by either alone, driving a confluence of two formerly distinct research communities. This “hot topics” weekend aims to capitalize on that confluence by highlighting opportunities for cross-fertilization in both directions.