Large-scale, high-dimensional data sets are becoming ubiquitous in modern society, particularly in physical, biomedical, and social applications. For example, in the problem of predicting thyroid malignancy from biopsy images, the images typically measure about 150,000 by 100,000 pixels, i.e., on the order of $10^{10}$ dimensions, which limits the application of many existing methods. There is an urgent need for accurate and efficient mathematical and statistical tools for the analysis and engineering of high-dimensional data sets. The proposed 5-day workshop will bring together researchers from different disciplines to collaboratively address the foundational computational and theoretical challenges in high-dimensional data analysis. The workshop is designed around the simple question ``how to accurately and efficiently process large-scale data in 10+ dimensions''. Invited participants will review existing mathematical and statistical tools for high-dimensional data sets, including Monte Carlo methods, randomized algorithms, dimension reduction, sparse grids, network analysis, and interpolation-based deep neural networks, comparing their performance and addressing their limitations. Participants will then collaboratively tackle the current challenges in high-dimensional data analysis and design new strategies, both by combining existing tools and by introducing new methodologies for problems in 10+ dimensions.
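As a standard illustration of why one of the listed tools scales to 10+ dimensions (a textbook fact, not a result claimed by the proposal): for $f$ with finite variance $\sigma^2(f)$ on $[0,1]^d$ and i.i.d. uniform samples $x_1,\dots,x_N$, the Monte Carlo estimator satisfies
\[
\hat{I}_N = \frac{1}{N}\sum_{i=1}^{N} f(x_i),
\qquad
\mathbb{E}\Big[\big(\hat{I}_N - \textstyle\int_{[0,1]^d} f\,dx\big)^2\Big]^{1/2}
= \frac{\sigma(f)}{\sqrt{N}},
\]
so the error decays at a rate independent of the dimension $d$, whereas a tensor-product grid with $m$ points per axis requires $m^d$ function evaluations; this is precisely the curse of dimensionality that the workshop aims to address.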