The increasing amount of data being collected, stored, and analyzed creates a need for efficient, scalable, and robust methods to handle this data. Representation learning, i.e., the practice of leveraging neural networks to obtain generic representations of data objects, has proven effective for various applications over data modalities such as images and text. More recently, representation learning has shown impressive initial capabilities on structured data (e.g., relational tables in databases) for a limited set of tasks in data management and analysis, such as data cleaning, insight retrieval, and data analytics. Most of these applications have traditionally relied on heuristics and statistics, which are limited in robustness, scalability, and accuracy. The ability to learn abstract representations across tables has unlocked new opportunities, such as pretrained models for data augmentation and machine learning, that address these limitations. This emerging research area, which we refer to as Table Representation Learning (TRL), is receiving increasing interest from industry as well as academia, in particular from the data management, machine learning, and natural language processing communities. This growing interest reflects the high potential impact of TRL in industry, given the abundance of tables in the organizational data landscape, the wide range of high-value applications relying on tables, and the early state of TRL research to date.