In many real-world situations, the input to an algorithm is revealed piecemeal over time, and algorithmic decisions must be made without knowledge of future inputs. This has given rise to the field of online algorithms. Traditionally, the performance of an online algorithm is measured by comparing the quality of the solution produced by the algorithm to that of an offline optimal solution computed with full knowledge of the input. This framework, called competitive analysis, has led to a beautiful theory of worst-case analysis for online algorithms over the last few decades. At the same time, it has become painfully obvious that this model is overly pessimistic in many situations: it yields strong lower bounds that lump together algorithms with markedly different empirical performance. In recent years, there has been an increasing push to go beyond worst-case competitive analysis of online algorithms, and several models and frameworks have been proposed that have attracted much research activity. This Dagstuhl Seminar seeks to bring together the leaders in online algorithms research to discuss and debate these approaches, helping chart the direction of the field for the years ahead.
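To make the notion of competitive analysis concrete, the following is a minimal illustrative sketch (not part of the seminar text) using the classic ski-rental problem: renting skis costs 1 per day, buying costs B, and the number of ski days is revealed only one day at a time. The break-even strategy below, and all function names, are standard textbook material chosen here for illustration; it rents until the rent paid would equal the purchase price, then buys, and never pays more than (2B - 1)/B < 2 times the offline optimum.

```python
def break_even_cost(num_ski_days: int, buy_price: int) -> int:
    """Total cost of the online break-even strategy: rent each day,
    and buy on day `buy_price` if skiing continues that long."""
    if num_ski_days < buy_price:
        return num_ski_days                # rented every day
    return (buy_price - 1) + buy_price     # rented B-1 days, then bought


def offline_optimal_cost(num_ski_days: int, buy_price: int) -> int:
    """Offline optimum: with full knowledge of num_ski_days,
    rent throughout if that is cheaper, otherwise buy on day one."""
    return min(num_ski_days, buy_price)


if __name__ == "__main__":
    B = 10
    # The competitive ratio is the worst case over all inputs of
    # (online cost) / (offline optimal cost); here it is (2B-1)/B.
    worst = max(
        break_even_cost(d, B) / offline_optimal_cost(d, B)
        for d in range(1, 200)
    )
    print(worst)  # (2*B - 1) / B = 1.9
```

The ratio 2 - 1/B is tight for deterministic algorithms on this problem, which illustrates the kind of worst-case guarantee, and worst-case pessimism, that motivates the beyond-worst-case models discussed in the seminar.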