In source coding, ever since Davisson's seminal paper [1], various redundancy and regret measures have been thoroughly analyzed, from pointwise redundancy to average and maximal minimax and maxmin regrets. Similarly, in online learning there are various formulations of regret, grouped into fixed-design (when the data is known in advance) and sequential. This position paper gives a brief overview of current regret formulations and provides a thorough comparison of the sequential and fixed-design settings. Moreover, inspired by the source coding literature, new classes of regret, from average to worst-case minimax, are introduced. In particular, it is shown that the fixed-design and sequential regrets are equal in the worst-case and average senses when the data is known in advance; in the maximal sense (when maximizing over the data), however, the former can be significantly smaller than the latter. Specifically, this paper proves that under logarithmic loss (i) for linear predictors the two maximal formulations are of the same order, while (ii) for linear threshold predictors the fixed-design maximal regret is logarithmically smaller than the sequential one.
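To fix ideas, the two maximal regrets being compared can be sketched as follows; the notation ($r^{\mathrm{seq}}_n$, $r^{\mathrm{fd}}_n$, $\hat p$) is illustrative rather than the paper's own. For a class $\mathcal{H}$ of predictors, feature sequence $x^n = (x_1,\ldots,x_n)$, label sequence $y^n$, and logarithmic loss, the sequential maximal minimax regret is
\[
r^{\mathrm{seq}}_n(\mathcal{H}) \;=\; \min_{\hat p}\,\max_{x^n,\,y^n}\Big[\sum_{t=1}^{n} -\log \hat p\big(y_t \,\big|\, x^t, y^{t-1}\big) \;-\; \min_{h\in\mathcal{H}} \sum_{t=1}^{n} -\log h\big(y_t \,\big|\, x_t\big)\Big],
\]
whereas in the fixed-design formulation the features $x^n$ are revealed in advance, so the predictor may be tuned to them:
\[
r^{\mathrm{fd}}_n(\mathcal{H}) \;=\; \max_{x^n}\,\min_{\hat p}\,\max_{y^n}\Big[\sum_{t=1}^{n} -\log \hat p\big(y_t \,\big|\, x^n, y^{t-1}\big) \;-\; \min_{h\in\mathcal{H}} \sum_{t=1}^{n} -\log h\big(y_t \,\big|\, x_t\big)\Big].
\]
Since the minimization over predictors moves inside the maximization over $x^n$, one always has $r^{\mathrm{fd}}_n(\mathcal{H}) \le r^{\mathrm{seq}}_n(\mathcal{H})$; the results above quantify when this inequality is tight (linear predictors) and when the gap is logarithmic in $n$ (linear threshold predictors).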