In this work, the authors focus on the typical $\ell_2$-regularized empirical risk minimization problem, which arises in machine learning applications such as image recognition, distributed resource allocation, and distributed power control. The problem involves $n$ computing nodes, each holding $m$ local training examples. Mathematically, it can be expressed as follows: $$ \min_{\theta\in\Bbb{R}^d}\sum_{i=1}^nf_i(\theta), $$ where $$ f_i(\theta)=\sum_{j=1}^mf_{ij}(\theta)+\frac{\sigma_i}{2}\left\| \theta\right\|^2\quad \text{for}\ i=1,\dots,n. $$ The function $f_{ij}$ represents the loss for the $j$th training example of node $i$ and is assumed to be convex and $L_{ij}$-smooth. \par The authors propose ADFS, an efficient accelerated decentralized stochastic algorithm for finite sums, which uses local stochastic proximal updates and decentralized communications between nodes. In this algorithm, arbitrary block sampling introduces flexibility, allowing the use of global synchronous communications as well as local pairwise communications, or anything in between. They then apply the general ADFS algorithm to a well-chosen dual problem based on an augmented graph approach and illustrate the improvement of ADFS over state-of-the-art decentralized approaches with experiments. In the numerical experiments, ADFS is compared with the multi-step dual accelerated (MSDA) method [K.~Scaman et al., in {\it Proceedings of the 34th International Conference on Machine Learning}, PMLR 70, 2017, pp. 3027--3036], as well as with the methods of [A.~Defazio, in {\it Advances in Neural Information Processing Systems} 29, 2016] and [Z.~Shen et~al., in {\it Proceedings of the 35th International Conference on Machine Learning}, PMLR 80, 2018, pp. 4624--4633].
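For concreteness, a standard instance of this setting (given here purely as an illustration; the formulation above does not prescribe a particular loss) is binary classification with the logistic loss: if node $i$ holds feature vectors $x_{ij}\in\Bbb{R}^d$ with labels $y_{ij}\in\{-1,1\}$, one may take $$ f_{ij}(\theta)=\log\bigl(1+\exp(-y_{ij}x_{ij}^{\top}\theta)\bigr), $$ which is convex and $L_{ij}$-smooth with $L_{ij}=\|x_{ij}\|^2/4$, while the regularization term makes each $f_i$ $\sigma_i$-strongly convex, so the stated assumptions are satisfied.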