Recently, group sparse representation, which builds on the assumption that coefficient variables are correlated, has attracted much attention due to its effectiveness and robustness in dictionary learning. Traditional group sparse representation methods use the $\ell_{2,1}$ norm to enforce the estimation of models with joint sparsity patterns, which often leads to over-penalization: large coefficients in the selected groups are shrunk along with the noise. To address this issue, we replace the $\ell_{2,1}$ norm with a non-convex surrogate of the $\ell_{2,0}$ norm, and give a general solver for the corresponding optimization problem. Experimental results confirm the effectiveness of the proposed method.
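The over-penalization can be seen by comparing proximal operators. A minimal sketch, assuming group soft thresholding as the $\ell_{2,1}$ proximal operator and group hard thresholding as a stand-in for an $\ell_{2,0}$-type penalty (the abstract does not specify the actual surrogate used, so the thresholding rules and group layout below are illustrative):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Prox of the l_{2,1} norm: every surviving group is shrunk
    by lam / ||x_g||, so large (correct) groups are also penalized."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = x[g] * (1.0 - lam / norm)
    return out

def group_hard_threshold(x, groups, lam):
    """Prox of lam * (number of nonzero groups): a group is kept
    unchanged if its energy exceeds 2*lam, otherwise zeroed, so
    surviving groups suffer no shrinkage (no over-penalization)."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        if np.linalg.norm(x[g]) ** 2 > 2.0 * lam:
            out[g] = x[g]
    return out

# Hypothetical example: one strong group, one weak (noise) group.
x = np.array([3.0, 4.0, 0.1, 0.2])
groups = [np.arange(0, 2), np.arange(2, 4)]
soft = group_soft_threshold(x, groups, lam=1.0)  # strong group shrunk
hard = group_hard_threshold(x, groups, lam=1.0)  # strong group kept intact
```

Both operators zero out the weak group, but only soft thresholding biases the strong group's coefficients toward zero, which is the over-penalization that motivates the non-convex surrogate.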