Lasso regression is a supervised linear regression method in the field of artificial intelligence (AI). The idea is to add an L1 regularization term to linear regression in order to achieve two purposes: the first is to overcome the collinearity of ordinary linear regression, i.e., the non-invertibility of the design matrix in the closed-form solution; the second is to obtain a sparse solution, which is more efficient to compute. Lasso regression is thus a least-squares term plus an L1 regularization term. Both are convex functions, and the sum of convex functions is still convex, so the gradient descent (GD) method is generally used to solve such convex optimization problems. However, because the L1 regularization term in Lasso regression is not differentiable, gradient information cannot be obtained, and the GD method cannot solve the Lasso regression problem properly. In this paper, a t-distribution coot optimization algorithm is proposed to overcome the above shortcomings and difficulties in Lasso regression, and the effectiveness of the proposed algorithm is demonstrated by experiments.
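The Lasso objective described above can be sketched as follows; this is a minimal illustration in NumPy, not the paper's proposed algorithm, and the function name `lasso_objective` is a hypothetical label introduced here for clarity. The comment marks the L1 term whose non-differentiability at zero motivates the paper.

```python
import numpy as np

def lasso_objective(w, X, y, lam):
    """Lasso cost: least-squares data term plus L1 penalty.

    The first term is smooth and convex; the second term,
    lam * sum(|w_i|), is convex but not differentiable at
    w_i = 0, which is why plain gradient descent cannot be
    applied directly, as the abstract notes.
    """
    residual = X @ w - y
    return 0.5 * np.sum(residual ** 2) + lam * np.sum(np.abs(w))

# Small illustrative evaluation: with X = I and w = y, the
# residual vanishes and only the L1 penalty remains.
X = np.eye(2)
y = np.array([1.0, 0.0])
w = np.array([1.0, 0.0])
print(lasso_objective(w, X, y, lam=1.0))  # 1.0
```

A sparse solution here means many entries of `w` are driven exactly to zero by the L1 penalty, which is what makes the fitted model cheaper to store and evaluate.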