Recent successes in human-computer gaming AI have achieved super-human performance in sophisticated confrontation games. Since a Nash equilibrium is difficult to compute even in two-player games, multiplayer poker has long been a challenging problem in the field of Artificial Intelligence. From the perspective of game theory, pursuing diverse policies is necessary in non-stationary environments, a need deeply rooted in the non-transitive structure of such games. In this paper, in order to provide a tractable solution for multiplayer poker, we first use the Team-maxmin equilibrium to re-define the solution concept of multiplayer poker without communication during game play. Second, we employ the diverse curriculum learning with neuroevolution (DCLN) method for offline opponent exploitation, aiming to improve the competitiveness and diversity of the multiplayer poker agent when playing against various opponents. Experimental results show that the resulting agent has the ability to evolve and can defeat opponents with different play styles.
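To make the solution concept concrete, a minimal statement of the team-maxmin equilibrium in its standard form is given below; the notation is illustrative and not taken from this paper. For a team of $n$ players who share a common payoff $u_T$ and face a single adversary, a team-maxmin equilibrium is a profile of independent mixed strategies

\[
(x_1^{*},\dots,x_n^{*}) \in \arg\max_{x_1 \in \Delta_1,\dots,x_n \in \Delta_n} \; \min_{y \in \Delta_A} \; u_T(x_1,\dots,x_n,y),
\]

where each team member $i$ randomizes independently over its own action set (no communication, hence no correlation of strategies), $y$ ranges over the adversary's mixed strategies, and $u_T$ is the payoff shared by all team members. Because the team cannot correlate its randomization, this concept is in general distinct from treating the team as a single player in a two-player zero-sum game against the adversary.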