The Transformer deep learning model has been widely used in the field of natural language processing (NLP), and its demand for computing resources keeps growing. However, general-purpose processors (CPUs and GPUs) must flexibly support a variety of tasks and therefore spend excessive hardware resources on generality, so they are not efficient for implementing the Transformer. Consequently, various software optimizations targeting general-purpose processors have been proposed, but under the constraint of maintaining sufficient accuracy, the degree of software optimization is limited. It is therefore necessary to support the Transformer at the hardware level. After analyzing the computational characteristics of the Transformer model, we designed a hardware-friendly instruction set architecture for it based on RISC-V. In addition to the base instructions, for the compute-intensive and general parts of the model, we designed matrix load/store instructions, matrix calculation instructions, a softmax instruction, an activation instruction, and other user-defined instructions according to the RISC-V instruction extension rules. These instructions support matrices of arbitrary size. We deployed the design on an FPGA to realize RISC-VTF, a flexible and efficient custom processor for the Transformer. The design is implemented on a Xilinx Zynq-7000 FPGA, and its resource consumption and performance are analyzed. Compared with traditional common ISAs (Instruction Set Architectures) such as x86, ARM, or MIPS, RISC-VTF provides higher code density and performance efficiency.
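As a rough illustration of the RISC-V instruction extension rules mentioned above, the sketch below packs a user-defined instruction into the "custom-0" opcode space that the RISC-V specification reserves for such extensions, using the standard R-type field layout. The funct3/funct7 values and the "matrix load" semantics shown are illustrative assumptions, not the actual RISC-VTF encodings.

```python
def encode_rtype(funct7: int, rs2: int, rs1: int,
                 funct3: int, rd: int, opcode: int) -> int:
    """Pack the standard RISC-V R-type fields into a 32-bit instruction word."""
    assert 0 <= funct7 < 128 and 0 <= funct3 < 8 and 0 <= opcode < 128
    assert all(0 <= r < 32 for r in (rs2, rs1, rd))
    return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) \
         | (funct3 << 12) | (rd << 7) | opcode

# Opcode 0b0001011 ("custom-0") is reserved by the RISC-V spec for
# user-defined extensions, so custom instructions placed here cannot
# collide with current or future standard instructions.
CUSTOM_0 = 0b0001011

# Hypothetical matrix-load instruction: load a tile whose base address is
# in rs1 and whose shape descriptor is in rs2 into matrix register rd.
# The funct3/funct7 values here are chosen arbitrarily for illustration.
mat_load = encode_rtype(funct7=0b0000001, rs2=2, rs1=1,
                        funct3=0b000, rd=3, opcode=CUSTOM_0)
print(f"0x{mat_load:08x}")
```

The same helper reproduces standard encodings as a sanity check, e.g. `encode_rtype(0, 2, 1, 0, 3, 0b0110011)` yields `0x002081b3`, the well-known encoding of `add x3, x1, x2`.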