At present, some network security defense systems expand their knowledge bases by extracting Cyber Threat Intelligence (CTI) to learn the common attack techniques and procedures of malicious groups. However, this approach carries a potential risk: attackers can spread fake CTI through Open-Source Intelligence (OSINT) platforms to trick defense systems into learning false information. In this article, we focus on how to generate fake CTI text using the GPT-Neo model and show that the generated text is highly convincing. By fine-tuning a general-purpose language model such as GPT-Neo, we can generate text that closely resembles real CTI. Finally, we propose two disinformation detection methods that help eliminate such unreliable content.
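As a rough illustration of the generation step, the following sketch shows how one might prompt a fine-tuned GPT-Neo checkpoint to continue a threat-report opening. The model name (`EleutherAI/gpt-neo-125M`), the block size, and the example prompt are assumptions for illustration, not details from this work; the code uses the Hugging Face `transformers` library rather than the authors' exact pipeline.

```python
# Hypothetical sketch of fake-CTI text generation with GPT-Neo.
# Assumptions (not from this paper): Hugging Face `transformers`,
# the EleutherAI/gpt-neo-125M checkpoint, and an illustrative prompt.

def chunk_token_ids(token_ids, block_size=128):
    """Pack a tokenized CTI corpus into fixed-size causal-LM training
    blocks (standard data prep for fine-tuning), dropping the
    trailing remainder."""
    n_blocks = len(token_ids) // block_size
    return [token_ids[i * block_size:(i + 1) * block_size]
            for i in range(n_blocks)]

def generate_fake_cti(prompt, model_name="EleutherAI/gpt-neo-125M"):
    """Prompt a (fine-tuned) GPT-Neo checkpoint with the opening of a
    threat report and sample a continuation. Requires `transformers`
    and `torch`; downloads the checkpoint on first use."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=60, do_sample=True, top_p=0.9)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

For example, `generate_fake_cti("APT28 delivered a spearphishing attachment that")` would sample a CTI-style continuation; a defense system scraping OSINT feeds could ingest such output as if it were a genuine report.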