Astronomical knowledge entities, such as celestial object identifiers, are crucial for literature retrieval, knowledge graph construction, and other research and applications in astronomy. Traditional methods of extracting knowledge entities from texts face challenges such as high manual effort, poor generalization, and costly maintenance. Consequently, there is a pressing need for more efficient extraction methods. This study explores the potential of pre-trained Large Language Models (LLMs) to perform the astronomical knowledge entity extraction (KEE) task on astrophysical journal articles using prompts. We propose a prompting strategy called Prompt-KEE, which comprises five prompt elements, and design eight combination prompts based on them. Celestial object identifiers and telescope names, two of the most typical astronomical knowledge entities, are selected as the experimental objects. We evaluate four representative LLMs: Llama-2-70B, GPT-3.5, GPT-4, and Claude 2. To accommodate their token limitations, we construct two datasets from 30 articles: full texts and paragraph collections. Using the eight prompts, we test GPT-4 and Claude 2 on the full texts and all four LLMs on the paragraph collections. The experimental results demonstrate that pre-trained LLMs have significant potential to perform KEE tasks on astrophysical journal articles, although their performance varies. Furthermore, we analyze key factors that influence the performance of LLMs in entity extraction and provide insights for future KEE tasks on astrophysical articles using LLMs.