The meta information in scientific literature, including article title, authors, institutions, year, and journal, plays a critical role in providing useful information to research peers. Traditional meta-information extraction methods usually rely on rules and templates. Recently, owing to the rapid development of Large Language Models (LLMs), their application to meta-information extraction from scientific literature has drawn increasing attention. This paper explores and evaluates meta-information extraction from scientific literature using large language models. First, datasets consisting of publications in given academic areas are built for the experiments. Then, the task definition and the evaluation metric (i.e., accuracy rate) are described. Various large language models, as well as a traditional rule-based method, are used to perform the meta-information extraction task. The results obtained with the various LLMs are analyzed and compared.
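To make the task and the accuracy metric concrete, the following minimal sketch (not taken from the paper; all field names, the header layout, and the function names are assumptions) illustrates a traditional rule-and-template baseline that extracts meta fields from a plain-text article header with regular expressions, scored by field-level accuracy, i.e., the fraction of meta fields whose extracted value exactly matches the gold annotation.

```python
import re

def extract_meta(header: str) -> dict:
    """Rule-based baseline: pull meta fields out of a plain-text
    article header with simple regular-expression templates.
    (Illustrative only; real article headers vary widely in layout.)"""
    patterns = {
        "title":   r"Title:\s*(.+)",
        "authors": r"Authors?:\s*(.+)",
        "journal": r"Journal:\s*(.+)",
        "year":    r"Year:\s*(\d{4})",
    }
    return {field: (m.group(1).strip() if (m := re.search(pat, header)) else None)
            for field, pat in patterns.items()}

def field_accuracy(predicted: dict, gold: dict) -> float:
    """Accuracy rate: fraction of fields predicted exactly correctly."""
    correct = sum(predicted.get(field) == value for field, value in gold.items())
    return correct / len(gold)

# Hypothetical header and gold annotation for demonstration.
header = """Title: A Study of Meta Information Extraction
Authors: A. Smith, B. Lee
Journal: Journal of Examples
Year: 2024"""

gold = {
    "title": "A Study of Meta Information Extraction",
    "authors": "A. Smith, B. Lee",
    "journal": "Journal of Examples",
    "year": "2024",
}

print(field_accuracy(extract_meta(header), gold))  # → 1.0
```

An LLM-based extractor would replace `extract_meta` with a prompt asking the model to return the same fields, while the evaluation in `field_accuracy` stays unchanged, which is what allows the rule-based and LLM approaches to be compared on one metric.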