This paper aims to elucidate the differences between texts written by humans and texts generated by an AI model, focusing on ‘essays’. We analyzed Korean usage patterns across linguistic layers, namely morphological, syntactic, and sociolinguistic features. As for the dataset, we used essays written by high school students in the first to third grades as ‘human-generated essays’ and essays generated by GPT-4 as ‘GPT-4-generated essays’. For the morphological analysis, we used part-of-speech (POS) tag frequency, type-token ratio (TTR), lexical density, and lexical sophistication. Statistical analysis showed that humans and GPT-4 exhibit different POS patterns. We also observed differences between humans and GPT-4 in all lexical features except lexical sophistication. The syntactic analysis was based on basic sentence characteristics and dependency parsing. The results showed that human sentences have a wider range of deviations and distributions for each sentence characteristic than GPT-4’s, indicating that humans write a greater variety of sentences than the AI model. In the dependency parsing, on the other hand, verb phrases (VP) and adjective phrases (AP) appeared relatively frequently in human-generated sentences, whereas noun phrases (NP) and modifiers (MOD) appeared frequently in GPT-4-generated sentences. To examine linguistic features from the perspective of language users, we analyzed gender-biased words and found that GPT-4 rarely used gender-specific preferred words, gender-aversive words, and gender-common aversive words, but frequently used gender-common preferred words. This suggests that GPT-4 has been pre-trained to avoid gender-biased or hateful speech. In the sentiment analysis, GPT-4 showed more positive emotions than humans, which, consistent with the gender-bias results, suggests that GPT-4 has been pre-trained to generate positive rather than negative sentences.
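
As an illustration of two of the lexical measures named above (and not the paper's actual pipeline, which would rest on Korean morphological analysis), TTR and lexical density can be sketched as follows; the whitespace tokenization and the content-word POS set here are simplifying assumptions:

```python
# Illustrative sketch of two lexical measures from the abstract.
# Assumptions (not from the paper): whitespace tokenization and a
# hypothetical content-word tag set; a real Korean pipeline would
# use a morphological analyzer and its own POS tagset.

def type_token_ratio(tokens):
    """Number of unique types divided by total number of tokens."""
    return len(set(tokens)) / len(tokens)

def lexical_density(tagged):
    """Share of content words among all tokens.

    `tagged` is a list of (token, pos) pairs; CONTENT_POS is illustrative.
    """
    CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}
    content = sum(1 for _, pos in tagged if pos in CONTENT_POS)
    return content / len(tagged)

tokens = "the cat sat on the mat".split()
print(round(type_token_ratio(tokens), 2))  # 5 unique types over 6 tokens -> 0.83

tagged = [("cat", "NOUN"), ("sat", "VERB"), ("on", "ADP")]
print(round(lexical_density(tagged), 2))  # 2 content words over 3 tokens -> 0.67
```

A higher TTR indicates more varied vocabulary, and a higher lexical density indicates a larger share of content-bearing words, which is why both serve as morphological-layer comparison points between the two essay sets.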