This paper proposes GenEmC, a method for attacking embedding-based text classifiers that reduces the high computational cost of rule-based replacement. GenEmC consists of three main steps: data preparation, embedding-decoder training, and generator training. The embedding decoder learns to convert embedding vectors back into texts, while the generator is trained with a two-term loss to produce adversarial embedding vectors from original texts. After these three steps, the combination of the two trained models can generate adversarial texts almost instantly. To demonstrate the effectiveness of GenEmC, experiments are conducted on the IMDB dataset against well-trained GRU, LSTM, and BERT classifiers, and GenEmC is compared with two well-known rule-based replacement methods, PWWS and TextBugger. The experiments show that, for 1,000 texts in the test set, GenEmC requires about 6 minutes on average, whereas the other methods require at least 10 hours. Moreover, the average success rate and semantic-preservation score of GenEmC are moderately higher than those of PWWS and TextBugger.
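The two-term loss mentioned above can be illustrated with a minimal sketch. This is not the paper's actual formulation: the specific terms (a cross-entropy-based adversarial term plus a squared-distance similarity term), the weighting factor `lam`, and all function names are illustrative assumptions about how such a loss is commonly composed.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over classifier logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def two_term_loss(orig_emb, adv_emb, clf_weights, true_label, lam=0.5):
    """Hypothetical generator loss with two terms:
    1) an adversarial term that rewards lowering the classifier's
       probability for the true label, and
    2) a similarity term that keeps the adversarial embedding close
       to the original, to preserve semantics.
    `lam` (assumed) balances the two terms."""
    logits = clf_weights @ adv_emb          # linear classifier stand-in
    probs = softmax(logits)
    adv_term = np.log(probs[true_label] + 1e-12)   # minimize true-class prob
    sim_term = np.sum((adv_emb - orig_emb) ** 2)   # stay near original
    return adv_term + lam * sim_term

# Toy usage: an 8-dim embedding, a 2-class linear classifier.
rng = np.random.default_rng(0)
orig = rng.normal(size=8)
adv = orig + 0.1 * rng.normal(size=8)        # small perturbation
W = rng.normal(size=(2, 8))
loss = two_term_loss(orig, adv, W, true_label=0)
```

In a full pipeline, the generator would be trained to minimize this loss over many texts, and the trained embedding decoder would then map the resulting adversarial embeddings back into readable text.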