While Question Answering (QA) has long been an area of interest for NLP and ML researchers, Question Generation (QG) has received less attention, though it has attracted growing research interest in recent years. QG specifically for education is a narrower focus, but one that is important and holds great promise. In this work, we present a pipeline for generating and evaluating multiple-choice questions from text-based learning materials in an introductory data science course. We applied a T5 question generation model and a concept hierarchy extraction model to the text content, then ranked the generated questions by their relevance to the resulting knowledge graph. Our evaluation with the course instructors shows that the majority of high-ranked questions are of acceptable quality and can be deployed in future iterations of the course. We conclude with a discussion of next steps toward refining the pipeline and promoting NLP research in educational domains.
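To make the ranking step concrete, the following is a minimal illustrative sketch, not the authors' implementation: it scores candidate questions by how many concepts from a toy concept hierarchy they mention, weighting more specific (deeper) concepts higher. The concept names, depths, and scoring function are all hypothetical.

```python
# Illustrative sketch (hypothetical scoring, not the paper's actual method):
# rank candidate questions by overlap with concepts from a knowledge graph.

def rank_questions(questions, concepts):
    """Sort questions by a relevance score: the sum of the depths of the
    known concepts each question mentions (deeper = more specific)."""
    def score(q):
        text = q.lower()
        return sum(depth for concept, depth in concepts.items() if concept in text)
    return sorted(questions, key=score, reverse=True)

# Toy concept hierarchy: concept name -> depth in the hierarchy.
concepts = {
    "data": 1,
    "regression": 2,
    "linear regression": 3,
}

questions = [
    "What is a dataset?",
    "How does linear regression minimize error?",
    "Why collect data?",
]

ranked = rank_questions(questions, concepts)
```

In practice, the relevance signal would come from the extracted concept hierarchy rather than a hand-written dictionary, but the overall shape of the step is the same: score each generated question against the knowledge graph, then surface the highest-ranked ones to instructors.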