The advance of AI in safety- and security-critical domains such as aviation demands high standards for its trustworthy development. In this context, the EASA introduced the Assessment List for Trustworthy AI (ALTAI) as a helpful tool. This paper presents an approach for applying the ALTAI in the development of an AI-based digital Air Traffic Control Operator (ATCO). Specifically, the aim was to use the ALTAI to derive a set of high-level requirements for the AI-based system that guarantee trustworthiness in the early development stages. The focus is thus on how the ALTAI questions can be processed to yield system requirements. The abundance of diverse perspectives within the ALTAI makes a structured approach necessary; accordingly, various filtering, prioritization, and grouping methods were implemented in a usable framework. On this basis, the applicability of the ALTAI is analyzed, revealing a divergence between technical and ethical requirements: technical questions often lead to specific, highly applicable requirements, whereas ethical questions do so far less readily. Given their importance, the challenges of deriving specific requirements for certain ethical aspects are emphasized and discussed. Additionally, suggestions for future versions of the ALTAI are given in order to strengthen its application during the development of AI-based systems. By showcasing our method and the specific requirements obtained for the digital ATCO system, the objective is to highlight the necessity of the ALTAI and to provide a basis for its wider use.