In this paper, we propose a framework that uses Brain-Computer Interface (BCI) technology to create human-like avatars for user-driven Metaverse applications. The framework is designed for infrastructures that offer fast wireless connectivity and high computing capacity, making it well suited to future networks such as 5G and beyond. The Metaverse system uses brain signals transmitted over wireless channels to create intelligent digital avatars that can provide helpful recommendations and assist in user-driven applications. To relieve the computational burden on user equipment, both the computational tasks and the resource allocation decisions are shifted to the centralized base station. As a result, our framework involves solving a mixed decision-making and classification problem: the base station must efficiently allocate its computing and radio resources to users while also classifying their brain signals. To this end, we develop a hybrid training algorithm that leverages recent advances in deep reinforcement learning. The algorithm employs three deep neural networks working together to handle both the decision-making and the classification tasks. Simulation results indicate that our framework can effectively manage system resources while accurately classifying users' brain signals.
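To make the three-network structure concrete, the following is a minimal sketch of one plausible arrangement: an actor network that selects resource-allocation actions, a critic network that scores the base station's state, and a separate classifier for users' brain signals. All names, layer shapes, and the actor-critic-plus-classifier split are illustrative assumptions for exposition, not the paper's actual architecture or dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    # Single randomly initialized dense layer (illustrative only;
    # a trained system would learn these weights).
    return rng.standard_normal((in_dim, out_dim)) * np.sqrt(2.0 / in_dim)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical dimensions: 8-d base-station state (e.g., channel and
# queue information), 4 resource-allocation actions, a 64-sample
# brain-signal window, and 3 signal classes.
STATE_DIM, N_ACTIONS, EEG_DIM, N_CLASSES = 8, 4, 64, 3

W_actor = dense(STATE_DIM, N_ACTIONS)   # decision-making: allocation policy
W_critic = dense(STATE_DIM, 1)          # decision-making: state-value baseline
W_clf = dense(EEG_DIM, N_CLASSES)       # classification: brain-signal labels

def act(state):
    """Sample a resource-allocation action from the actor's policy."""
    probs = softmax(state @ W_actor)
    return int(rng.choice(N_ACTIONS, p=probs)), probs

def value(state):
    """Critic's scalar estimate of the state's value."""
    return float(state @ W_critic)

def classify(signal_window):
    """Class probabilities for one windowed brain-signal sample."""
    return softmax(signal_window @ W_clf)

state = rng.standard_normal(STATE_DIM)
signal = rng.standard_normal(EEG_DIM)
action, probs = act(state)
class_probs = classify(signal)
```

In an actual training loop, the actor and critic would be updated with a policy-gradient rule using rewards derived from resource-allocation performance, while the classifier would be trained with a supervised loss on labeled brain-signal data; here only the untrained forward passes are shown.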