The advent of 6G is anticipated to bring advanced support for decentralized data processing, promoting the exploration of Federated Learning (FL). FL enables collaborative learning among distributed clients without direct access to raw data, offering benefits in communication efficiency and privacy preservation. However, several challenges hinder its widespread adoption in the 6G context, including anomalous local models, packet loss of model updates caused by limited communication resources, and centralized data management. To address these obstacles, this work proposes a trustworthy architecture for supporting FL with three key contributions. First, our approach develops a robust model aggregation method that combines model analysis with client reputation, making the system resilient to abnormal models. Second, it utilizes all received models, including those only partially received due to packet loss, to optimize accuracy while ensuring that reputations fairly reflect each participant's contribution. Third, our work customizes a consensus mechanism in Distributed Ledger Technology (DLT) for the proposed aggregation rule. This mechanism provides transparent and immutable records of data exchanges and decentralizes the system. Our simulation demonstrates that the proposed architecture accurately identifies outlier models and utilizes incomplete models, improving global model accuracy by 13% over a baseline that averages randomly selected, fully received models. Additionally, when benchmarked against the state-of-the-art Krum algorithm, our approach achieves a 5% performance improvement.
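
To make the aggregation idea concrete, the following is a minimal illustrative sketch of reputation-weighted averaging that also uses partially received updates; the function name, mask representation, and weighting scheme are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def aggregate(updates, masks, reputations):
    """Illustrative sketch: reputation-weighted aggregation that also
    uses partially received model updates (assumed interface, not the
    paper's exact algorithm).

    updates:     list of 1-D parameter vectors (missing entries set to 0)
    masks:       list of 0/1 vectors marking which entries arrived
    reputations: per-client trust weights in [0, 1]
    """
    updates = np.asarray(updates, dtype=float)
    masks = np.asarray(masks, dtype=float)
    # Each client's weight applies only to the parameters it delivered
    w = np.asarray(reputations, dtype=float)[:, None] * masks
    total = w.sum(axis=0)
    # Average each parameter over the clients that actually delivered it;
    # guard against parameters no client delivered (weight sum of zero)
    return (w * updates).sum(axis=0) / np.where(total > 0, total, 1.0)

# Example: client 2 lost its second parameter in transit
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]]
masks = [[1, 1], [1, 1], [1, 0]]
reputations = [1.0, 1.0, 1.0]
print(aggregate(updates, masks, reputations))  # → [3. 3.]
```

With equal reputations, the first parameter averages over all three clients ((1+3+5)/3 = 3) while the second averages only over the two clients whose packet arrived ((2+4)/2 = 3), so the incomplete update still contributes without biasing the result toward zero.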