Processor, memory, and storage are the three most critical components of a High Performance Server (HPS). For a long time, the question "Given the processor and memory, how much storage capacity do we need?" has deeply troubled both academia and industry. In recent years especially, with the rapid development of Artificial Intelligence (AI), data-intensive AI tasks such as Deep Learning (DL), Reinforcement Learning (RL), and High Performance Data Analysis (HPDA) have come to consume the vast majority of processor, memory, and storage resources, and the ability to support AI applications has become a key metric for evaluating HPS performance. We therefore propose an HPS storage design solution for typical AI applications. Furthermore, as AI models continue to grow larger and the GPU Memory Wall problem becomes increasingly significant, offloading models and intermediate variables to storage is becoming the mainstream approach for training and inference of extreme-scale AI models. Storage provisioning must account for both the static overhead of models and datasets and the dynamic offloading requirements that arise while AI tasks run. We propose the Server Storage Computing Ratio (SSCR) model, which uses DL training capability to characterize processor and memory performance. When a server is configured for AI tasks, the model yields the maximum server performance with the least storage space. In other words, for an HPS oriented mainly toward AI tasks, our model answers the question: "What is the minimum amount of storage space that must be configured to maximize server performance for a given processor and memory?"
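The provisioning idea above, combining static overhead (datasets and model checkpoints) with dynamic offloading headroom, can be illustrated with a minimal sizing sketch. Everything here is an illustrative assumption: the function name, the bytes-per-parameter figure, and the offload multiplier are hypothetical and are not the paper's actual SSCR formulation.

```python
def estimate_min_storage_gb(model_params, dataset_gb, num_checkpoints=3,
                            bytes_per_param=2, offload_factor=8.0):
    """Hypothetical storage-sizing sketch (NOT the paper's SSCR model).

    Static overhead: the training dataset plus retained model checkpoints.
    Dynamic overhead: headroom for offloading optimizer states and
    intermediate activations once GPU memory is exhausted, approximated
    here as a multiple of the model's parameter footprint.
    """
    param_gb = model_params * bytes_per_param / 1e9      # e.g. fp16 weights
    static_gb = dataset_gb + num_checkpoints * param_gb  # datasets + checkpoints
    dynamic_gb = offload_factor * param_gb               # offload spill space
    return static_gb + dynamic_gb

# Example: a 7-billion-parameter model trained on a 500 GB dataset
print(round(estimate_min_storage_gb(7e9, 500.0)))  # → 654
```

The point of the sketch is only structural: minimum storage is a sum of a static term and a workload-dependent dynamic term, which is the question the SSCR model answers rigorously.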