Decentralized stochastic optimization has become a crucial tool for large-scale machine learning and control problems. In decentralized algorithms, the computing nodes are connected through a network topology, and each node communicates only with its direct neighbors. By eliminating the need for global communication, decentralized algorithms can significantly reduce communication overhead. However, existing linear-speedup analyses of decentralized stochastic algorithms require network-dependent learning rates, a condition that rarely holds in practice since the network connectivity is typically unknown to each node. It therefore remains an open question whether a linear speedup bound can be achieved with network-independent learning rates. This paper provides an affirmative answer. Using a new analysis framework, we prove that D-SGD and Exact-Diffusion, two representative decentralized stochastic algorithms, achieve linear speedup with network-independent learning rates. Simulations are provided to validate our theory.
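For context, the following is a minimal sketch of the standard D-SGD recursion at node $i$; the notation ($W$, $\mathcal{N}_i$, $\gamma$, $\xi$) is ours for illustration and not necessarily the paper's:
$$
x_i^{(k+1)} \;=\; \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}\Big( x_j^{(k)} - \gamma\, \nabla F_j\big(x_j^{(k)}; \xi_j^{(k)}\big) \Big),
$$
where $W = [w_{ij}]$ is a doubly stochastic mixing matrix supported on the network (so $w_{ij} \neq 0$ only if $j$ is a neighbor of $i$ or $j = i$), $\mathcal{N}_i$ denotes the neighbors of node $i$, $\gamma$ is the learning rate, and $\xi_j^{(k)}$ is the stochastic sample drawn locally at node $j$. In this setting, linear speedup conventionally means that the dominant term of the convergence rate scales as $O(\sigma/\sqrt{nK})$ with $n$ nodes and $K$ iterations, matching the rate of parallel SGD with a central server.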