Large-scale code search is a crucial task in software engineering, yet existing deep-learning-based models often embed Abstract Syntax Trees (ASTs) and code token sequences separately, limiting their ability to learn correlations between structural and textual features. To address this limitation, we propose a novel code search model that automatically generates Token-Level Information Flow Graphs (TL-IFGs) from aligned AST nodes and source code tokens. Our model includes an aligner that establishes a one-to-one correspondence between AST leaves and code tokens; we release the aligner publicly, along with a processed dataset, to facilitate further research. The model automatically generates a TL-IFG for each code snippet from the aligned data by predicting information flow at the token level, which ensures that structural and textual features remain highly correlated during embedding. We also generate TL-IFGs for natural-language descriptions and embed them with a similar process. Experimental results show that our model outperforms state-of-the-art code search models, demonstrating the effectiveness of our approach. Furthermore, an ablation study shows that the generated TL-IFGs for both code and descriptions each contribute positively to model performance.
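The paper's aligner itself is not detailed in this abstract; as a hedged illustration of the underlying idea, the sketch below aligns AST leaf nodes to source tokens by matching the (line, column) positions that Python's `ast` and `tokenize` modules both report. This is a simplified stand-in for the proposed aligner, not the authors' implementation.

```python
import ast
import io
import tokenize

def align_leaves_to_tokens(source):
    """Align AST leaf nodes (names, constants) to source tokens.

    A minimal sketch, assuming alignment can be done by source
    position; the paper's actual aligner may differ.
    """
    # Collect leaf nodes that carry a source position.
    leaves = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Name, ast.Constant)):
            leaves[(node.lineno, node.col_offset)] = node

    # Match each code token to the leaf starting at the same position,
    # yielding the one-to-one (token, AST node type) correspondence.
    pairs = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        node = leaves.get(tok.start)
        if node is not None:
            pairs.append((tok.string, type(node).__name__))
    return pairs
```

For example, `align_leaves_to_tokens("x = 1 + y\n")` pairs the tokens `x`, `1`, and `y` with the `Name`, `Constant`, and `Name` leaves of the parsed AST, respectively.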