3D object detection has attracted widespread attention, and point-cloud-based research is among the most active directions. Point clouds are commonly regarded as irregular and unordered; however, an implicit order actually exists due to the laser arrangement and sequential scanning. The authors therefore improve 3D detection accuracy by exploiting this inner order of point clouds, which carries context information that has been neglected in prior work. In this paper, the authors propose a novel method termed Frustum 3DNet (F-3DNet) for 3D object detection from point clouds. Following the inner order, a rearranged feature matrix is constructed and a pseudo panorama is generated from the LiDAR data. Given 2D region proposals on the pseudo image, the authors extend them into 3D space to obtain frustum regions of interest. Each frustum is then sliced along distance into a sequence of smaller frustums. To further exploit context information, a novel local context feature extraction module is introduced; the extracted context features are concatenated with the frustum features afterwards. The resulting feature map is fed to a fully convolutional network, followed by a classifier and a regressor. A refinement stage and fusion with RGB input are attached to further improve the results. Ablation studies verify the efficacy of the context extraction component and the corresponding model architecture. The authors present experiments on the KITTI and nuScenes datasets, on which F-3DNet outperforms existing methods at the time of submission.
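The pseudo-panorama step described above can be sketched as a spherical projection of LiDAR points onto a 2D range image. The sketch below is illustrative only: the function name, image resolution, and vertical field-of-view values (typical of a 64-beam sensor) are assumptions, not the authors' actual configuration.

```python
import numpy as np

def lidar_to_pseudo_panorama(points, h=64, w=512, fov_up=2.0, fov_down=-24.8):
    """Project LiDAR xyz points onto a 2D range image (a pseudo panorama).

    Illustrative sketch: resolution (h, w) and vertical field of view
    are assumed values, not the paper's exact settings.
    points: (N, 3) array of xyz coordinates in the sensor frame.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-6   # radial distance per point
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                    # elevation angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    # Normalise angles to [0, 1) image coordinates.
    u = 0.5 * (1.0 - yaw / np.pi)                      # column fraction
    v = (fov_up_r - pitch) / (fov_up_r - fov_down_r)   # row fraction
    cols = np.clip((u * w).astype(int), 0, w - 1)
    rows = np.clip((v * h).astype(int), 0, h - 1)
    pano = np.zeros((h, w), dtype=np.float32)
    pano[rows, cols] = r                        # store range at each pixel
    return pano
```

A 2D detector can then propose boxes on this range image, and each 2D box back-projects to a frustum in 3D space, which is the region-of-interest structure the method operates on.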