Two Stage Semantic Segmentation by SEEDS and Fork Net
- Resource Type
- Conference
- Authors
- Mukherjee, Aritra; Jana, Prithwish; Chakraborty, Sayak; Saha, Sanjoy Kumar
- Source
- 2020 IEEE Calcutta Conference (CALCON), pp. 283-287, Feb. 2020
- Subject
Deep learning
Image segmentation
Computer vision
Statistical analysis
Convolution
Conferences
Semantics
Semantic segmentation
Superpixel
- Language
English
- Abstract
Semantic segmentation of images is one of the most challenging and widely researched topics in computer vision. Statistical methods can perform the task with low computational resources, but in a diverse natural environment they fail to label many complicated objects. Deep learning methods are now popular for their high accuracy, but dense semantic segmentation at pixel-level accuracy is very resource-intensive and not suitable for robot vision. The proposed methodology merges the best of both worlds by semantically labeling superpixels, computed by a statistical method, with a deep net. The deep convolutional network is novel in its use of superpixels at different fields of view. The methodology is tested on the Pascal VOC dataset and compared with recent popular approaches; the results show that it is on par with the best.
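The two-stage idea described in the abstract can be sketched in a few lines of NumPy: first partition the image into superpixels, then assign each superpixel a single semantic label by aggregating per-pixel class scores over it. This is only an illustrative toy, not the paper's method: `grid_superpixels` is a hypothetical stand-in for SEEDS (which refines boundaries to follow image edges), and the random `scores` array stands in for the output of the Fork Net described in the paper.

```python
import numpy as np

def grid_superpixels(h, w, n_rows, n_cols):
    # Hypothetical stand-in for SEEDS: a regular grid of "superpixels".
    # Real SEEDS iteratively moves block boundaries to align with edges.
    rows = np.minimum(np.arange(h) * n_rows // h, n_rows - 1)
    cols = np.minimum(np.arange(w) * n_cols // w, n_cols - 1)
    return rows[:, None] * n_cols + cols[None, :]   # (h, w) label map

def label_superpixels(sp_map, pixel_scores):
    # pixel_scores: (h, w, n_classes) class scores, e.g. from a CNN.
    # Sum scores over each superpixel and take the argmax, so every
    # pixel inside a superpixel receives the same semantic label.
    h, w, c = pixel_scores.shape
    n_sp = sp_map.max() + 1
    sums = np.zeros((n_sp, c))
    np.add.at(sums, sp_map.ravel(), pixel_scores.reshape(-1, c))
    sp_labels = sums.argmax(axis=1)                 # one label per superpixel
    return sp_labels[sp_map]                        # broadcast back to pixels

h, w, n_classes = 8, 8, 3
sp = grid_superpixels(h, w, 2, 2)                   # 4 superpixels
rng = np.random.default_rng(0)
scores = rng.random((h, w, n_classes))              # mock network output
seg = label_superpixels(sp, scores)                 # (8, 8) dense labeling
```

Labeling a few hundred superpixels instead of every pixel is what lets the deep net stay small: the dense output is recovered by broadcasting one decision per superpixel back over its member pixels.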