Research Article

Epipolar-Aligned Channel Selection: A Projection from Optical Flow to Disparity

by Sahereh Obeidavi, Dieter Landes, Arsalan Moosavipoor
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 69
Year of Publication: 2025
DOI: 10.5120/ijca2025926148

Sahereh Obeidavi, Dieter Landes, Arsalan Moosavipoor. Epipolar-Aligned Channel Selection: A Projection from Optical Flow to Disparity. International Journal of Computer Applications 187, 69 (Dec 2025), 7-16. DOI=10.5120/ijca2025926148

@article{10.5120/ijca2025926148,
  author     = {Sahereh Obeidavi and Dieter Landes and Arsalan Moosavipoor},
  title      = {Epipolar-Aligned Channel Selection: A Projection from Optical Flow to Disparity},
  journal    = {International Journal of Computer Applications},
  issue_date = {Dec 2025},
  volume     = {187},
  number     = {69},
  month      = {Dec},
  year       = {2025},
  issn       = {0975-8887},
  pages      = {7--16},
  numpages   = {9},
  url        = {https://ijcaonline.org/archives/volume187/number69/epipolar-aligned-channel-selection-a-projection-from-optical-flow-to-disparity/},
  doi        = {10.5120/ijca2025926148},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Sahereh Obeidavi
%A Dieter Landes
%A Arsalan Moosavipoor
%T Epipolar-Aligned Channel Selection: A Projection from Optical Flow to Disparity
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 69
%P 7-16
%D 2025
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Stereo disparity estimation is a fundamental problem in computer vision, forming the basis for 3D reconstruction, autonomous navigation, and robotics. Unlike optical flow, which describes unconstrained 2D displacements, disparity in rectified stereo geometry is strictly aligned with the epipolar axis. This geometric property implies that one component of the flow field contains the true disparity signal, while the orthogonal component predominantly reflects distortion, miscalibration, or noise. However, most existing approaches either neglect this constraint or require dedicated disparity networks trained from scratch, leading to redundant computation and limited generality. This paper introduces Epipolar-Aligned Channel Selection (EACS), a parameter-free and geometry-aware post-processing operator that isolates the disparity-aligned component of optical flow while discarding the non-epipolar channel. Implemented as a fixed linear projection with negligible overhead, EACS ensures that only geometrically meaningful information is retained. When coupled with RAFT, a state-of-the-art optical flow network, the resulting RAFT + EACS pipeline enables direct and efficient disparity estimation from optical flow, without requiring additional training or specialized stereo architectures. Experiments conducted on synthetic stereo data generated at TU Chemnitz (Technische Universität Chemnitz) confirm the effectiveness of this approach. The proposed method achieves sub-pixel disparity accuracy (MAE = 0.3007, RMSE = 0.9470) and extremely low error rates under stringent evaluation protocols (D1-all = 0.4%). Qualitative analysis further demonstrates that RAFT + EACS preserves fine structural details and produces smooth, consistent disparity maps, even in challenging low-texture regions. These findings establish geometry-aware post-processing as a simple yet powerful alternative to specialized stereo disparity networks.
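The abstract describes EACS as a parameter-free, fixed linear projection that keeps only the epipolar-aligned channel of the optical flow. A minimal sketch of that idea, under two assumptions not spelled out here: the stereo pair is rectified so the epipolar axis is horizontal, and disparity equals the negated horizontal left-to-right flow component. Function names are illustrative, and the D1-all threshold shown is the common KITTI-style criterion (error > 3 px and > 5% of ground truth); the paper's exact protocol may differ.

```python
import numpy as np

def eacs_project(flow):
    """Epipolar-Aligned Channel Selection (sketch).

    For a rectified pair, left->right flow should lie along the epipolar
    (horizontal) axis: the x-channel carries the disparity signal, while
    the y-channel mostly reflects distortion, miscalibration, or noise
    and is discarded.

    flow : (H, W, 2) array of left->right displacements (dx, dy).
    Returns an (H, W) disparity map.
    """
    # Fixed linear projection onto the epipolar axis: d = -dx.
    # (Scene points shift leftward in the right image, so dx <= 0.)
    return -flow[..., 0]

def disparity_errors(pred, gt):
    """MAE, RMSE, and a KITTI-style D1-all error rate."""
    err = np.abs(pred - gt)
    mae = float(err.mean())
    rmse = float(np.sqrt((err ** 2).mean()))
    # D1-all: fraction of pixels whose error exceeds both 3 px
    # and 5% of the true disparity magnitude.
    d1_all = float(np.mean((err > 3.0) & (err > 0.05 * np.abs(gt))))
    return mae, rmse, d1_all
```

Because the projection is a fixed slice of the flow tensor, it adds no learnable parameters and negligible runtime on top of the flow network, which is consistent with the abstract's claim of a training-free pipeline.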

References
  1. Vedula, S., Baker, S., Rander, P., Collins, R., and Kanade, T. 1999. Three-dimensional scene flow. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). 722–729.
  2. Horn, B. K. and Schunck, B. G. 1981. Determining optical flow. Artificial Intelligence. 17, 1–3, 185–203.
  3. Zach, C., Pock, T., and Bischof, H. 2007. A duality based approach for realtime TV-L1 optical flow. In Proceedings of the DAGM Conference on Pattern Recognition. 214–223.
  4. Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., Van Der Smagt, P., Cremers, D., and Brox, T. 2015. FlowNet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2758–2766.
  5. Kendall, A., Martirosyan, H., Dasgupta, S., Henry, P., Kennedy, R., and Bry, A. 2017. End-to-end learning of geometry and context for deep stereo regression. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). 66–75.
  6. Chang, J.-R. and Chen, Y.-S. 2018. Pyramid stereo matching network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 5410–5418.
  7. Sun, D., Yang, X., Liu, M.-Y., and Kautz, J. 2018. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 8934–8943.
  8. Poggi, M., Tosi, F., Batsos, K., Mordohai, P., and Mattoccia, S. 2020. On the synergies between machine learning and binocular stereo for depth estimation from images: A survey. arXiv preprint arXiv:2004.08566.
  9. Teed, Z. and Deng, J. 2020. RAFT: Recurrent all-pairs field transforms for optical flow. In Proceedings of the European Conference on Computer Vision (ECCV). 402–419.
  10. Lipson, L., Teed, Z., and Deng, J. 2021. RAFT-Stereo: Multilevel recurrent field transforms for stereo matching. In Proceedings of the IEEE International Conference on 3D Vision (3DV). 218–227.
  11. Liu, X., Zhang, T., and Liu, M. 2024. Joint estimation of pose, depth, and optical flow with a competition–cooperation transformer network. Neural Networks 171, 263–275.
  12. Guo, X., Zhao, H., Shao, S., Li, X., and Zhang, B. 2024. F²Depth: Self-supervised indoor monocular depth estimation via optical flow consistency and feature map synthesis. Neural Networks 175, 106–118.
  13. Gui, M., Schusterbauer, J., Prestel, U., Ma, P., Kotovenko, D., Grebenkova, O., Baumann, S. A., Hu, V. T., and Ommer, B. 2024. DepthFM: Fast generative monocular depth estimation with flow matching. arXiv preprint arXiv:2403.13788.
  14. Birchfield, S. and Tomasi, C. 1998. A pixel dissimilarity measure that is insensitive to image sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 4, 401–406.
  15. Zbontar, J. and LeCun, Y. 2016. Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research 17, 1, 2287–2318.
  16. Mayer, N., Ilg, E., Häusser, P., Fischer, P., Cremers, D., Dosovitskiy, A., and Brox, T. 2016. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 4040–4048.
  17. Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., Van Der Smagt, P., Cremers, D., and Brox, T. 2015. FlowNet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2758–2766.
  18. Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., and Brox, T. 2017. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2462–2470.
  19. Zhang, F., Prisacariu, V., Yang, R., and Torr, P. H. 2019. GA-Net: Guided aggregation net for end-to-end stereo matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 185–194.
  20. Guo, X., Yang, K., Yang, W., Wang, X., and Li, H. 2019. Group-wise correlation stereo network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 3273–3282.
  21. Hur, J. and Roth, S. 2019. Iterative residual refinement for joint optical flow and occlusion estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 5754–5763.
  22. Teed, Z. and Deng, J. 2021. RAFT-3D: Scene flow using rigid-motion embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 8375–8384.
  23. Mehl, L., Jahedi, A., Schmalfuss, J., and Bruhn, A. 2023. M-fuse: Multi-frame fusion for scene flow estimation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). 2020–2029.
  24. Jeong, J., Lin, J. M., Porikli, F., and Kwak, N. 2022. Imposing consistency for optical flow estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 3181–3191.
  25. Huguet, F. and Devernay, F. 2007. A variational method for scene flow estimation from stereo sequences. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). 1–7.
  26. Wedel, A., Brox, T., Vaudrey, T., Rabe, C., Franke, U., and Cremers, D. 2011. Stereoscopic scene flow computation for 3D motion understanding. International Journal of Computer Vision 95, 1, 29–51.
  27. Poggi, M., Tosi, F., Batsos, K., Mordohai, P., and Mattoccia, S. 2021. On the synergies between machine learning and binocular stereo for depth estimation from images: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  28. Zhai, M., Xiang, X., Lv, N., and Kong, X. 2021. Optical flow and scene flow estimation: A survey. Pattern Recognition 114, 107861.
  29. Ilg, E., Saikia, T., Keuper, M., and Brox, T. 2018. Occlusions, motion and depth boundaries with a generic network for disparity, optical flow or scene flow estimation. In Proceedings of the European Conference on Computer Vision (ECCV). 614–630.
  30. Jiang, H., Sun, D., Jampani, V., Lv, Z., Learned-Miller, E., and Kautz, J. 2019. SENSE: A shared encoder network for scene-flow estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 3195–3204.
  31. Jaegle, A., Borgeaud, S., Alayrac, J.-B., Doersch, C., Ionescu, C., Ding, D., Koppula, S., Zoran, D., Brock, A., Shelhamer, E., et al. 2021. Perceiver IO: A general architecture for structured inputs and outputs. arXiv preprint arXiv:2107.14795.
  32. Li, J., Wang, P., Xiong, P., Cai, T., Yan, Z., Yang, L., Liu, J., Fan, H., and Liu, S. 2022. Practical stereo matching via cascaded recurrent network with adaptive correlation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 16263–16272.
  33. Geiger, A., Lenz, P., and Urtasun, R. 2012. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 3354–3361.
  34. Scharstein, D. and Szeliski, R. 2002. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision 47, 1–3, 7–42.
Index Terms

Computer Science
Information Sciences

Keywords

Stereo Disparity Estimation, Optical Flow, Epipolar Geometry, RAFT, Epipolar-Aligned Channel, Optical Flow–to–Disparity