Farshid PirahanSiah
Image Processing Test for C++
My GitHub repository about advanced programming with modern C++23 for image processing:
https://github.com/pirahansiah/cvtest
Last updated: July 2022
The first function is int func_image_info(cv::Mat src, cv::Mat &dst /*output*/); it reports information about the input image, such as its size and histogram.
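A minimal sketch of what such a function might look like, assuming OpenCV 4.x; the actual implementation in the cvtest repository may differ (the histogram drawing and return codes here are illustrative choices).

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

// Sketch: report basic image information (size, channels, depth) and
// draw a 256-bin grayscale histogram into the output image `dst`.
int func_image_info(cv::Mat src, cv::Mat &dst /*output*/)
{
    if (src.empty())
        return -1;

    std::cout << "size: " << src.cols << " x " << src.rows
              << ", channels: " << src.channels()
              << ", depth: " << src.depth() << std::endl;

    // Work on a grayscale copy for the histogram.
    cv::Mat gray;
    if (src.channels() == 3)
        cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    else
        gray = src;

    int histSize = 256;
    int channels[] = {0};
    float range[] = {0.0f, 256.0f};
    const float *histRange = {range};
    cv::Mat hist;
    cv::calcHist(&gray, 1, channels, cv::Mat(), hist, 1, &histSize, &histRange);

    // Draw the histogram into `dst` so the caller can display or save it.
    int histW = 512, histH = 400;
    dst = cv::Mat(histH, histW, CV_8UC3, cv::Scalar::all(0));
    cv::normalize(hist, hist, 0, histH, cv::NORM_MINMAX);
    int binW = cvRound(static_cast<double>(histW) / histSize);
    for (int i = 1; i < histSize; i++)
    {
        cv::line(dst,
                 cv::Point(binW * (i - 1), histH - cvRound(hist.at<float>(i - 1))),
                 cv::Point(binW * i, histH - cvRound(hist.at<float>(i))),
                 cv::Scalar(255, 255, 255));
    }
    return 0;
}
```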
YouTube link for OpenCV: https://www.youtube.com/watch?v=gK1ybsWOqhs
Multi-camera calibration
Stereo camera calibration
July 2022
Computer Vision, Deep Learning, AI Metaverse
https://github.com/pirahansiah/pirahansiah
I have 6+ years of experience as a computer vision research engineer at three multinational companies on two continents, strengthened by my academic background with a Master's and PhD in Computer Science (Computer Vision). My expertise includes Technical Lead R&D; Software Specialist in Image Processing for Medical Devices; Computer Vision with Machine Learning (Object Detection, Video Tracking); IoT; and Robotics; and I am experienced in designing algorithms for image thresholding, optical flow, camera calibration, and stereo vision. Lastly, I have a track record of creating effective metrics, building end-to-end pipelines, and writing production-level code with OpenCV and deep learning frameworks (Caffe, TensorFlow, PyTorch).
FarshidPirahanSiah
I am interested in the Metaverse and medicine, particularly 3D camera calibration for extended reality headsets in the Metaverse.
I have experience in computer vision, deep learning, and robotics.
I am familiar with IoT and edge computing, medical devices, cloud-based solutions (AWS), and robotics.
Platform for metaverse
AR/VR Frameworks Engineer For New Application Paradigm
Camera Calibration
https://www.tiziran.com/topics/camera_calibration
Last updated: 29 Jan 2022
Geometric camera calibration, also referred to as camera re-sectioning, estimates the parameters of a lens and image sensor of an image or video camera. These parameters can be used to correct for lens distortion, measure the size of an object in world units, or determine the location of the camera in a scene. These tasks are used in applications such as machine vision to detect and measure objects. They are also used in robotics, navigation systems, and 3-D scene reconstruction. Without any knowledge of the calibration of the cameras, it is impossible to do better than projective reconstruction (MathWorks).
Non-intrusive scene measurement tasks, such as 3D reconstruction, object inspection, target or self-localization, or scene mapping, require a calibrated camera model (Orghidan et al. 2011). Camera calibration is the process of approximating the parameters of a pinhole camera model (Tsai 1987; Stein 1995; Heikkila & Silven 1997) for the camera that produced a given photograph or video.
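For reference (not taken from the cited papers, just the standard formulation), the pinhole model projects a world point (X, Y, Z) to an image point (u, v) as:

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \, [\, R \mid t \,]
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
$$

where K holds the intrinsic parameters (focal lengths f_x, f_y and principal point c_x, c_y), [R | t] is the extrinsic rotation and translation, and s is a scale factor; calibration estimates K, the lens distortion coefficients, and [R | t].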
Camera self-calibration, also known as auto-calibration, does not rely on a calibration reference object. Three-dimensional reconstruction and motion estimation are two fundamental tasks in computer vision (Kaehler & Bradski 2016). In both tasks, camera calibration is an essential step that bridges the 2D imaging plane and 3D space. For the past decade, camera calibration has been heavily investigated in the fields of computer vision and optics (Anuar et al. 2015; Garg & Deep 2015; Hong et al. 2015; Jia et al. 2015). Maybank and Faugeras (1992) introduced the concept of camera self-calibration. However, self-calibration is nonlinear and highly sensitive to noise; these methods can be enhanced by using active vision, where specific camera motions are designed, such as pure rotation or orthogonal translations (Wang et al. 2004). For example, Hartley proposed using pure rotation to compute the infinite homography and then linearly calibrate the camera (Hartley & Zisserman 2003). However, the constraints on these specific motions are too strong to satisfy in practice, which hinders wider application (Lei et al. 2004). For example, it is difficult to perform pure rotation around the camera's optical center, even with a pure rotation platform, because the optical center is hard to locate and even harder to make coincide with the rotation center of the platform. Furthermore, some researchers have tried to improve self-calibration using additional constraints, such as module constraints and loop constraints (Courchay et al. 2012). Another category of calibration methods is based on a specific calibration rig or on scene constraints (Liming et al. 2013).
The first step of camera calibration is corner detection. Based on my research, the calibration pattern image plays an important role in the whole calibration process.
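As an illustration of this first step, the sketch below uses OpenCV's standard chessboard pipeline (cv::findChessboardCorners, cv::cornerSubPix, cv::calibrateCamera); the pattern size, square size, and file names are assumptions for the example, not values from the papers listed below.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// Sketch: detect chessboard corners in a set of calibration images and
// estimate the intrinsic parameters with cv::calibrateCamera.
int main()
{
    const cv::Size patternSize(9, 6);   // inner corners (assumed pattern)
    const float squareSize = 25.0f;     // square size in mm (assumed)

    // 3D coordinates of the pattern corners in the board's own frame.
    std::vector<cv::Point3f> boardPoints;
    for (int r = 0; r < patternSize.height; r++)
        for (int c = 0; c < patternSize.width; c++)
            boardPoints.emplace_back(c * squareSize, r * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    std::vector<cv::String> files;
    cv::glob("calib_*.png", files);     // assumed file-name pattern
    for (const auto &f : files)
    {
        cv::Mat img = cv::imread(f, cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, patternSize, corners)) continue;

        // Refine the corner locations to sub-pixel accuracy.
        cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.001));

        objectPoints.push_back(boardPoints);
        imagePoints.push_back(corners);
    }

    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     K, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\nK =\n" << K << std::endl;
    return 0;
}
```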
- Camera calibration for multi-modal robot vision based on image quality assessment: https://www.researchgate.net/profile/Farshid-Pirahansiah/publication/288174690_Camera_calibration_for_multi-modal_robot_vision_based_on_image_quality_assessment/links/5735bc2908aea45ee83c999e/Camera-calibration-for-multi-modal-robot-vision-based-on-image-quality-assessment.pdf
- Pattern image significance for camera calibration: https://ieeexplore.ieee.org/abstract/document/8305440
- Camera Calibration and Video Stabilization Framework for Robot Localization: https://link.springer.com/chapter/10.1007/978-3-030-74540-0_12
- CV_metaverse
- 3D_multi_camera_calibration
- corner_Detection
- cornerDetection.ipynb
- auto multi camera calibration
- corner_Detection
- 3D_multi_camera_calibration
Top source code:
- cornerDetection.ipynb
- It uses several preprocessing and postprocessing steps to enhance corner detection for camera calibration; a sketch of the idea follows this list.
- 3D multi-camera calibration requires detecting and collecting corresponding points for all cameras together.
- If the calibration pattern images are of poor quality (blurred, ...), they need to be enhanced first; the detected corner points are then used in the calibration process.
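A minimal sketch of the idea behind these notes, assuming synchronized cameras and OpenCV; the enhancement step (unsharp masking) and the chessboard pattern are illustrative choices, not necessarily what the notebooks use.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: sharpen a possibly blurred calibration image before corner
// detection, and keep a frame set only if the pattern is found in all
// cameras, so the points can later be used jointly (e.g. by
// cv::stereoCalibrate or a multi-camera equivalent).
static cv::Mat enhance(const cv::Mat &gray)
{
    // Simple unsharp masking: subtract a blurred copy to boost edges.
    cv::Mat blurred, sharp;
    cv::GaussianBlur(gray, blurred, cv::Size(0, 0), 3.0);
    cv::addWeighted(gray, 1.5, blurred, -0.5, 0.0, sharp);
    return sharp;
}

// Returns true only if corners were detected in every camera's image.
static bool detectInAllCameras(const std::vector<cv::Mat> &frames,
                               cv::Size patternSize,
                               std::vector<std::vector<cv::Point2f>> &cornersPerCam)
{
    cornersPerCam.clear();
    for (const auto &frame : frames)
    {
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Mat enhanced = enhance(gray);

        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(enhanced, patternSize, corners))
            return false;  // reject the whole frame set if any camera fails

        // Refine on the original (unsharpened) image for accuracy.
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.001));
        cornersPerCam.push_back(corners);
    }
    return true;
}
```

Only frame sets that pass detectInAllCameras would be accumulated, together with the corresponding 3D board points, and passed to the stereo or multi-camera calibration step.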
#computervision #AI #objectdetection #objecttracking #ml #research #CNN #gans #convolutionalneuralnetworks #vr #reinforcementlearning #mlops #aiforbusiness #science #researcher #phd #cameracalibration #opticalflow #videostablization #humanoidrobot #localization #3dSLAM #reconstruction #pointcloud #mixedreality #edgecomputing #raspberrypi #intelstick #googlecoral #jetsonnano #nvidiavgpu #tensorflowjs #pytorch #opencv #aikit #caffee #DIGITS #c++ #python #ubuntu #farshidpirahansiah #tiziran.com #farshid #pirahansiah #robotics #MultiCameraMultiClassMultiObjectTracking #deeplearning #machinelearning #artificialintelligence #tensorflow #3dvision #sterovision #depthmap #RCNN #machinevision #imageprocessing #patternrecognition #compiler #RISC-V #RNN #fullStackDeepLearning #productinnovation #patents #TensorRT #ApacheTVM #TFLite #PyTorchmobile #dockers #gRPC #RESTAPIs #GraphQL #EnablingEfficient #high-performance #Accelerators #Optimization #AR/VR #SingleObjecttracking #SOT #MultiObjecttracking #MOT #MultiTargetTracking #MTT #MultiClassMultiObjecttracking #MCMOT #MCMCMOT #video #innovation #learning #datascience #CoreML #MLkit #DataDog #NewRelic #AmazonCloudWatch
