Development of a Human Tracking Indoor Mobile Robot Platform

Gürkan Küçükyıldız, Suat Karakaya

In this paper, a differential drive mobile robot platform was developed in order to perform indoor mobile robot research. The mobile robot can be localized and remotely controlled; the remote control consists of a pair of 2.4 GHz transceivers. The localization system was developed using infrared reflectors, infrared LEDs and a camera, and runs in real time on an industrial computer placed on the mobile robot. The localization data of the mobile robot is transmitted by a UDP communication program, so the transmitted localization information can be received by any computer or other UDP device. In addition, a LIDAR (Light Detection and Ranging, or Laser Imaging Detection and Ranging) and a Kinect three-dimensional depth sensor were adapted to the mobile robot platform. The LIDAR was used for obstacle detection and heading direction detection, and the Kinect for obtaining depth data of the close environment. In this study, a mobile robot platform with the features mentioned above was developed, and a human tracking application was realized in real time in the MATLAB and C# environments.
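
The abstract does not specify the UDP message format; a minimal sketch of how the pose broadcast could look in Python (the address, port and JSON layout are assumptions for illustration only):

```python
import json
import socket

# Hypothetical UDP publisher for the robot pose; the address, port and
# message layout are assumptions, not taken from the paper.
UDP_TARGET = ("192.168.1.255", 5005)   # assumed broadcast address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

def publish_pose(x_m, y_m, theta_rad):
    """Send the current pose (x, y, heading) as one JSON datagram."""
    packet = json.dumps({"x": x_m, "y": y_m, "theta": theta_rad}).encode()
    sock.sendto(packet, UDP_TARGET)

# Example: publish one localization update.
publish_pose(1.25, 3.40, 0.79)
```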

A Hybrid Indoor Localization System Based on Infra-red Imaging and Odometry

Gürkan Küçükyıldız, Suat Karakaya

In this study, a real-time indoor localization system was developed by using a camera and passive landmarks. A narrow band-pass infra-red (IR) filter was inserted at the back of the camera lens for capturing IR images. The passive landmarks were placed on the ceiling at pre-determined locations and consist of IR retro-reflective tags that carry binary-coded unique IDs. An IR projector emits IR rays at the tags on the ceiling; the tags reflect the rays back to the camera sensor, creating a digital image. An image processing algorithm was developed to detect and decode the landmarks in the captured images. The proposed algorithm successfully estimates the position and the orientation angle based on the relative position and orientation with respect to the detected tags. To further improve the accuracy of the estimates, an extended Kalman filter (EKF) was adapted to the measurement algorithm. The proposed method initially estimates the position of a mobile robot based on odometry and the kinematic model; the EKF is then used to update the estimates given the measurement obtained from the image processing system. Real-time experiments were performed to test the performance of the system. The results show that the proposed indoor localization system can effectively estimate position with an error of less than 5 cm.
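
A minimal sketch of the predict/update cycle described above, assuming a unicycle odometry model and a camera measurement that yields the full pose; the state layout and noise matrices are illustrative, not taken from the paper:

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate the pose [x, y, theta] with the differential-drive kinematic model."""
    theta = x[2]
    x_pred = x + np.array([v * dt * np.cos(theta),
                           v * dt * np.sin(theta),
                           w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the pose with an absolute (x, y, theta) fix from the IR tag system."""
    H = np.eye(3)                      # the camera system measures the pose directly
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```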

Vision Based Control of Magnetic Levitation System

Gürkan Küçükyıldız

This work presents vision based control of a single-axis magnetic levitation system. The system is fundamental to high-speed maglev trains and magnetic bearings. A ferromagnetic object is levitated at a desired position in the air gap by applying an electromagnetic force against gravity. The system consists of five main components: a camera used as the position sensor, a coil, a controller, a driver and a ferromagnetic object. The current in the coil, which produces the electromagnetic force, was controlled according to the position feedback. The mathematical model of the system was obtained based on Newton's laws and verified with experimental data. The nonlinear relationships between force, current and air gap were found experimentally, and the controller was designed using the feedback linearization technique based on these relationships. In the literature, light based sensors have mostly been used to detect the object's position. Despite the widespread use of such conventional sensors, they have disadvantages such as the need for calibration, nonlinearity, noise and lack of robustness, which inevitably introduce disturbances into the system. In this study, the position of the magnetic object was detected using image processing methods in order to overcome the disadvantages associated with light based sensors.
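
The paper identifies the force-current-gap relationship experimentally; as a rough sketch of the feedback linearization idea, assume a quadratic force model F(i, x) = k i^2 / x^2 (an assumption, not the identified model) and invert it for the coil current:

```python
import numpy as np

# Illustrative parameters only; the real force map was identified experimentally.
m, g, k = 0.05, 9.81, 1.0e-4          # mass [kg], gravity [m/s^2], force constant

def feedback_linearizing_current(x, x_dot, x_ref, kp=400.0, kd=40.0):
    """Compute the coil current so that the closed loop behaves linearly.

    The air gap x obeys m*x'' = m*g - F(i, x); a PD law chooses the desired
    acceleration, and the assumed force model is inverted for the current.
    """
    a_des = -kp * (x - x_ref) - kd * x_dot     # virtual linear control
    f_des = m * (g - a_des)                    # magnetic force needed
    f_des = max(f_des, 0.0)                    # the coil can only attract
    return x * np.sqrt(f_des / k)              # invert F(i, x) = k*i^2/x^2
```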

Image Processing Based Package Volume Detection with Kinect

Gürkan Küçükyıldız, Suat Karakaya

In this study, an image processing based package volume detection scheme that utilizes the Kinect depth sensor was developed in the Matlab environment. The background subtraction method was used to obtain the foreground image containing the package to be measured from the Kinect depth image. Connected components labeling was used to segment the foreground image; out of the resulting components, the one with the maximum pixel area overlapping the measuring plate was assumed to be the package of interest. The package orientation angle and center point were then determined. The Hough transform was applied to the package image to obtain the lines that pass through the package edges, and the package corners were obtained by finding the four intersection points of the detected lines. Real-world coordinates of the package corners were calculated using the Kinect's intrinsic matrix. The package width and length were determined by finding the distances between the corners in the real-world coordinate system. Finally, the package height was determined by taking the difference between the plate depth and the average depth of the points on the package surface. It was observed that the algorithm performed successfully and the measurement error was within 1 cm in the presence of various disturbance effects.
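
A minimal Python/OpenCV sketch of the corner extraction and back-projection steps (the study was implemented in Matlab; the function names, thresholds and the selection of the four corner intersections are simplified assumptions):

```python
import itertools
import cv2
import numpy as np

def line_intersection(l1, l2):
    """Intersect two Hough lines given in (rho, theta) form; None if nearly parallel."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:
        return None
    return np.linalg.solve(A, np.array([r1, r2]))

def edge_line_intersections(package_mask):
    """Hough lines on the package edges and all their pairwise intersections."""
    edges = cv2.Canny(package_mask, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=60)
    if lines is None:
        return []
    pts = []
    for l1, l2 in itertools.combinations(lines[:, 0], 2):
        p = line_intersection(l1, l2)
        if p is not None:
            pts.append(p)
    return pts   # in practice, keep the four intersections nearest the package blob

def pixel_to_world(u, v, depth_m, K):
    """Back-project a pixel to metric camera coordinates using the intrinsic matrix K."""
    x = (u - K[0, 2]) * depth_m / K[0, 0]
    y = (v - K[1, 2]) * depth_m / K[1, 1]
    return np.array([x, y, depth_m])
```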

Design and Navigation of a Robotic Wheel Chair

Gürkan Küçükyıldız, Suat Karakaya

In this study, the design and navigation of a robotic wheelchair for disabled or elderly people was explored. The developed system consists of a wheelchair, a high-power motor controller card, a Kinect camera, an RGB camera, an EMG sensor, an EEG sensor, and a computer. The Kinect camera was installed on the system in order to provide safe navigation; depth frames captured by the Kinect camera were processed with the developed image processing algorithm to detect obstacles around the wheelchair. The RGB camera was mounted on the system in order to detect head movements of the user. Head movement has the highest priority for controlling the system: if any head movement is detected, all other sensors are disabled. The EMG sensor was selected as the second controller of the system. A consumer-grade EMG sensor (Thalmic Labs) was used to obtain eight-channel EMG data in real time. Four different hand movements (fist, release, left and right) were defined to control the system using EMG. The EMG data was classified with different classification algorithms (ANN, SVM and random forest), and the most voted class was selected as the result. EMG-based control can be activated or disabled by the user by holding a fist or release gesture for three seconds. EEG-based control has the lowest priority for controlling the robotic wheelchair. A wireless 14-channel EEG sensor (Emotiv Epoch) was used to collect real-time EEG data. Three different cognitive tasks (solving mathematical problems, relaxing and a social task) were defined to control the system using EEG. If the system cannot detect a head movement or an EMG signal, EEG-based control is activated; in order to issue a command, the user should accomplish the corresponding cognitive task. During the experiments, all users could easily control the robotic wheelchair by head movements and EMG gestures. The success of EEG-based control of the robotic wheelchair varies with user experience: experienced and inexperienced users obtain different results.
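
A small sketch of the priority scheme and the majority vote over the three EMG classifiers; the function and command names are placeholders, not the study's actual interface:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most voted class among the ANN, SVM and random-forest outputs."""
    return Counter(predictions).most_common(1)[0][0]

def select_command(head_cmd, emg_predictions, eeg_cmd, emg_enabled):
    """Arbitrate the wheelchair command: head movement > EMG > EEG.

    head_cmd / eeg_cmd are None when that source detects nothing;
    emg_predictions is the list of per-classifier EMG decisions.
    """
    if head_cmd is not None:                    # head movement has the highest priority
        return head_cmd
    if emg_enabled and emg_predictions:         # EMG is second
        return majority_vote(emg_predictions)
    return eeg_cmd                              # EEG only when the others are silent

# Example: no head movement, EMG classifiers disagree two-to-one.
print(select_command(None, ["left", "left", "fist"], "relax", emg_enabled=True))
```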

Development And Optimization Of DSP Based Real Time Lane Detection Algorithm On A Mobile Robot Platform

Gürkan Küçükyıldız

In this study, the development and optimization of a Hough transform based real-time lane detection algorithm were explored. Finding lane marks by applying the Hough transform to captured video frames was the main goal of the system. The image processing code was developed in the Visual DSP 5.0 environment and run on the BF561 processor of the ADSP-BF561 EZ-KIT Lite evaluation board. The code was optimized into a form satisfactory for real-time applications. A mobile robot platform was developed during the study, and the image processing algorithm was tested on this platform. The experimental results obtained before and after the optimization of the code were compared.
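
The DSP implementation itself is not reproduced here; a minimal Python/OpenCV sketch of the Hough-based lane mark search illustrates the algorithm (the thresholds and region-of-interest choice are assumptions):

```python
import cv2
import numpy as np

def detect_lane_marks(frame_bgr):
    """Return line segments that are candidate lane marks in a video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 80, 160)
    edges[: edges.shape[0] // 2, :] = 0        # keep only the lower half (road region)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l) for l in lines[:, 0]]
```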

Image Processing Based Indoor Localization System

Gürkan Küçükyıldız, Suat Karakaya

In this study, a low-cost, image processing based indoor localization system was developed. The image processing algorithm was developed in the C++ programming language using the OpenCV image processing library. Frames were captured by a USB camera designed to operate at an 850 nm wavelength in order to eliminate environmental disturbances. A narrow band-pass filter was integrated into the camera so that only retro-reflective labels are detected. The retro-reflective labels were placed on the ceiling of the indoor area on a pre-determined, equally spaced grid. The approximate location of the mobile robot was obtained from the label identity, and the exact location was obtained from the detected label's position in the image coordinate system. The developed system was tested on a mobile robot platform, and it was observed that the system operates successfully in real time.
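
A minimal sketch of how the bright retro-reflective labels could be segmented in each IR frame (the original code is in C++/OpenCV; the threshold and blob-size values below are assumptions):

```python
import cv2

def detect_labels(ir_frame_gray, min_area=20, threshold=200):
    """Segment the bright retro-reflective labels and return their image centroids."""
    # with the 850 nm band-pass filter the labels are the brightest regions
    _, mask = cv2.threshold(ir_frame_gray, threshold, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = []
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > min_area:
            blobs.append(tuple(centroids[i]))    # (u, v) in image coordinates
    return blobs
```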

Development of a LIDAR System Using a Camera and a Laser

Gürkan Küçükyıldız, Suat Karakaya

In this study, instantaneous distance measurement of objects in the environment using a camera and a laser was investigated. In the developed system, the camera and the laser are kept fixed, and a mirror is used to change the viewing direction of both. The mirror was integrated into the system at a 45° angle to the optical axis of the camera. A geared DC motor is used to rotate the mirror. In this way, the system can acquire data over a 270° field at the desired speed and resolution. The code required for the system was written in Python, and a development board based on the Atmel ATmega328P processor was used to control the DC motor. Experiments showed that the developed system scans a 360° field in 1.8 seconds with a resolution of 3.30°.
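
The abstract does not state how range is computed from the camera image; a common camera-laser approach is triangulation on the laser dot's pixel offset, sketched below with assumed calibration constants:

```python
import math

# Assumed calibration constants for a camera-laser triangulation rangefinder;
# the actual range computation used in the study is not given in the abstract.
BASELINE_M = 0.06          # distance between the laser and the camera axis
RAD_PER_PIXEL = 0.0017     # angular size of one pixel
ANGLE_OFFSET_RAD = 0.0     # residual alignment error found by calibration

def pixel_to_range(pixel_from_center):
    """Estimate the distance to the laser dot from its pixel offset in the image."""
    angle = pixel_from_center * RAD_PER_PIXEL + ANGLE_OFFSET_RAD
    return BASELINE_M / math.tan(angle)

# Example: a laser dot 40 pixels from the image centre.
print(round(pixel_to_range(40), 3), "m")
```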

Kinect Based Robot Arm Control

Gürkan Küçükyıldız, Suat Karakaya

In this study, moving a robot arm in real time with a Kinect sensor was investigated. The system developed for this purpose uses a Kinect sensor and a computer. In addition, a three-axis robot was developed during the study, and the experiments were carried out in real time on this robot. The motion of the three-axis robot is provided by RC servo motors controlled by an Arduino Uno R3 board. To find the joint angles, the image obtained from the Kinect camera is skeletonized by an image processing program developed in the Processing 2.0b9 environment. A vector is drawn over each human limb whose angle is to be determined, and these vectors are passed through trigonometric operations to give the angles between the limbs. After the obtained angle values are sent to the Arduino Uno R3 board via serial communication, the servo motors that actuate the robot are rotated according to these angles, providing the motion of the system. As a result of the experiments, it was observed that the developed system works successfully and that the robot arm can imitate the performed movements in real time.
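
The original program runs in the Processing environment; the angle computation and serial hand-off can be sketched in Python as follows (the port name and message format are assumptions):

```python
import math
import serial                     # pyserial, to talk to the Arduino Uno R3

def joint_angle_deg(v1, v2):
    """Angle between two limb vectors (e.g. upper arm and forearm) in degrees."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_a = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

arduino = serial.Serial("/dev/ttyACM0", 9600)   # assumed port name

def send_angles(angles_deg):
    """Send the servo angles as one comma-separated line, e.g. '90,45,120'."""
    line = ",".join(str(int(a)) for a in angles_deg) + "\n"
    arduino.write(line.encode())
```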

Kinect based control of a Mobile Robot

Gürkan Küçükyıldız, Suat Karakaya

In this study, Kinect based control of a mobile robot system was examined. A mobile robot platform was developed for this purpose, and the developed algorithms were tested on this platform in real time. The mobile robot was actuated by DC motors. Frames captured from the Kinect sensor, which was placed at the front of the mobile robot, were processed in the Visual Studio C# environment by the developed image processing algorithm. The distance between the Kinect sensor and the detected skeleton was obtained by this algorithm, and the results were sent to the developed control card via a serial port. The control card drove the actuators with a PD speed control algorithm. As a result, it was observed that the developed system operates successfully and follows the skeleton.
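
A minimal sketch of the PD speed loop driven by the measured skeleton distance (the gains, target distance and update rate are illustrative, not from the study):

```python
class PDController:
    """PD speed controller that keeps the robot at a set distance from the user."""

    def __init__(self, kp, kd, target_distance_m):
        self.kp = kp
        self.kd = kd
        self.target = target_distance_m
        self.prev_error = 0.0

    def update(self, measured_distance_m, dt):
        """Return a speed command from the current skeleton distance."""
        error = measured_distance_m - self.target   # positive: user too far, drive forward
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

# Example: keep 1.5 m to the tracked skeleton, updated at 30 Hz.
pd = PDController(kp=0.8, kd=0.1, target_distance_m=1.5)
speed_cmd = pd.update(measured_distance_m=2.0, dt=1 / 30)
```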