Posts

Implementation - Robot Navigation

Block Diagram

Initialization of the Pyro object:

    robot = Pyro4.Proxy("PYRONAME:example.robot")  # use name server object lookup uri shortcut
    con = connect()

Define particles: set the number of particles, initialize the particle array, and assign the initial position.

    # Generate initial particles. Each particle is (x, y, theta).
    number_of_particles = 25
    start_state = np.array([500.0, 0.0, 45.0 / 180.0 * pi])
    initial_particles = [copy.copy(Particle(start_state))
                         for _ in xrange(number_of_particles)]

Initiating the SLAM object:

    fs = FastSLAM(initial_particles, robot_width, scanner_displacement,
                  control_motion_factor, control_turn_factor,
                  measurement_distance_stddev, measurement_angle_stddev,
                  minimum_correspondence_likelihood)

Loop: check whether the incoming data is valid.

    try:
        while True:
            ...
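The particle setup from the excerpt can be run standalone in Python 3 with a minimal stand-in Particle class (FastSLAM itself and the Pyro4 robot proxy are omitted here; only the names from the snippet are reused):

```python
import copy
from math import pi

import numpy as np

class Particle:
    """Minimal stand-in: a particle just carries a pose (x, y, theta)."""
    def __init__(self, pose):
        self.pose = np.array(pose, dtype=float)

# Generate initial particles. Each particle is (x, y, theta).
number_of_particles = 25
start_state = np.array([500.0, 0.0, 45.0 / 180.0 * pi])
initial_particles = [copy.copy(Particle(start_state))
                     for _ in range(number_of_particles)]

print(len(initial_particles))        # → 25
print(initial_particles[0].pose[2])  # → 0.7853981633974483 (45 degrees in radians)
```

All particles start at the same pose; the filter spreads them out later through the noisy motion updates.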

localization - Robot Navigation

Localization

As discussed, localization is the process of positioning oneself with respect to the surroundings, which can be defined globally or locally. Consider a person in a dark room: if he has no sense of where he is with respect to the features of the room, his next movement cannot guarantee that he will not hit a wall. So it is important for a moving system to localize itself with respect to its surroundings. If the system has a map of its surroundings, then once it knows the current and goal positions, the actions are straightforward. If there is no map, as in our case, it needs to build a sufficiently accurate map while exploring, and its movement toward the goal should not be too costly. To resolve this problem, a Kalman-filter-integrated particle filter is used to probabilistically map the position and the surroundings.

Lidar Scan Data

Since a 2D lidar is used, the algorithm is only compatible with a single set of distance measurements around a fixed z-axis coordinate...
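The idea of probabilistic localization can be illustrated with a toy one-dimensional particle filter: predict each particle with a noisy motion model, weight by how well it matches a measurement, then resample. All numbers below are made up for illustration; the real system works on (x, y, theta) poses and lidar scans.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500
particles = rng.uniform(0.0, 10.0, size=n)  # initial position guesses (metres)
true_pos = 4.0

for _ in range(5):
    true_pos += 1.0                                # robot drives 1 m forward
    particles += 1.0 + rng.normal(0.0, 0.1, n)     # motion update with noise
    z = true_pos + rng.normal(0.0, 0.2)            # noisy position measurement
    weights = np.exp(-0.5 * ((particles - z) / 0.2) ** 2)
    weights /= weights.sum()                       # normalise importance weights
    particles = particles[rng.choice(n, size=n, p=weights)]  # resample

print(particles.mean())  # estimate close to the true position (9.0)
```

After a few update cycles the particle cloud collapses around the true position, which is exactly the behaviour the 2D filter exploits on real scan data.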

Communication link using Pyro4 - Robot Navigation

As mentioned earlier, the localization algorithm localizes the robot using its surroundings, and also maps the surroundings relative to the robot's position. It uses a standard particle filter to position the robot with respect to the obstacles, and each particle additionally goes through a Kalman filter to estimate the robot's next position from the control signal and the error dynamics of the robot. It is not entirely correct to divide the functionality of the two algorithms separately, since both work together as one in this scenario, but it is easier to understand them this way, and systematically the design is biased that way too. ...
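The per-particle prediction step can be sketched with the standard differential-drive motion model: the state is (x, y, theta) and the control is the pair of distances travelled by the left and right wheels. The robot's actual parameters are not given in the excerpt, so `robot_width` below is illustrative.

```python
from math import sin, cos

def predict(pose, control, robot_width):
    """Predict the next pose (x, y, theta) from wheel travel distances (l, r)."""
    x, y, theta = pose
    l, r = control
    if abs(r - l) < 1e-9:                    # equal wheel travel: straight motion
        return (x + l * cos(theta), y + l * sin(theta), theta)
    alpha = (r - l) / robot_width            # change in heading
    rad = l / alpha + robot_width / 2.0      # turn radius of the robot centre
    x += rad * (sin(theta + alpha) - sin(theta))
    y += rad * (-cos(theta + alpha) + cos(theta))
    return (x, y, theta + alpha)

print(predict((0.0, 0.0, 0.0), (10.0, 10.0), 15.0))  # straight: (10.0, 0.0, 0.0)
```

In the full filter, each particle would add noise to `control` (scaled by the motion and turn factors) before calling a prediction like this, so the particle cloud spreads to reflect control uncertainty.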

Full stack development of Robot navigation

Hi everyone. Developing a system that can behave intelligently in a previously unknown environment is challenging, because the decisions it has to make are totally, or at least mostly, unpredictable beforehand. Saying "totally unpredictable" would be wrong, as that leads to an unresolvable problem; every environment must offer some degree of predictability in order to make a confident judgment about the next move. In a controlled environment like an assembly line, the next movement is measured and predictable. For a car on the road, the next set of actions can be predicted using road signs, other vehicles' movements, the car's position and destination, and so on, with tolerable handling of perturbations. But for a rover on Mars, all the system has is an estimated model of how the environment could be, built from previous observations, which means the system has to predict its actions mostly according to what it senses. So that sort of automated...

Measuring 3D Distances between points using Kinect

Now it is time to move on to a simple application of the Kinect depth image. As I mentioned, the Kinect keeps track of the depth value of every pixel, indexed by pixel position. Those points correspond to real-world points, so every point in the depth image can be considered a three-dimensional vector (X, Y, Z) in the Cartesian coordinate system. If we denote two such points as the vectors V1 and U1, we can easily calculate the distance between them: C = V1 – U1. The magnitude of C is then the distance between Point1 and Point2. Before explaining how I coded it, I will show the code and the result. The result has pretty good accuracy, as I expected. To implement a vector, we can use the PVector data type from Processing with three argument...
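The vector subtraction described above can be sketched with NumPy (the two points below are made-up depth readings in millimetres, not values from the post):

```python
import numpy as np

# Two hypothetical 3-D points (X, Y, Z) taken from a depth image, in mm.
v1 = np.array([120.0, 85.0, 900.0])
u1 = np.array([300.0, 85.0, 1200.0])

c = v1 - u1                   # difference vector C = V1 - U1
distance = np.linalg.norm(c)  # magnitude of C = distance between the points
print(round(distance, 2))     # → 349.86
```

This is the same computation that Processing's PVector performs with `PVector.sub()` followed by `mag()`.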

Tracking nearest object with kinect

I think you are now familiar with the Kinect depth image and how it behaves. Next I will explain a basic example that will help you do some basic stuff with the Kinect. Before that, if you have not done so, see my earlier blog posts:

Installation: http://techgeekon.blogspot.com/2016/07/installingopenni-in-ubuntu-14.html
Introduction: http://techgeekon.blogspot.com/2016/07/hello-world-in-kinect-processing.html

As I mentioned before, the Kinect keeps track of the depth values of almost all pixels, indexed by pixel position, and these correspond to real-world points. If we can access those depth values as an array, then with a simple algorithm we can track which pixel has the least depth, and our goal is complete. SimpleOpenNI has pretty much all the necessary data structures to achieve this very easily. This method of tracking the nearest or a specific place which...
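The nearest-pixel search can be sketched independently of SimpleOpenNI: treat the depth map as a flat array (as Processing's depth map provides), skip zero readings (pixels with no depth data), and convert the winning index back to (x, y). The tiny array below is made up for illustration.

```python
import numpy as np

width, height = 4, 3  # tiny stand-in for the 640x480 Kinect depth map
# Hypothetical depth map, flattened row by row; 0 means "no reading".
depth = np.array([0, 900, 850, 920,
                  880, 700, 0, 910,
                  950, 860, 890, 930])

valid = depth > 0                                     # ignore missing readings
idx = np.flatnonzero(valid)[np.argmin(depth[valid])]  # flat index of nearest pixel
x, y = idx % width, idx // width                      # back to image coordinates
print(x, y, depth[idx])                               # → 1 1 700
```

On real frames you would run this once per frame; smoothing the result over a few frames helps, since single-pixel depth readings are noisy.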

Hello world with Kinect and processing

I hope everyone has configured Processing and OpenNI. If not, go to my first blog post here: http://techgeekon.blogspot.com/2016/07/installingopenni-in-ubuntu-14.html

First I will show you the Processing code for getting the depth image, and then I will explain it line by line along with the properties of the image. Copy this code and paste it into Processing, or download it from here. Now run it; the result is something like what is shown below: an image containing the depth image and the color image side by side. Now I will go through the code line by line so that you can understand it.

import SimpleOpenNI.*; - this imports the SimpleOpenNI library into the program
SimpleOpenNI kinect; - this creates an object from the SimpleOp...