Routines to control a humanoid echolocator robot.
Things that need to be done before every new experimental setup...
Make sure you have successfully completed all steps of the Setup before going on to this setup. Most importantly, the network setup needs to be ready before the robot and cameras are turned on so that they connect to the network correctly. For the full experiment you will need:
Turn on the network and wait for it to start up (can take a few minutes).
Important: do not unplug the cameras before shutting them down correctly, or the SD card might get corrupted. You can turn off a camera by pressing and holding the button on its side for longer than 7 seconds. (If you hold it for only a few seconds, the camera will reboot.) The green light will then blink slowly 10 times and turn off. Once only the red light remains on, it is safe to unplug the camera.
Fix the 4 cameras in different corners of the room such that they share a large common visible area.
It is useful to indicate the visible area so that no blind areas are entered by mistake during the experiments.
You may view a camera's visible area at http://172.16.156.139:8080/stream.html. Simply replace "139" in the URL with the desired camera number.
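The URL pattern above can be generated programmatically, which is handy when checking all cameras in turn. This is a small illustrative helper, not part of location.py; the camera numbers 139 to 142 are an assumption for the example, so substitute your own.

```python
def stream_url(camera_number):
    """Return the browser stream URL for a given camera number.

    Base address and port come from the example URL in the text;
    only the last octet of the IP changes per camera.
    """
    return "http://172.16.156.{}:8080/stream.html".format(camera_number)

# Example: print the stream URLs for four cameras (numbers assumed).
for n in range(139, 143):
    print(stream_url(n))
```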
You can find the visible area by always considering two neighbouring cameras and putting
the reference point where both cameras can just see it (see Figure 2).
Mark an area where the robot should be allowed to move. Place the robot at the critical points of the area and adjust the camera orientation such that it can just see the robot's head (see Figure 3).
Place 4 to 6 reference points in the visible area. For later processing, the reference points are numbered. Place the points such that all reference points are above an imaginary line drawn from the first reference point to the second reference point. (see Figure 4)
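The "all points above the line from point 1 to point 2" condition can be verified numerically with the sign of a 2D cross product. The sketch below is an assumed helper for illustration, not part of location.py; it takes "above" to mean a positive cross product when walking from reference point 1 toward reference point 2.

```python
def all_points_above_line(points):
    """Check the reference-point ordering convention.

    points: list of (x, y) tuples; points[0] and points[1] define the
    line, and every remaining point must lie strictly above it.
    """
    (x1, y1), (x2, y2) = points[0], points[1]
    for (x, y) in points[2:]:
        # 2D cross product of (P2 - P1) with (P - P1):
        # positive means P is to the left of the directed line P1 -> P2.
        cross = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
        if cross <= 0:  # on or below the line
            return False
    return True
```

Run this on the measured point coordinates before an experiment to catch a mis-ordered layout early.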
Measure the distances between all reference points and store the results (in meters) in the file input/objectpoints.cls. The file is structured like a Euclidean distance matrix, so element (i, j) corresponds to the distance between reference points i and j. You only need to fill out the lower triangle of the matrix, as it is symmetric. Leave the upper triangle blank or fill it with zeros (see Figure 5).
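A lower-triangular file like this can be mirrored into a full symmetric matrix before use. The sketch below assumes a whitespace-separated square matrix with zeros in the upper triangle; the actual parsing in location.py may differ.

```python
import numpy as np

def load_distance_matrix(path):
    """Load input/objectpoints.cls and return a full symmetric matrix."""
    D = np.loadtxt(path)
    # Keep the lower triangle and mirror it into the upper triangle.
    return np.tril(D) + np.tril(D, -1).T

# Example with an in-memory 3x3 lower-triangular matrix (values made up):
D = np.array([[0.0, 0.0, 0.0],
              [1.2, 0.0, 0.0],
              [2.5, 1.7, 0.0]])
D_full = np.tril(D) + np.tril(D, -1).T
print(D_full)
```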
If you are using the checkerboard for extrinsic calibration, place the checkerboard on a white support and place reference points in the 3 corners as shown in Figure 6.
While the program is still in its test phase, it is useful to also store the real position of the robot. As it is easier to measure the robot position with respect to the walls than with respect to the reference points, the coordinates of the robot are entered in the wall reference frame and converted by the program to the reference point frame.
Measure the x and y position of the first two reference points and store the results as PTS_BASIS in the program location.py. For the checkerboard, PTS_BASIS should correspond to the two checkerboard points closest to reference points 1 and 2, respectively.
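The wall-frame to reference-point-frame conversion mentioned above amounts to a 2D translation plus rotation. This sketch assumes the reference frame has its origin at reference point 1 with its x-axis pointing toward reference point 2; the actual conventions in location.py may differ, and the PTS_BASIS values here are placeholders.

```python
import numpy as np

# Wall-frame (x, y) coordinates of reference points 1 and 2, in meters
# (placeholder values -- measure your own).
PTS_BASIS = np.array([[0.5, 0.5],
                      [2.5, 0.5]])

def wall_to_reference(p_wall):
    """Convert a wall-frame point to the reference-point frame."""
    origin = PTS_BASIS[0]
    direction = PTS_BASIS[1] - origin
    theta = np.arctan2(direction[1], direction[0])
    # Rotate by -theta so the axis P1 -> P2 becomes the new x-axis.
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return R @ (np.asarray(p_wall, dtype=float) - origin)
```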
For visualization, an x and y margin is added to the basis reference. It can be defined in the code as MARGIN.
You can now run the program. Make sure that you have created an output folder (called "output" here) where you will store all results of this session. Make sure that all parameters are available in the input folder (called "input" here), see Figure 7.
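A quick pre-flight check like the following can catch a missing folder or input file before a session starts. This is a minimal sketch: the folder names come from the text, but the list of required input files is an assumption (only objectpoints.cls and sound.wav are named elsewhere in this document).

```python
import os

# Create the output folder if it does not exist yet.
os.makedirs("output", exist_ok=True)

# Assumed list of required input files; extend it for your setup.
required = ["objectpoints.cls", "sound.wav"]
missing = [f for f in required
           if not os.path.isfile(os.path.join("input", f))]
if missing:
    print("Missing input files:", missing)
else:
    print("Input folder looks complete.")
```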
It is possible that you have to adjust the following parameters:

Some verification is recommended before going on with the experiments. See Analysis for more possible verifications.
You should check whether the reprojection works. The reprojection corresponds to the red dots in output/summaryX.png (see Figure 8). They need to be superimposed on the image at the respective positions. There can be several reasons for bad matches:
It is recommended to check whether the cameras are placed approximately at the correct positions. The camera centers are found in output/cameraX.png, and they are stored with respect to the reference point frame. See Analysis for how to get a visualization of the camera centers in the room reference frame.
Some calibration is required for the audio setup. If you wish to recalibrate the latency time of the sound system (the delay added to the impulse responses), you can do this following the steps proposed in Analyze. All you need to do is choose the sound.wav file to be an approximation of white noise, run the program location.py again, and answer yes to "Do you want to localize the robot using acoustics?". This sends the signal sound.wav and saves the recorded responses in the output folder. The class Analysis.py (Analyze) then does the cross-correlation for you. All that remains is to manually change the latency time TLAT in Analysis.py to the time where the maximum of the cross-correlation occurs.
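The latency estimate described above boils down to finding the lag of the cross-correlation peak between the emitted signal and the recording. This is an illustrative sketch of that step, not the actual code in Analysis.py; only the name TLAT comes from the text, and the sampling rate is an assumption.

```python
import numpy as np

def estimate_latency(sent, recorded, fs):
    """Return the delay (seconds) of `recorded` relative to `sent`.

    The lag of the cross-correlation maximum is the candidate value
    for TLAT in Analysis.py.
    """
    corr = np.correlate(recorded, sent, mode="full")
    lag = np.argmax(corr) - (len(sent) - 1)
    return lag / float(fs)

# Synthetic check: a white-noise burst delayed by 100 samples.
fs = 44100  # assumed sampling rate
rng = np.random.default_rng(0)
sent = rng.standard_normal(1024)
recorded = np.concatenate([np.zeros(100), sent])
print(estimate_latency(sent, recorded, fs))  # 100 / 44100 seconds
```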
Important: for accurate results, it is important to measure the actual temperature of the room and to adjust the speed of sound C in Analysis.py accordingly.
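The temperature dependence of C can be computed with the standard approximation for dry air; the exact formula used in Analysis.py is not stated in this document, so treat this as a sketch.

```python
def speed_of_sound(temp_celsius):
    """Approximate speed of sound in dry air, in m/s.

    Standard dry-air approximation: 331.3 m/s at 0 degrees C, scaled
    by the square root of the absolute-temperature ratio.
    """
    return 331.3 * (1.0 + temp_celsius / 273.15) ** 0.5

print(speed_of_sound(20.0))  # roughly 343 m/s at 20 degrees C
```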
Finally, make sure that the gains of the microphones and speakers are set appropriately for the given setup, meaning that no clipping occurs and there is a reasonable signal-to-noise ratio.
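Both conditions can be checked on a test recording. The helper below is illustrative and not part of the programs above; it assumes samples normalized to [-1, 1] and that a stretch of pure background noise is available for the SNR estimate.

```python
import numpy as np

def check_recording(signal, noise, clip_level=0.99):
    """Return (clipped, snr_db) for a test recording.

    signal: samples containing the played-back sound;
    noise: samples of background noise only, same scale.
    """
    # Samples at or near full scale indicate clipping.
    clipped = bool(np.any(np.abs(signal) >= clip_level))
    # SNR from mean powers, in decibels.
    snr_db = 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
    return clipped, snr_db
```

If clipping is reported, lower the speaker or microphone gain; if the SNR is low, raise the gain or the playback level until the estimate is comfortably positive.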