Routines to control a humanoid echolocator robot.

Semester Project by Frederike Dümbgen

Acoustic Robot

Setup

Things that need to be done before every new experimental setup...

Make sure you have successfully completed all steps of the Setup before going on to this experimental setup. Most importantly, the network setup needs to be ready before the robot and cameras are turned on so that they connect to the network correctly. For the full experiment you will need:

Figure 1 - from left to right: Robot with acoustic head, Webcam, Reference point and checkerboard, Speaker

Start network

Turn on the network and wait for it to start up (can take a few minutes).

Place cameras

Important: do not unplug the cameras before shutting them down correctly, or the SD card might get corrupted. You can turn off a camera by pressing and holding the button on its side for longer than 7 seconds. (If you hold it for only a few seconds, the camera will reboot.) The green light will then blink slowly 10 times and turn off. When only the red light is on, it is safe to unplug the camera.

Fix the 4 cameras in different corners of the room such that they have a large common visible area. It is useful to mark the visible area so that no blind spots are entered by mistake during the experiments.
You can view a camera's visible area at
http://172.16.156.139:8080/stream.html.
Simply replace "139" in the URL with the desired camera number. You can find the visible area by always considering two neighbouring cameras and placing the reference point where both cameras can just see it (see Figure 2).

Figure 2 - Example of finding the biggest common visible area
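For convenience, a tiny sketch (not part of the original code) that prints the stream URL for each camera; the camera numbers are examples taken from this guide:

```python
# Print the stream URL of every camera; 139 and 141 are example numbers
# mentioned in this guide, replace them with your actual camera numbers.
CAMERAS = [139, 141]

for cam in CAMERAS:
    print('http://172.16.156.{}:8080/stream.html'.format(cam))
```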

Mark an area where the robot should be allowed to move. Place the robot at the critical points of the area and adjust the camera orientation such that it can just see the robot's head (see Figure 3).

Figure 3 - Example of a well-adjusted camera for robot detection. The white crosses mark the area where the robot is allowed to move.

Place and measure reference points

Reference points

Place 4 to 6 reference points in the visible area. For later processing, the reference points are numbered. Place the points such that all reference points are above an imaginary line drawn from the first reference point to the second reference point (see Figure 4; a small check for this condition is sketched below).

Figure 4 - Example of correct reference point placement (all points are above the line between points 1 and 2)
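To make the numbering constraint concrete, here is a minimal sketch (not part of the original code) that verifies all points lie above the directed line from point 1 to point 2, using the sign of the 2D cross product:

```python
import numpy as np

def all_points_above_line(points):
    """Return True if every point after the first two lies above
    (to the left of) the directed line from point 1 to point 2."""
    points = np.asarray(points, dtype=float)
    p1, p2 = points[0], points[1]
    d = p2 - p1
    # A positive 2D cross product means the point is left of the line p1 -> p2.
    cross = d[0] * (points[2:, 1] - p1[1]) - d[1] * (points[2:, 0] - p1[0])
    return bool(np.all(cross > 0))

# Example: points 3 and 4 are above the line between points 1 and 2.
print(all_points_above_line([[0, 0], [2, 0], [0.5, 1.0], [1.5, 0.8]]))  # True
```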

Measure the distances between all reference points and store the results (in meters) in the file input/objectpoints.cls. The file is structured like a Euclidean distance matrix, so the element i,j corresponds to the distance between reference points i and j. You only need to fill out the lower triangle of the matrix, as it is symmetric. Leave the upper triangle blank or fill it with zeros (see Figure 5).

Figure 5 - How to measure and save the distances between points.
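As an illustration, the following sketch reads such a file into a full symmetric matrix; the comma-separated layout and zero-filled upper triangle are assumptions, so adapt the delimiter to the actual format of objectpoints.cls:

```python
import numpy as np

# Read the lower-triangular distance matrix (in meters), assuming
# comma-separated values with zeros in the upper triangle.
D = np.loadtxt('input/objectpoints.cls', delimiter=',')

# Mirror the lower triangle to obtain the full symmetric matrix;
# the diagonal stays zero (distance of a point to itself).
D = D + D.T
print(D)
```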

Checkerboard

If you are using the checkerboard for extrinsic calibration, place the checkerboard on a white support and place reference points at the 3 corners as shown in Figure 6.

Figure 6 - How to place checkerboard and reference points and the resulting numbering.

While the program is still in its test phase, it is useful to also store the real position of the robot. As it is easier to measure the robot position with respect to the walls than with respect to the reference points, the coordinates of the robot are entered in the wall reference frame and converted by the program to the reference point frame.

Wall reference frame

Measure the x and y position of the first two reference points and store the results as PTS_BASIS in the program location.py. For the checkerboard, PTS_BASIS should correspond to the two checkerboard points closest to points 1 and 2, respectively.

For visualization, an x and y margin is added to the basis reference. It is defined in the code as MARGIN.
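As an illustrative sketch of this conversion (the values, and the convention that the origin sits at point 1 with the x-axis pointing toward point 2, are assumptions; the actual convention in location.py may differ):

```python
import numpy as np

# Hypothetical wall-frame coordinates (meters) of reference points 1 and 2,
# as they would be stored in PTS_BASIS in location.py.
PTS_BASIS = np.array([[0.5, 0.4],   # reference point 1
                      [2.1, 0.4]])  # reference point 2
MARGIN = np.array([0.3, 0.3])       # x/y margin added for visualization

def wall_to_reference(p_wall):
    """Convert a point from the wall frame to the reference point frame,
    assuming the origin is at point 1 and the x-axis points toward point 2."""
    origin = PTS_BASIS[0]
    x_axis = (PTS_BASIS[1] - origin) / np.linalg.norm(PTS_BASIS[1] - origin)
    y_axis = np.array([-x_axis[1], x_axis[0]])  # x-axis rotated 90 degrees CCW
    d = np.asarray(p_wall, dtype=float) - origin
    return np.array([d @ x_axis, d @ y_axis])

print(wall_to_reference([1.0, 1.0]))  # robot position measured from the walls
```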

Extrinsic calibration

You can now run the program. Make sure that you have created an output folder (called "output" here) where you will store all results of this session, and that all parameters are available in the input folder (called "input" here), see Figure 7.

Figure 7 - File structure. X corresponds to camera number (139, 141, etc.) and N to iteration number. TIME is the time when the program is started.
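A small sketch of a pre-flight check along these lines (the list of required input files is an assumption based on the files mentioned in this guide):

```python
import os

# Check that the expected input files exist and create the output folder.
REQUIRED = ['objectpoints.cls', 'sound.wav']  # assumed required inputs
missing = [f for f in REQUIRED if not os.path.isfile(os.path.join('input', f))]
if missing:
    raise FileNotFoundError('Missing input files: {}'.format(missing))
os.makedirs('output', exist_ok=True)
```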

You may have to adjust some parameters, depending on the camera location and the image resolution. The terminal output indicates which parameters probably need to be changed.

Verification

Some verification is recommended before going on with the experiments. See Analysis for more possible verifications.

Reprojection of reference points

Figure 8 - Visualization of reprojection errors in "output/summary_X.png".
You should check whether the reprojection works. The reprojection corresponds to the red dots in output/summary_X.png (see Figure 8). They need to be superimposed on the image of the respective positions. There can be several reasons for bad matches: if the reprojection is completely off because of wrong numbering of the reference points, a warning appears (caused by the program not finding a valid homography) and you should not continue with the experiments before fixing the problem.
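For reference, a reprojection check of this kind could be sketched with OpenCV as follows; all calibration values below are placeholders, not the actual output of the program:

```python
import numpy as np
import cv2

# Placeholder calibration results (substitute the actual values).
camera_matrix = np.array([[800., 0., 320.],
                          [0., 800., 240.],
                          [0., 0., 1.]])
dist_coeffs = np.zeros(5)
rvec = np.zeros(3)
tvec = np.array([0., 0., 3.])

# Reference points in 3D (meters) and their measured image positions (pixels).
objpoints = np.array([[0.0, 0.0, 0.0],
                      [1.2, 0.0, 0.0],
                      [0.8, 0.9, 0.0]], dtype=np.float32)
imgpoints = np.array([[320., 400.], [610., 395.], [480., 210.]], dtype=np.float32)

projected, _ = cv2.projectPoints(objpoints, rvec, tvec, camera_matrix, dist_coeffs)
errors = np.linalg.norm(projected.reshape(-1, 2) - imgpoints, axis=1)
print('Reprojection errors (pixels):', errors)
```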

Camera centers

It is recommended to check whether the cameras are placed approximately at the correct positions. The camera centers are found in output/cameraX.png, and they are stored with respect to the reference point frame. See Analysis for how to get a visualization of the camera centers in the room reference frame.
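For background, the camera center can be recovered from the extrinsic parameters as C = -R^T t. A minimal sketch with placeholder extrinsics:

```python
import numpy as np
import cv2

# Placeholder extrinsics for one camera (rotation vector and translation).
rvec = np.array([0.1, -0.2, 0.05])
tvec = np.array([[1.5], [0.3], [2.0]])

R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
center = -R.T @ tvec         # camera center in the reference point frame
print(center.ravel())
```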

Audio setup

Some calibration is required for the audio setup. If you wish to recalibrate the latency time of the sound system (the delay added to the impulse responses), you can do this following the steps proposed in Analyze. All you need to do is choose the sound.wav file to be an approximation of white noise, run the program location.py again, and answer yes to "Do you want to localize the robot using acoustics?". This sends the signal sound.wav and saves the recorded responses in the output folder. The class Analysis.py (Analyze) then does the cross-correlation for you. All you need to do is manually change the latency time TLAT in Analysis.py to the time where the maximum of the cross-correlation occurs.
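A minimal sketch of that cross-correlation step, assuming mono WAV files for the emitted signal and one recorded response (the recorded file name is hypothetical):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

fs, played = wavfile.read('input/sound.wav')             # emitted signal
fs_rec, recorded = wavfile.read('output/recorded.wav')   # recorded response
assert fs == fs_rec

corr = correlate(recorded.astype(float), played.astype(float), mode='full')
lag = np.argmax(corr) - (len(played) - 1)  # delay in samples
TLAT = lag / float(fs)                     # latency in seconds, for Analysis.py
print('Estimated latency: {:.4f} s'.format(TLAT))
```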

Important: For accurate results, measure the actual temperature of the room and adjust the speed of sound C in Analysis.py accordingly.
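For reference, a common linear approximation of the speed of sound in air as a function of temperature:

```python
def speed_of_sound(temperature_celsius):
    """Approximate speed of sound in air (m/s), valid near room temperature."""
    return 331.3 + 0.606 * temperature_celsius

C = speed_of_sound(22.0)  # a 22 degree room gives about 344.6 m/s
```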

Finally, make sure that the gains of the microphones and speakers are set appropriately for the given setup, meaning that no clipping occurs and there is a reasonable signal-to-noise ratio.
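A quick sanity check along these lines could look as follows; the file name and the assumption that the first 0.1 s contain only background noise are illustrative:

```python
import numpy as np
from scipy.io import wavfile

fs, rec = wavfile.read('output/recorded.wav')     # hypothetical recording
rec = rec.astype(float) / np.iinfo(np.int16).max  # assuming 16-bit samples

# Clipping check: samples at (or very near) full scale suggest too much gain.
if np.max(np.abs(rec)) >= 0.99:
    print('Warning: recording is probably clipped, lower the gain.')

# Rough SNR estimate, assuming the first 0.1 s contain only background noise.
noise = rec[:int(0.1 * fs)]
snr_db = 10 * np.log10(np.mean(rec ** 2) / np.mean(noise ** 2))
print('Approximate SNR: {:.1f} dB'.format(snr_db))
```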