Thursday, November 4, 2010

Lab Exercise 7

Date:
28/10 2010

Duration of activity:
3½ hours

Group members participating:
Frederik, Lasse & Christian

Goals:
The overall goal for this lab session is to construct and program three different Braitenberg vehicles and thereby experiment with different sensor setups.

Plan:
The plan for the lab session is to construct the three Braitenberg vehicles shown in figure 1, using different sensors as input.
We will use a sound sensor for vehicle 1 because it is easy to create distinguishable inputs. For vehicles 2a and 2b we will use light sensors because it is fairly easy to change the light conditions for a single sensor, for example by covering one of them. This would be more difficult if sound sensors were used, but ultrasonic sensors could also be a solution.


Figure 1

Results:
This section describes the implementation of the three mentioned vehicles, supported by photos and videos.

Vehicle 1
According to figure 1, vehicle 1 shall move closer to the source, which in our case means moving faster the louder the sound is.
The car was built with one sound sensor mounted. The car structure was reused from the previous lab session.
We made a thread for the control program to encapsulate the functionality/behaviour, to make it easier to terminate the program, and to be able to re-use some of the program structure for the multi-threaded solutions for vehicles 2a and 2b. The code for the class is found in Vehichle1 [2].
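
A minimal sketch of the idea is shown below. The sensor port, the scaling factor and the class name are illustrative and not taken from the actual Vehichle1 code:

    import lejos.nxt.Motor;
    import lejos.nxt.SensorPort;
    import lejos.nxt.SoundSensor;

    // Sketch of the vehicle 1 behaviour thread: the louder the sound,
    // the faster the car drives.
    public class Vehicle1Sketch extends Thread {
        private final SoundSensor sound = new SoundSensor(SensorPort.S1);
        private volatile boolean running = true;

        public void run() {
            while (running) {
                int level = sound.readValue();   // sound level 0..100
                int speed = level * 7;           // map to motor speed (deg/s)
                Motor.B.setSpeed(speed);
                Motor.C.setSpeed(speed);
                Motor.B.forward();
                Motor.C.forward();
            }
            Motor.B.stop();
            Motor.C.stop();
        }

        public void terminate() {                // lets the main program stop the behaviour
            running = false;
        }
    }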

We had no major difficulties in getting the right behaviour. A video of the program in action is seen below:



Vehicle 2a
According to figure 1, vehicle 2a shall move away from the source, which in our case means moving away from the bright light source.
The car was built with two RCX light sensors mounted. The car structure was reused from the Alishan lab exercise, as seen in figure 2.


Figure 2

Our first attempt was a direct mapping from the light values to the motor power. With this implementation it was hard to notice any difference in behaviour, so improvements had to be made to distinguish the colours better.
The second attempt made use of normalization, because the sensor measurements didn't cover the whole range of possible values.
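
The normalization itself is a simple linear rescaling of the raw reading onto the full 0..100 range. A sketch of the idea is shown below; the class name is made up and the min/max bounds are illustrative example values, not our measured ones (the actual code is in Vehichle2a [3]):

    // Sketch of the normalization used for vehicle 2a: the raw light readings
    // only span part of the possible range, so they are stretched to 0..100.
    public class Normalizer {
        private final int min;   // lowest raw reading expected (example value)
        private final int max;   // highest raw reading expected (example value)

        public Normalizer(int min, int max) {
            this.min = min;
            this.max = max;
        }

        // Map a raw reading onto 0..100 so the full motor power range is used.
        public int normalize(int raw) {
            if (raw < min) raw = min;
            if (raw > max) raw = max;
            return (raw - min) * 100 / (max - min);
        }
    }

With e.g. new Normalizer(35, 75), a raw reading of 55 is mapped to a power of 50.
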
A video of the program in action is seen below (one sensor thread):



The code for the class is found in Vehichle2a [3].

Vehicle 2b
According to figure 1, vehicle 2b shall move closer to the source, which in our case means moving towards the bright light source.
The car was, as the previous car, built with two RCX light sensors mounted.


The program for the car was based on the code from vehicle 2a, including normalization. We extended the normalization regulation so it was able to dynamically adapt to the surrounding environment by adjusting the min and max values, as illustrated below.
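
An illustrative sketch of the adaptive normalization (the class name is made up; the actual code is in Vehichle2b [4]):

    // Sketch of the adaptive normalization in vehicle 2b: min and max are
    // updated from the readings themselves, so the mapping follows the
    // current light conditions instead of using fixed bounds.
    public class AdaptiveNormalizer {
        private int min = Integer.MAX_VALUE;
        private int max = Integer.MIN_VALUE;

        public int normalize(int raw) {
            if (raw < min) min = raw;     // widen the range downwards
            if (raw > max) max = raw;     // widen the range upwards
            if (max == min) return 0;     // not enough different readings yet
            return (raw - min) * 100 / (max - min);
        }
    }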



We later on created a thread for each sensor to see if the difference was noticeable. By observing the car we noticed a smoother performance. Afterwards we refactored the code so we ended up having the following threads (a rough outline is sketched after the list):
  • Two connection threads, one per sensor-motor connection (connecting a sensor to a motor)
  • Control thread (closed-loop controller regulating according to the environment)
  • Info thread (displaying information on the LCD)
  • Main thread (initiating the program by performing bootstrapping)
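
The outline below only illustrates this structure. The class names, ports and scaling factor are made up, the sketch uses the standard leJOS NXJ LightSensor class instead of the RCX light sensor class, and the control and info threads are only indicated in comments; the actual implementation is in Vehichle2b [4].

    import lejos.nxt.Button;
    import lejos.nxt.LightSensor;
    import lejos.nxt.Motor;
    import lejos.nxt.NXTRegulatedMotor;
    import lejos.nxt.SensorPort;

    // Rough outline of the thread structure in vehicle 2b.
    public class Vehicle2bOutline {

        // One thread per sensor-motor connection; the connections are crossed
        // so the car turns towards the brighter side.
        static class ConnectionThread extends Thread {
            private final LightSensor sensor;
            private final NXTRegulatedMotor motor;
            private int min = Integer.MAX_VALUE;
            private int max = Integer.MIN_VALUE;

            ConnectionThread(LightSensor sensor, NXTRegulatedMotor motor) {
                this.sensor = sensor;
                this.motor = motor;
            }

            public void run() {
                while (!Button.ESCAPE.isPressed()) {
                    int raw = sensor.readValue();
                    if (raw < min) min = raw;      // adaptive normalization,
                    if (raw > max) max = raw;      // as sketched above
                    int power = (max > min) ? (raw - min) * 100 / (max - min) : 0;
                    motor.setSpeed(power * 7);     // map 0..100 to motor speed
                    motor.forward();
                }
                motor.stop();
            }
        }

        // Main thread: bootstrapping only.
        public static void main(String[] args) throws InterruptedException {
            Thread leftToRight = new ConnectionThread(new LightSensor(SensorPort.S1), Motor.C);
            Thread rightToLeft = new ConnectionThread(new LightSensor(SensorPort.S2), Motor.B);
            leftToRight.start();
            rightToLeft.start();

            // The control thread (closed-loop regulation) and the info thread
            // (LCD output) are started the same way and are omitted here.

            leftToRight.join();
            rightToLeft.join();
        }
    }
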
A video of the program in action is seen below:


The final code for the class is found in Vehichle2b [4].


Conclusion:
From what we found, it is important to make sure that the range of the sensor readings is properly utilized through normalization. If this is not done, the impact on the behaviour is not great enough. Furthermore, it was important to dynamically adapt to the environment, as the light conditions changed more or less constantly.
When the program was made more multi-threaded, the vehicle seemed to behave more smoothly, and it also behaved more reactively.
The main focus during this exercise was to experiment with the "original" design of the vehicle. Because of this we did not spend resources on expanding the vehicle with e.g. bump sensors.

References: