The Hawking Bot is a Lego MINDSTORMS EV3 project inspired by the late Stephen Hawking. Stephen Hawking had a good sense of humour, so I am sure he would have approved of this project. The Hawking Bot navigates its way around obstacles, responds to movement by uttering one of Stephen Hawking's famous soundbites, and then moves towards the moving object. It uses an ultrasonic sensor that scans its environment with a sweeping head movement.
The code for the Hawking Bot is written entirely in Python 3. A bootable image file for running Python within a Debian Linux environment on the EV3 brick can be downloaded from the ev3dev website. The code to run the Hawking Bot can be downloaded from here. All code is contained within a class file, so you can use the existing methods or even modify them if you like.
Please watch this video for detailed instructions on how to set up Debian Linux and Python 3 on your robot. Although it is specifically for a Mac setup, it is still useful for gaining a general understanding of the process. This is a work in progress: the ultrasonic sensor is at times unreliable, and this calls for smarter code to detect 'outliers'. I would welcome contributions from others to make the code more efficient and less error-prone.
OK, now you want some famous quotations or just some simple utterances from Prof Hawking. There are plenty of videos where you can hear him talk, and his lectures are a treasure trove of wisdom and useful soundbites.
You need a program such as Audacity, which works on many platforms, to select and cut out your favourite soundbites.
Save each soundbite as a mono WAV file, named SH6, SH7, ... SH11, SH12 and so on.
Below you will find a few samples which I created according to the above method.
The Hawking Bot comes with a self-check module to ensure that all cables are connected and the battery power is sufficient. Loose, missing or even damaged connections occur easily, so this module is very useful. Note that the 'checkConnection' method only checks whether there is an electrical connection; you must still ensure that the motors are connected to the correct ports.
The sweeping head movement is essential for the Hawking Bot to scan its terrain and find the longest unobstructed path ahead. The cables need enough space to accommodate the head movements; it is therefore advisable to tie them together as shown in the photograph.
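The idea of "finding the longest unobstructed path" boils down to picking the sweep angle with the greatest measured distance. The following is a minimal sketch of that selection step; the function name, the angles and the seven-reading sweep are my own assumptions for illustration and are not taken from the actual class file.

```python
# Sketch: pick the heading with the longest clear path from one head
# sweep. Each sweep is assumed to yield (angle, distance) pairs; the
# angles and sample values below are illustrative only.

def best_heading(readings):
    """readings: list of (angle_deg, distance_cm) pairs from one sweep.
    Returns the angle at which the greatest distance was measured."""
    angle, _ = max(readings, key=lambda r: r[1])
    return angle

sweep = [(-45, 32.0), (-30, 55.5), (-15, 80.2), (0, 41.0),
         (15, 120.7), (30, 66.3), (45, 25.1)]
print(best_heading(sweep))  # 15, the most open direction in this sweep
```

In practice you would feed this the distances collected while the head motor sweeps through its arc, then turn the robot towards the returned angle.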
The Hawking Bot works best with large obstacles and on a flat, smooth surface. Carpets are more challenging for the motors, and you may have to adjust the settings to tune the behaviour for different surfaces.
The Hawking Bot is by no means perfect; it is a prototype that will benefit from further improvements. The code is fully commented, and it should be easy for you to work out what the various methods do. Various bits have been commented out with #; if you remove the # in front of a 'print' statement, the running program will show you the various sensor readings and calculations.
Now that you have successfully built your robot, you will want to take it to the next level. You could improve the MotionDetector method. Right now, every so often it gets a wrong reading. You can see the actual readings by uncommenting disA and disB (at the bottom of the method block). A wrong reading usually stands out from the other readings, so you could write an algorithm to stop the robot responding to it.
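One simple way to exploit the fact that a wrong reading "stands out" is to compare each new sample with the median of the last few samples and reject values that jump too far. This is only a sketch of that idea, not the actual MotionDetector code; the window size and threshold are assumptions you would tune on the robot.

```python
# Sketch: suppress occasional bad ultrasonic readings by comparing each
# new sample with the median of a short window of recent samples.
from statistics import median

def filter_outlier(window, new_reading, max_jump=50.0):
    """window: recent distance samples (cm). If new_reading deviates
    from their median by more than max_jump, treat it as an outlier
    and fall back to the median instead of the raw value."""
    if window and abs(new_reading - median(window)) > max_jump:
        return median(window)
    return new_reading

recent = [40.2, 41.0, 39.8, 40.5]
print(filter_outlier(recent, 255.0))  # spike rejected, median returned
print(filter_outlier(recent, 42.3))   # plausible reading kept as-is
```

A spurious 255 cm spike in the middle of readings around 40 cm would then no longer trigger a response from the robot.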
Perhaps you want to take full control of the robot and remote-control its various functions. You could do this via Bluetooth and write an Android program to communicate with the robot. However, a much easier approach would be to find a place for the infrared sensor and use it to take control of the Hawking Bot.
What about getting the robot to learn about its environment? This could be accomplished with a k-nearest-neighbours approach or possibly a neural network. The EV3 brick has limited processing power, although it does support NumPy. An alternative would be a BrickPi, which would allow you to run an AI library such as TensorFlow; but the intention of this guide was to use the Lego MINDSTORMS EV3 kit without the need to buy many expensive additional pieces other than the ultrasonic sensor.
However, a k-nearest-neighbours reinforcement-learning approach should work on the EV3 brick, and this is the suggested algorithm. I leave it up to you to find a working implementation or spot any problems:
Reinforcement learning for the Hawking Bot
The idea is that the 7 USS readings from each sweep are encoded into a vector, and the last 10 head sweeps are concatenated into a sequential vector of 70 entries. The first readings are incomplete, so the vector is filled with zeros. Each entry contains a distance value from the USS. This is the state vector s. The system allows for 1000 entries; thereafter the oldest entry is replaced, and the age counter of each s-r pair is reduced by one.
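The state encoding above could be sketched as follows. The class and method names are my own placeholders, not part of the existing code; the key points are the fixed 70-entry length, the zero-padding while history is incomplete, and the oldest sweep dropping off once 10 sweeps have accumulated.

```python
# Sketch of the state vector: 7 readings per sweep, last 10 sweeps
# flattened into 70 entries, zero-padded until enough sweeps exist.
from collections import deque

SWEEPS, READINGS = 10, 7

class StateEncoder:
    def __init__(self):
        # deque with maxlen discards the oldest sweep automatically
        self.history = deque(maxlen=SWEEPS)

    def add_sweep(self, readings):
        assert len(readings) == READINGS
        self.history.append(list(readings))

    def state(self):
        """Return the 70-entry state vector s, zero-filled in front
        while fewer than 10 sweeps have been recorded."""
        flat = [d for sweep in self.history for d in sweep]
        return [0.0] * (SWEEPS * READINGS - len(flat)) + flat

enc = StateEncoder()
enc.add_sweep([30, 40, 55, 60, 52, 38, 27])
s = enc.state()
print(len(s))   # 70: zeros in front, the newest readings at the end
```

The 1000-entry s-r store with ageing would sit on top of this: each stored pair keeps such a vector plus its rewards and an age counter.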
The bot must not come closer than 10 cm to an object; doing so produces a negative outcome. For simplicity, good actions are rewarded with a 1 and bad ones with a 0. Effectively this creates a probability of reward for each action-state combination. We will use discounted rewards and an epsilon-greedy policy.
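The epsilon-greedy part is straightforward and could look like the sketch below: with probability epsilon the bot explores a random action, otherwise it exploits the action with the highest stored reward for the matched state. The function name, the action labels and the value of epsilon are illustrative assumptions.

```python
# Sketch of epsilon-greedy action selection over the three actions.
import random

ACTIONS = ["left", "straight", "right"]

def choose_action(rewards, epsilon=0.1, rng=random):
    """rewards: dict mapping each action to its stored reward estimate
    for the current state. Explore with probability epsilon, otherwise
    pick the action with the highest estimate."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: rewards[a])

est = {"left": 0.2, "straight": 0.9, "right": 0.4}
print(choose_action(est, epsilon=0.0))  # epsilon 0 always exploits: straight
```

Starting with a larger epsilon and decaying it over time would let the bot explore early on and settle into learned behaviour later.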
This creates three large state-reward (s-r) tables, one for each of the three actions: right, straight ahead and left. It may be possible to have fast and slow speeds for each action; we would then have 6 actions and 6 lookup s-r tables.
Each time a new state s is recorded, it is compared to the tables, and the Euclidean distance (or a similar measure) is used to find the nearest neighbour. The neighbours are not ranked; instead a threshold t is set: if the nearest stored state is within t, it is accepted as very similar, the existing state is overwritten and updated, and the action a with the highest reward is carried out. If it is not similar (d > t), a new s-r pair is entered for each action a. If there is a tie between actions for an s-r pair (they all have the same reward), choose randomly; this is not common and could be omitted.
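The threshold-based nearest-neighbour lookup described above could be sketched like this. The table layout (a list of dicts) and the function names are assumptions for illustration; on the EV3 you would keep the table small (the 1000-entry cap) so the linear scan stays fast.

```python
# Sketch: find the nearest stored state; reuse it if within threshold
# t, otherwise signal that a fresh s-r pair should be appended.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_state(table, s, t):
    """table: list of entries like {'state': [...], 'rewards': {...}}.
    Returns the matched entry if the nearest stored state is within t,
    or None, in which case the caller appends a new s-r pair."""
    if not table:
        return None
    best = min(table, key=lambda e: euclidean(e["state"], s))
    return best if euclidean(best["state"], s) <= t else None

table = [{"state": [40, 40, 40],
          "rewards": {"left": 0, "straight": 1, "right": 0}}]
print(match_state(table, [41, 39, 40], t=5.0) is not None)  # True: reuse
print(match_state(table, [90, 10, 70], t=5.0))              # None: new pair
```

On a match, the entry's state would be overwritten with the new s and its rewards updated; on None, one new pair per action is created, as described above.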
t will have to be determined experimentally. If t is too small, similar states are ignored and each state is seen as unique; too large a t means that even rather dissimilar states are lumped together, which could affect the ability to choose good actions. It may be possible to use statistical methods to determine the best t.
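One possible statistical approach, offered purely as a suggestion: collect the pairwise distances between the states recorded so far and set t to a low percentile of that distribution, so only genuinely close states count as "the same". The function names and the 10% fraction below are assumptions to be tuned.

```python
# Sketch: derive t from the distribution of pairwise state distances.
import math

def pairwise_distances(states):
    """All pairwise Euclidean distances between recorded state vectors."""
    out = []
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            out.append(math.sqrt(sum((a - b) ** 2
                                     for a, b in zip(states[i], states[j]))))
    return out

def pick_t(states, fraction=0.1):
    """Set t at the given low percentile of pairwise distances."""
    d = sorted(pairwise_distances(states))
    return d[int(fraction * (len(d) - 1))] if d else 0.0

states = [[0, 0], [1, 0], [10, 10]]
print(pick_t(states))  # the smallest distances dominate the chosen t
```

Recomputing t occasionally as the table fills up would let the threshold adapt to the environment the bot actually encounters.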
The table looks something like this: Entry No – State vector – reward for action 1 – reward for action 2 – reward for action 3.
I guess the actual implementation will be tricky, but it should be worth the effort. Good luck!