# Raspberry Pi

The Pi on the mower is named moana. I have a separate Pi that will be used as a prototyping platform, called maui.
## The Raspberry Barrel Mounted on Moana

The sensor barrel is finally complete. Here it is mounted on the mower. It sits on a small ring and is bolted through the mower deck.

![]()

The old basic grass-capture program was loaded onto moana to see how the barrel changed the type of images being captured. Due to the way the camera is mounted in the barrel, there is no longer any need to flip the image by 180 degrees. With that change made, here is a higher-resolution image taken from the barrel.

![]()

Two key points to notice:
- I believe the sonars will give sufficient coverage of what lies ahead of the mower.
- The camera is meant to be focusing on grass, so I feel it needs to be angled down more towards the ground in front of the mower. This should help with grass recognition as it won't get sidetracked by distant trees.

I could also put a 'visor' over the top of the camera to mask out the sky, but this wastes processing cycles during image recognition, so I feel suitably angling the camera is a better option. Unfortunately, this may not be so easy on the barrel without considerable modification.

## Camera Angling Solutions
- Angling the whole barrel is the easiest option, as it just requires a wedged ring under it. The tower certainly looked weird when I tried this, but I am concerned the sonars will also point down and may give false triggering.
- Angling the camera module inside the barrel may be a good compromise. It depends on how much space there is to play with.

On closer inspection of the camera module, it appears the actual camera lens can be detached from the board, and it is possible to angle it as the flexible circuit connector gives a bit of wiggle room. This was done, hot-glued in place, and assembled back into the vertically mounted tower. This gave much better results, with a 7cm blind-spot but without distractions from distant objects.

![]()

## Configuring the Raspberry Pi

I aim to build a Neural Network that runs on the Raspberry Pi to recognize grass. In order to achieve this goal, I need to load TensorFlow and OpenCV on the Pi. Unfortunately, neither of these apps is available on the Pi natively, so it is necessary to build them from source. This is a rather convoluted process, so I'll detail the steps here. First, update the system:

```
sudo apt update
sudo apt upgrade
```

### Virtual Environments

Due to all the dependencies, I am creating a virtual environment (grassenv) under ~/Documents/grass where I will build the environment.
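The post doesn't show the venv commands, but an environment like this is typically created with `python3 -m venv grassenv` and activated with `source grassenv/bin/activate`. As a quick sanity check (my own sketch, not from the original post), Python can confirm it is really running inside the venv before any wheels are installed:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the venv directory while
    # sys.base_prefix still points at the system interpreter; outside
    # a venv the two are identical.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```

Running this immediately after `source grassenv/bin/activate` should print `True`; if it prints `False`, the subsequent pip installs would land in the system Python instead.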
### TensorFlow

Loading and fetching the relevant libraries:
In order to know which Pi wheel to download, it is necessary to know which architecture and python3 version I'm running.
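The commands for this aren't shown above; the same information is available from `uname -m` and `python3 --version`, or from within Python itself (a minimal sketch of my own):

```python
import platform
import sys

# Report the CPU architecture and Python version so the matching
# TensorFlow wheel can be chosen. On a Pi 3B the architecture
# should come back as 'armv7l'.
print(platform.machine())
print(f"{sys.version_info.major}.{sys.version_info.minor}")
```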
TensorFlow 2.5 with Python 3.7 is the highest I can load on my Pi 3B (architecture armv7l), along with numpy 1.19.5. So the download script is executed:
Uninstall any old tensorflow and install the appropriate tensorflow wheel.
Checking the installation is OK:
### OpenCV

OpenCV also needs to be built from source. First, bring the system fully up to date:

```
sudo apt-get update && sudo apt-get upgrade && sudo rpi-update
```

The build needs more swap space than the default, so edit the swapfile configuration:

```
sudo vi /etc/dphys-swapfile
```

and edit the variable CONF_SWAPSIZE:

```
#CONF_SWAPSIZE=100
CONF_SWAPSIZE=2048
```

Restart, then activate the virtual environment and install the build dependencies:

```
cd Documents/grass
source grassenv/bin/activate
sudo apt-get install build-essential cmake pkg-config
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk2.0-dev libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
```

Download and unpack the OpenCV sources:

```
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.1.0.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.1.0.zip
unzip opencv.zip
unzip opencv_contrib.zip
```

Configure, build and install:

```
cd ./opencv-4.1.0/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=../.. \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.1.0/modules \
    -D BUILD_EXAMPLES=ON ..
make -j4
make install && sudo ldconfig
sudo reboot
python3 -m pip install opencv-python
```

## Image Pre-Processing

The Pi camera's minimum resolution is 64x64 pixels. As I am going to be limited on processing power, I think 32x32 will be sufficient. What I lose in resolution, hopefully I can make up with machine learning layers! I'm not 100% sure of all the pre-processing I need, but at least one step is to knock down the resolution to 32x32. I will need to do this for the complete training set, but also on-the-fly when running in prediction mode. Here are the first steps at pre-processing, but this will probably get extended as time goes on.
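The pre-processing code itself is not shown above. As a sketch of the resolution knock-down (my own illustration, not the post's code), here is a NumPy average-pooling version; in practice `cv2.resize(img, (32, 32), interpolation=cv2.INTER_AREA)` does the same job once OpenCV is built.

```python
import numpy as np

def to_32x32(img: np.ndarray, size: int = 32) -> np.ndarray:
    """Knock an HxWx3 image down to size x size by average pooling.

    Assumes H and W are multiples of `size` (true for the 64x64
    captures mentioned above); cv2.resize handles the general case.
    """
    h, w = img.shape[:2]
    fh, fw = h // size, w // size
    # Group pixels into fh x fw blocks and average each block.
    pooled = img[: size * fh, : size * fw].reshape(size, fh, size, fw, 3)
    return pooled.mean(axis=(1, 3)).astype(np.uint8)

# A fake 64x64 RGB capture stands in for a real camera frame.
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
small = to_32x32(frame)
print(small.shape)  # (32, 32, 3)
```

The same function can run over the whole training set on disk and then again on each live frame in prediction mode, so both paths see identically prepared input.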
I believe it is possible to stream the output from the camera directly into a numpy array, which is something that may be worth investigating when doing live image-predictions. However, initially I'm going to keep writing files to the disk, unless it becomes necessary to change for performance reasons.

July 2023