Here’s a screenshot from some testing of our new automated imagery processing code, running on a Raspberry Pi with a PiCam, for finding Outback Joe.
Stephen has been working hard on this while the rest of the team focussed on the OctoPorter and Kraken; now it’s time to give these algorithms a workout!
We conducted a 25-minute imagery collection mission against our target set and collected 5 GB of data. The processing takes place on the Raspberry Pi on board the aircraft, which reduces latency and removes the need for high-bandwidth data links. The camera sends smaller thumbnails down to the ground station so we can see what the aircraft is collecting and what the Pi is ‘thinking’. The revised code reliably spotted our targets on this mission, but it also ‘found’ too many false positives. We can work on that with more training datasets.
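The thumbnail idea above can be sketched in a few lines. This is our illustration, not the team’s actual code: the function name and the downsampling factor are assumptions, and we use simple block averaging in NumPy to stand in for whatever resizing the real pipeline does. The point is that only the small thumbnail crosses the telemetry link, while full-resolution frames stay on the Pi for processing.

```python
import numpy as np

def make_thumbnail(frame: np.ndarray, factor: int = 8) -> np.ndarray:
    """Downsample a full-resolution frame by block averaging.

    Hypothetical helper: only this small thumbnail would be sent to the
    ground station, keeping bandwidth needs low.
    """
    h, w = frame.shape[:2]
    # Crop so the dimensions divide evenly by the factor.
    h, w = h - h % factor, w - w % factor
    cropped = frame[:h, :w]
    # Group pixels into factor x factor blocks and average each block.
    blocks = cropped.reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).astype(frame.dtype)

# A stand-in frame at the PiCam's native 2592x1944 still resolution.
frame = np.zeros((1944, 2592, 3), dtype=np.uint8)
thumb = make_thumbnail(frame, factor=8)
print(thumb.shape)  # (243, 324, 3)
```

An 8x reduction per axis cuts the pixel count by a factor of 64, which is roughly the difference between a link that keeps up with the camera and one that doesn’t.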
Our geolocation of the targets is accurate to about 50 metres; that’s not accurate enough for the Porter to land on yet, so we have some more refinement to do. We think the key lies in improving our understanding of the time stamps.
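To show why the time stamps matter so much, here is a minimal sketch of one common approach (our illustration, not necessarily the team’s method): interpolate the aircraft’s position from the telemetry log at the moment the image was taken. Any clock offset between the camera and the autopilot shifts the interpolation point, and at cruise speed even a fraction of a second translates into tens of metres of geolocation error.

```python
from bisect import bisect_left

def interpolate_position(telemetry, t):
    """Linearly interpolate aircraft (lat, lon) at image timestamp t.

    telemetry: list of (timestamp, lat, lon) tuples sorted by time.
    A hypothetical example of matching an image to the aircraft's
    position; the real system would also account for attitude and
    camera geometry.
    """
    times = [row[0] for row in telemetry]
    i = bisect_left(times, t)
    if i == 0:                      # before the first fix: clamp
        return telemetry[0][1:]
    if i == len(times):             # after the last fix: clamp
        return telemetry[-1][1:]
    (t0, lat0, lon0), (t1, lat1, lon1) = telemetry[i - 1], telemetry[i]
    f = (t - t0) / (t1 - t0)
    return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))

# Hypothetical 1 Hz telemetry fixes; coordinates are made up.
log = [(0.0, -27.0000, 151.0000), (1.0, -27.0002, 151.0003)]
lat, lon = interpolate_position(log, 0.5)
```

With the timestamps trusted, the residual error comes down to the position fix and camera pointing, which is where the remaining refinement effort goes.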
We’re coming Outback Joe!
The Raspberry Pi camera in the OctoPorter
Stephen doing his magic!