Outback Challenge 2018 Debrief

October 8, 2018


With the 2018 UAV Outback Challenge finishing up last week it is time once again to write up our experiences from this year. As with every OBC there were highs and lows, but as always it was a fantastic experience, and a great testament to the huge effort put in by the competition organisers.

For those of you reading this who don’t know the OBC there is lots of background information here:

https://uavchallenge.org/

You may also like to read the write-ups that we have done from the three previous challenges that CanberraUAV has entered.

The competition itself evolves each time it is run (every two years), pushing teams to solve ever more complex problems related to UAV search and rescue tasks. The basic task for this year was similar to the 2016 challenge, but there were two additional optional components that were added.

The base challenge was to retrieve a blood sample from a target location in a 100 meter radius search area located 11 kilometers from the takeoff location, with a requirement to fly via two intermediate waypoints in both directions that extended the minimum mission distance (round trip) to 46 km. This had to be done while maintaining communications links, plus avoiding a geofence boundary and 22 static no-fly zones. At the destination the aircraft needed to use on-board imaging to zero in on the exact location of a visual target provided by the team, then land within 10 meters of the target.

The two optional extensions to the challenge were to perform the whole mission completely hands off (so no touching of keyboard or mouse throughout the mission), and to avoid a set of “dynamic no-fly zone” obstacles (simulated aircraft, flocks of birds and weather systems) that would obstruct the path of the UAV.

CanberraUAV chose to try both extensions to the challenge, and we developed a solution based around two aircraft.

The primary “retrieval” aircraft was a 2.7m VQ Porter set up as an octa-quad QuadPlane running ArduPilot, with a 35cc DLE-35 petrol motor for forward propulsion. This is the same basic configuration we used in the 2016 challenge. It is an easy platform to work with, even if it is a bit of a “brute force” approach, relying on the petrol motor to pull a fairly high-drag aircraft through the mission. The specific aircraft we flew was originally built by Jack Pittar (our chief pilot and builder) as our backup aircraft for the 2016 challenge, but with some upgrades to better ESCs (BLHeli-32 Wraith32 metal V2s) and an improved fuel pump arrangement for the petrol motor.

The secondary “relay” aircraft was a new one for CanberraUAV. It is a Flite Test Kraken, a 1.8m wingspan foam-board twin motor electric tractor aircraft in a delta-wing configuration, with conventional wheeled takeoff. While the base Kraken is very simple, the one we used has been carefully set up by team member Greg Oakes with a Hex Cube autopilot and loaded with rather more battery than is usual for this aircraft (a total of 21Ah of 3S LiPos).

Loaded on the Porter was a Pixhawk1 flight controller, with a RaspberryPi 3B+ companion computer for imaging and supplementary communications. Both aircraft were fitted with RFDesign RFD900x long range telemetry radios, running our mesh/relay firmware which allows for relaying of telemetry packets between the retrieval aircraft and the GCS via the relay aircraft.

In addition, the Porter was fitted with two 3G USB dongles (one Telstra, one Optus), each using two antennas. This gave 3 telemetry links to the Porter for redundancy (two via 3G, one via the RFD900x). The software was set up so that the mission could be completed with any one of the 3 links.
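
To give a feel for how that link redundancy works, here is a minimal sketch of the idea rather than our actual GCS setup: accept telemetry from every link that is alive, and when sending commands prefer whichever link saw a heartbeat most recently. The device names and port numbers below are made up.

    # Sketch only: the device names and UDP ports are assumptions.
    import time
    from pymavlink import mavutil

    links = [
        mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600),  # RFD900x
        mavutil.mavlink_connection('udpin:0.0.0.0:14650'),       # 3G link 1
        mavutil.mavlink_connection('udpin:0.0.0.0:14651'),       # 3G link 2
    ]
    last_heartbeat = [0.0] * len(links)

    def freshest_link():
        """Return the link to use for outgoing commands."""
        best = max(range(len(links)), key=lambda i: last_heartbeat[i])
        return links[best]

    while True:
        for i, link in enumerate(links):
            msg = link.recv_match(blocking=False)
            if msg is None:
                continue
            if msg.get_type() == 'HEARTBEAT':
                last_heartbeat[i] = time.time()
            # ...feed msg into the GCS map/console regardless of which link it used...
        time.sleep(0.01)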

We had spent a lot of time in the lead up to the challenge developing detailed simulations of the mission, including both aircraft, the imaging subsystem, the radios and wind effects among other things. You can see an example of the simulation system in this blog post and video:

https://discuss.ardupilot.org/t/canberrauav-outback-challenge-2018-demo/33212

That video also gives a detailed walk-through of our flight plan.

The basic plan was to use the relay aircraft as a 180m-high flying antenna, orbiting near the home location. The retrieval aircraft would fly the full mission, and would send a “release the Kraken” command back to the GCS when it approached a waypoint close to the search area. In that way the Kraken would only take off once it was needed – when the retrieval aircraft was furthest from home, and particularly when the aircraft was on the ground at the remote site.
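
As a rough illustration of the trigger mechanism (the exact message we used and the connection details are not shown here, so treat the STATUSTEXT string and port numbers as assumptions), the GCS side can be as simple as watching the Porter's telemetry for the trigger and then switching the Kraken to AUTO:

    # Illustrative GCS-side sketch of the "release the Kraken" trigger.
    from pymavlink import mavutil

    porter = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # Porter telemetry (hypothetical)
    kraken = mavutil.mavlink_connection('udpin:0.0.0.0:14560')  # Kraken telemetry (hypothetical)
    kraken.wait_heartbeat()

    while True:
        msg = porter.recv_match(type='STATUSTEXT', blocking=True, timeout=1)
        if msg is not None and 'release kraken' in str(msg.text).lower():
            # switch the Kraken to AUTO so it runs its takeoff mission items
            kraken.set_mode('AUTO')
            break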

For the ground station this year we used a rental van which we hired in Canberra and drove up to Dalby with most of our equipment in the back. We set up tables inside the van to hold the monitors for the two GCS laptops, with the van’s side door providing a good view for the organisers and the other CanberraUAV team members of the progress of the flight.

Flight Plan

Things started to get a bit more complex when the organisers released the KMZ file giving the mission waypoints, search zone and exclusion zones on the Tuesday morning – the day before the competition flights began. We were surprised by a few things in the mission for this year. Firstly, the number of exclusion zones (22) was more than we’d expected, and more importantly one of those exclusion zones was located just 40 meters from the edge of the 100m search radius.

The exclusion zones were a new feature for this year. In past years the rules had somewhat vague instructions to teams to try to avoid flying over buildings (such as farm houses), but there was no specific penalty attached. It was pretty easy to plan the waypoints to avoid the buildings that could be seen on satellite images.

This year the exclusion zones were mandatory, and any breach of them would lead to immediate disqualification from the competition. In addition, just planning mission items to avoid them was not enough if a team wanted to try for the extensions involving dynamic no-fly zones, as the dynamic obstacle avoidance could push an aircraft in any direction, leading it to not follow the most direct path between waypoints.

We had prepared for this by adding support for exclusion zones to ArduPilot, adding them as a high priority obstacle type in the dynamic avoidance code, along with the geofence and the dynamic obstacles. However our preparation had all been with longer range avoidance margins, matching the sorts of exclusion zones that could be seen in the example mission layout given in the rules. With an exclusion zone within 40 meters of the search area we needed to quickly come up with a way to autonomously fly our large aircraft (with its correspondingly large turning circle) in close proximity to a region that would result in disqualification, while still covering the entire search region. Simulation testing showed us that with high winds the aircraft would sometimes be forced into situations in which it could not avoid both the dynamic and static no-fly zones due to an inability to turn tightly enough. It looked quite bad.

Late on Tuesday we came up with a solution made up of two parts. The first part was to dynamically adjust the avoidance margins based on whether we were in the “search” part of the mission or on the longer distance waypoints. The second part was to add another fake exclusion zone around the one near the search area.

In the above image the inner red rectangle is the exclusion zone provided by the organisers. The outer red rectangle is another exclusion zone that we added around the organisers’ zone. The extra exclusion zone allowed us to use achievable margins for three sides of the organisers’ zone, while the choice of the unusual flight path reflected in the search waypoints meant that a close approach to the organisers’ exclusion zone would only happen while flying parallel to the fourth side of the zone. Extensive simulation testing showed that this worked, and we could be confident of avoiding the exclusion zone for a wide range of wind conditions and dynamic obstacles. In some scenarios we would breach the outer exclusion zone, but that would not trigger disqualification as it wasn’t an official zone. We would not breach the inner exclusion zone even with rapidly changing 30 knot winds.
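
The margin-switching part of the solution is conceptually very simple. The sketch below shows the shape of the idea rather than the ArduPilot code we actually flew, and the waypoint numbers and margin values are invented for illustration:

    # Illustrative only: margins and mission item numbers are made up.
    TRANSIT_MARGIN_M = 100.0   # wide margin for the long transit legs
    SEARCH_MARGIN_M = 15.0     # tight margin while flying the search pattern

    SEARCH_WP_START = 20       # first search-pattern mission item (hypothetical)
    SEARCH_WP_END = 45         # last search-pattern mission item (hypothetical)

    def avoidance_margin(current_wp):
        """Margin to keep from static no-fly zones for the current mission item."""
        if SEARCH_WP_START <= current_wp <= SEARCH_WP_END:
            return SEARCH_MARGIN_M
        return TRANSIT_MARGIN_M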

The Flight

With a workable flight plan in place we were ready for the competition flight. We started our 15 minute flight preparation period at about 2 pm on Wednesday afternoon. Dale drove the van into place in the pilot’s box with Stephen and myself inside, while James and Paul started pre-flight tests with Jack and Greg. Matt and Jimmy worked on getting the RFD900x antenna set up.

The pre-flight checks went well except for one seemingly minor glitch: the LEDs on the Porter didn’t come on. These RGB LEDs play an important role in the Outback Challenge as they indicate whether the aircraft is safe to approach, which is critical for the safety of the organisers at the remote landing site. As the LED colours required for the OBC are different from the standard ArduPilot colour scheme, the system we used for LEDs on the Porter was to set NTF_LED_OVERRIDE to 1, which disables ArduPilot control of the LEDs, and then to use a mavproxy module running on the RaspberryPi to send MAVLink LED commands to set the right colours based on arming and mission state.
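
For anyone curious how that works, the sketch below shows roughly how a companion computer can drive the autopilot's RGB LEDs over MAVLink once NTF_LED_OVERRIDE is set to 1. It is not our actual MAVProxy module: the serial port is hypothetical, and the custom-pattern byte layout (RGB plus an optional flash rate in the LED_CONTROL message) is my assumption about how the firmware interprets it.

    # Sketch of companion-computer LED control; port and byte layout are assumptions.
    from pymavlink import mavutil

    master = mavutil.mavlink_connection('/dev/serial0', baud=921600)
    master.wait_heartbeat()

    def set_led(r, g, b, rate_hz=0):
        payload = bytes([r, g, b, rate_hz]) + bytes(20)   # pad to the 24 byte field
        master.mav.led_control_send(
            master.target_system, master.target_component,
            255,                                          # instance: all LEDs
            mavutil.mavlink.LED_CONTROL_PATTERN_CUSTOM,
            4,                                            # meaningful custom bytes
            payload)

    # e.g. solid green for "disarmed, safe to approach" in the OBC colour scheme
    set_led(0, 255, 0)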

As the Porter GCS operator I couldn’t see any reason why the LEDs were not working, with everything else looking normal. I knew however that we had a backup. The Kraken relay aircraft didn’t have a companion computer, so Peter had developed a different option for LED control for the Kraken which involved setting NTF_LED_OVERRIDE to 2, which makes the Pixhawk itself use the OBC LED colours. That option was also available on the Porter as it was running exactly the same flight firmware – so, while puzzled as to why the LEDs didn’t work, I changed NTF_LED_OVERRIDE to 2 and got confirmation over the radios from James that the LEDs on the Porter were now operating correctly. Later developments in the flight made me regret not looking more deeply into why the LEDs were not operating normally before takeoff.

With pre-flight tests passing and our 15 minutes of prep time up we were ready for takeoff. We switched to AUTO on both the Porter and Kraken at 2:21 pm. By design, the Kraken didn’t move, as it was set up to wait for a message from the Porter when it was approaching the search zone.

The Porter did a nice vertical takeoff and started its petrol motor. It yawed a bit as it climbed (yaw control is quite weak on the Porter), and reached its transition altitude of 12 meters. At that point it started its transition to forward flight, raising the forward throttle on the petrol motor. The petrol motor immediately cut out, which triggered an automatic restart.

The Porter has an electric starter motor on its petrol motor, controlled by ArduPilot on the Pixhawk. It uses RPM monitoring to detect motor cut-outs and automatically triggers a restart sequence. During this takeoff the motor stopped and restarted 12 times before we commanded the aircraft to return for a landing on the strip. While it was doing the restarts the OctaQuad lift motors had kept the aircraft climbing towards the target altitude of the waypoint. The Porter had reached over 70 meters AGL by the time we commanded it to return, and it had used a lot of its VTOL battery capacity. Even if the motor had started properly at this point it would not have had enough battery to complete the mission.

The vehicle landed itself safely, and Jack started working on the tuning of the motor. It looked like the tuning was off for some reason, despite us having tested the motor several times since arriving in Dalby. Jack worked hard under extreme pressure to retune the motor, going through 6 tuning adjustments before the motor came up to its normal full throttle level of 7000 RPM. At this point we were still on the ground at home – with 13 minutes of our mission time elapsed. Thankfully we had a spare set of VTOL batteries charged!

Later log analysis showed that Jack’s issues with making the motor run reliably were significantly exacerbated by a bug that we had introduced into the CanberraUAV ArduPilot code the week before the competition. We had been having some issues with the idle level on the petrol motor. The idle level of internal combustion motors in ArduPilot is normally controlled with the THR_MIN parameter, which sets a minimum throttle as a percentage. We had found substantial differences in the appropriate idle throttle when the motor was warm or cold, so Jack suggested that instead of an idle throttle we should have an idle RPM, with a simple controller to adjust the throttle as needed to keep the motor idling close to the desired RPM on the ground. I really liked this idea, so I added an ICE_IDLE_RPM parameter and wrote a controller for it.

The automatic idle adjustment worked very well for what it was supposed to do, but I had inadvertently introduced a side effect such that when the throttle advanced in transition to forward flight it went straight from an idle throttle to full throttle, instead of using the THR_SLEWRATE limit, which we had set to 40% of the full throttle range per second. A petrol motor which is just started can cut out if you advance the throttle too quickly, and it seems likely that this was a major contributor to the problem with the first takeoff.
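
To make that interaction clearer, here is an illustrative sketch of the two pieces involved (not the actual ArduPilot code, and the gains, rates and loop timing are invented): a simple governor that nudges the idle throttle toward ICE_IDLE_RPM, and the slew-rate limiter that the buggy transition path skipped by jumping straight from idle to full throttle.

    # Illustrative only: gains, rates and loop timing are made up.
    ICE_IDLE_RPM = 2500.0   # target idle RPM (the parameter described above)
    THR_SLEWRATE = 40.0     # percent of full throttle range per second
    IDLE_GAIN = 0.0002      # throttle fraction per RPM of error, per second
    DT = 0.02               # 50 Hz loop

    idle_throttle = 0.08    # idle throttle as a fraction of full throttle

    def update_idle_governor(measured_rpm):
        """Nudge the idle throttle so the motor holds ICE_IDLE_RPM on the ground."""
        global idle_throttle
        idle_throttle += IDLE_GAIN * (ICE_IDLE_RPM - measured_rpm) * DT
        idle_throttle = min(max(idle_throttle, 0.0), 0.3)
        return idle_throttle

    def slew_limited(last_output, demanded):
        """Limit throttle changes to THR_SLEWRATE percent per second. The bug was
        that the transition code jumped from the idle throttle straight to full
        throttle without going through this limiter."""
        max_step = (THR_SLEWRATE / 100.0) * DT
        return min(max(demanded, last_output - max_step), last_output + max_step)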

Having improved the tuning on the petrol motor we were ready for a 2nd takeoff at 2:34 pm. This time the Porter took off cleanly, automatically started its petrol motor at a height of 3 meters as expected, and transitioned correctly, heading off to its first waypoint 6 km to the south.

We were a little surprised to find that there were no DNFZ (Dynamic No-Fly Zone) obstacles showing up on the map. We had elected to try for both optional extensions for the 2018 competition: one involved full autonomy (so we couldn’t touch the GCS keyboard or mouse after takeoff), and the second was avoiding dynamic obstacles. These dynamic obstacles would appear on our map with graphics to represent the 4 types of obstacles to be avoided (planes, birds of prey, migrating birds and weather systems).

Upon investigation, the organisers explained to us that they had set up the system not to start generating dynamic obstacles until we had completed the first leg of the mission, so we relaxed and enjoyed the flight.

We finished the round trip to the first waypoint at 2:44 pm (10 minutes after takeoff) and started on the longer leg towards the search area. At this point the flight became a lot more interesting, with a huge swathe of dynamic obstacles appearing on the map.

The above photo was taken by Josh Smallwood (the OBC photographer), looking over the shoulders of our BVLOS pilot, Dirk Lessner (on the left) and myself (on the right). Dirk was assigned to be the official BVLOS pilot for our mission and was part of the OBC organisation team. This year was the first year that competition aircraft were flying in true Beyond Visual Line of Sight conditions. In previous years there had been spotters on the ground to keep the aircraft in sight. Dirk used our ground station displays so he could keep watch over the progress of the aircraft, and could ask us to change our flight plan if anything unusual happened (such as a real manned aircraft coming into the area).

The above photo should give you a good idea of what it looked like to monitor the flight, both from our point of view, and the point of view of the BVLOS pilot. The red squares on the map are the static no-fly zones, which we needed to avoid on pain of disqualification. The pink plane in the middle is the Porter, just as it was turning to avoid a raft of birds coming in from the east, along with a large static no-fly zone. The green line shows the direct path to the next waypoint close to the search area in the southwest, and the purple line is the path that ArduPilot has chosen to avoid all the obstacles while making progress in the mission. That avoidance path is updated at 10Hz by the avoidance thread running on the Pixhawk, using a “bendy ruler” algorithm that we developed for this challenge.

The green planes are synthetic aircraft generated by the organisers and sent to our GCS as ASTERIX packets, which were then relayed up to our aircraft’s automatic avoidance system inside the Pixhawk. We were not permitted to come within 300 meters horizontally or 150 meters vertically of these vehicles. The white birds are of two types – the ones that look like hawks represent birds of prey, which needed to be avoided to a radius of 200 meters and all the way to the ground from their current height (due to the risk of these birds diving). The white birds that look like swallows represent migrating birds, which needed to be avoided to a radius of 100 meters and a height of 150 meters. Although there are none visible on the map in this photo, there are also weather systems (represented by yellow clouds) which move slowly and needed to be avoided over the full available height range of the Porter.
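
To give a feel for how the “bendy ruler” works, here is a very rough sketch of the idea rather than the ArduPilot implementation: each cycle we project a handful of candidate bearings ahead, reject any that would pass too close to an obstacle, and take the surviving bearing that deviates least from the direct path to the waypoint. The lookahead distance is invented, and vertical separation and the weather systems are ignored to keep it short.

    # Rough sketch of the bendy-ruler idea; distances in meters, bearings in degrees.
    import math

    # horizontal avoidance margins taken from the competition rules described above
    MARGIN = {'aircraft': 300.0, 'bird_of_prey': 200.0, 'migrating_bird': 100.0}
    LOOKAHEAD_M = 500.0          # hypothetical projection distance

    def project(pos, bearing_deg, dist):
        """Project a (north, east) position along a bearing (flat-earth approximation)."""
        b = math.radians(bearing_deg)
        return (pos[0] + dist * math.cos(b), pos[1] + dist * math.sin(b))

    def dist_point_to_segment(p, a, b):
        """Closest distance from point p to the segment a-b."""
        ax, ay = b[0] - a[0], b[1] - a[1]
        px, py = p[0] - a[0], p[1] - a[1]
        seg_len2 = ax * ax + ay * ay
        t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, (px * ax + py * ay) / seg_len2))
        cx, cy = a[0] + t * ax, a[1] + t * ay
        return math.hypot(p[0] - cx, p[1] - cy)

    def choose_bearing(pos, wp_bearing, obstacles):
        """obstacles: list of (position, type). Returns the best clear bearing, or None."""
        best = None
        for offset in range(-180, 181, 15):      # candidate bearings around the direct path
            cand = (wp_bearing + offset) % 360
            end = project(pos, cand, LOOKAHEAD_M)
            clear = all(dist_point_to_segment(obs_pos, pos, end) > MARGIN[obs_type]
                        for obs_pos, obs_type in obstacles)
            if clear and (best is None or abs(offset) < abs(best[1])):
                best = (cand, offset)
        return None if best is None else best[0]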

The avoidance system we developed for ArduPilot has been built over several months and is easily the biggest advance in ArduPilot that has come out of this year’s competition. The 2016 competition pushed us to add fixed wing VTOL support to ArduPilot, and this year the competition once again pushed us to develop ArduPilot in new and useful ways. We were delighted to see it performing so well in a test over long distances, with the Porter dodging all the aircraft, birds and weather systems that the organiser’s DNFZ generator threw at us. While we had (of course!) tested it extensively in simulation against our own simulated obstacle generator we had only been able to do short range tests of the system on real aircraft due to the limited distances we can fly at CMAC.

There was one very close call, where we nearly came within the avoidance radius of a fairly fast moving synthetic plane coming in from the east. It was a tense time as we watched the Porter move far enough north to avoid the aircraft. That near-miss made me realise that we should have added another dimension to the avoidance, to make the Porter advance to full throttle in cases where it is trying to get out of the way quickly. The avoidance system only considered turns and didn’t consider that it could speed up or (to an extent) slow down.

You may have noticed that the density of synthetic aircraft is much higher than a UAV is likely to encounter in a real-life situation. This is necessary for the competition, as otherwise the “big sky, small plane” effect would mean that the chances of needing dynamic avoidance at all would be small. While it made for a tense time as we watched the Porter dodge such a dense set of obstacles, I am glad the organisers pushed it as hard as they did, as it gives me more confidence that, if applied to UAVs in real airspace with manned aircraft, we would be able to avoid colliding with (or coming needlessly close to) another aircraft.

Lost Link

It was about halfway along the long leg to the search area that we hit our next problem. Up to that stage we had 3 telemetry links to the plane, but suddenly this dropped to a single link, the RFD900x. The status on the GCS console showed that the 3G links were up and that the imaging system on the RaspberryPi was able to communicate correctly with the GCS over all 3 links – but the link between the RaspberryPi and the Pixhawk had been lost. The RaspberryPi was not getting any telemetry data from the Pixhawk at all.

We later determined that the cause was a bad cable between the Pixhawk and the RaspberryPi. We had used Dupont leads held in place with pressure from the RaspberryPi case, and these had become loose enough to lose the serial link from the Pixhawk. The link had been intermittent, and it was a failure of the transmit side of that UART connection that had caused the LED failure we saw before takeoff. The link had come good again before we took off, then failed again in flight. This failure was unprecedented in any test flight CanberraUAV had performed.

We were still operating fully hands-off, so Stephen and I couldn’t investigate properly, instead relying on the RFD900x link to provide telemetry while we hoped that the intermittent link would recover in time for the image search. Without that serial link between the Pixhawk and the RaspberryPi we couldn’t do any automatic georeferencing to find the target, and our hopes for the 2018 Outback Challenge would be over.

At 2:48 pm the Porter reached the trigger waypoint for releasing our Kraken relay aircraft, and the Kraken took off to start its role as a flying antenna. I was so engrossed in the avoidance that the Porter was doing at the time that I didn’t even realise that the Kraken takeoff was imminent. Afterwards, in discussion among the team, we decided that having just one GCS operator per aircraft was insufficient. We should have had an overall flight director to look after the overall mission, and ensure that events like the pending takeoff of the relay aircraft were properly communicated to both our BVLOS pilot and the organisers.

We should also have added automatic voice announcements from the GCS for events like the Kraken takeoff, with the announcement happening 20 seconds before the event. While the Kraken takeoff went smoothly, it would have been better to ensure that everyone was prepared.

The Kraken climbed up to its holding pattern height of 600 feet AGL and started its shadow boxing with the dynamic no-fly zone obstacles that were being thrown at it by the organisers. While I couldn’t see it myself as I was inside our GCS van, spectators later told me that it was great to see the Kraken dodging and weaving as it avoided the imaginary aircraft.

Search Area

The next stage of the flight, after having ducked and weaved our way down to the southwest corner of the flying area, was the search. The Porter entered the search at 2:51 pm, having spent 17 minutes flying. With only 29 minutes left in the hour we had for our mission, we needed to complete the search quickly.

The search itself proceeded normally, with the Porter easily handling the turns and executing the pre-planned (if somewhat complex) search pattern in close proximity to the static no-fly zone next to the search area. Unfortunately after 8 minutes in the search pattern, there was still no sign of the serial cable between the Pixhawk and the RaspberryPi recovering, so we decided to forgo the full autonomy part of the mission and intervene manually.

The only way we could think of to allow the search to proceed was to establish an alternative link between the Pixhawk and the RaspberryPi. The serial cable wasn’t working, so using mavproxy commands over an ssh link into the RaspberryPi we added a rather complex route for the packets. We told mavproxy on the RaspberryPi to listen on UDP port 14550, then we told mavproxy on my GCS laptop to redirect packets that it received from the Porter over the RFD900x link to the RaspberryPi by way of a VPN running on my home network back in Canberra. In this way the RaspberryPi could get telemetry from the Pixhawk, travelling over 2000 km via Canberra instead of the usual 20 centimetres of the serial cable.
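
The forwarding itself is conceptually simple, even if the route via Canberra was not. The sketch below shows the idea using pymavlink rather than the MAVProxy commands we actually typed, and the serial device and VPN address are hypothetical:

    # Sketch of the forwarding idea: re-send raw packets from the RFD900x link
    # to UDP port 14550 on the RaspberryPi. Device name and address are made up.
    from pymavlink import mavutil

    rfd900 = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)   # RFD900x at the GCS
    to_pi = mavutil.mavlink_connection('udpout:10.8.0.12:14550')      # Pi's VPN address

    while True:
        msg = rfd900.recv_match(blocking=True)
        if msg is None:
            continue
        # forward the raw packet unchanged so the RaspberryPi sees normal telemetry
        to_pi.write(msg.get_msgbuf())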

This ad-hoc workaround did work, but it added a lot of lag, averaging about 2 seconds for packets to travel from the Pixhawk to the RaspberryPi. This caused problems for the imaging system, which had been designed to cope with the image timestamps being well behind the telemetry timestamps, but not with the possibility that images would be ready to be processed on the RaspberryPi before the corresponding telemetry arrived. That led to most of the images being rejected.

One image did pop up on the GCS display, showing a clear picture of the target we were looking for. Unfortunately, due to the timestamping issue, it didn’t have an attached geo-reference. We used the GCS “mosaic” interface to request the full image be downloaded but didn’t receive it. Peter then suggested that we download the full image via the VPN link and georeference it manually. I downloaded the full image from the RaspberryPi using rsync, and Peter and I had a look at it to work out the target’s position. With that position set up for landing we commanded the plane to start its landing approach.

Later we discovered just how close we had come to completing the imaging and landing automatically. Looking at the RaspberryPi logs after the flight we discovered that the slow forwarded link via Canberra had, in fact, worked better than we had realised, and did manage to correctly identify 10 target images from small periods where the latency of the VPN link happened to be momentarily low enough to allow geo-referencing. It had even automatically moved the search and landing waypoints based on that georeferencing, sending the updated waypoints back over the VPN and the RFD900x link to the Pixhawk.

cuav: lzresult nr:10 avgscore:56375
cuav: Moving search to: (-27.357622,151.240345) 14
Moved WPs 18:57 to -27.357622, 151.240345 rotation=0.0

By doing the manual geo-referencing we had overridden the automatically determined landing location with a manual one.

We hadn’t realised the automatic system was working as the attempted download of the initial non-georeferenced image had caused an exception in the transmit thread of the imaging system, which prevented it from sending landing zone updates to the GCS. These landing zone status messages would normally be displayed as concentric green circles showing the landing location and the error margin in the geo-referencing.

The automatic mission code running on the RaspberryPi needed 11 good targets in its landing zone calculation before it would initiate an automatic landing. It had got to 10 when we did the manual landing override. If we had waited another minute the Porter would almost certainly have done an automatic landing at a position of latitude -27.357622, longitude 151.240345. The organisers later told us the correct position as recorded by their GPS was -27.35764, 151.24030 which is just 4.8 meters from the automatically determined position. That isn’t good by our normal standards (where we normally get within 2 meters) but isn’t bad at all given the massive timing jitter introduced into the geo-referencing system by the slow VPN telemetry system.
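
The thresholding logic itself is very simple. This is a simplified sketch of it rather than the actual cuav code, which clusters and scores the hits rather than just averaging them:

    # Simplified sketch of the landing-zone confidence logic described above.
    LANDING_HIT_THRESHOLD = 11     # number of good detections required

    class LandingZoneEstimator:
        def __init__(self):
            self.hits = []         # (lat, lon) from geo-referenced target images

        def add_hit(self, lat, lon):
            self.hits.append((lat, lon))

        def ready(self):
            return len(self.hits) >= LANDING_HIT_THRESHOLD

        def position(self):
            n = len(self.hits)
            return (sum(h[0] for h in self.hits) / n,
                    sum(h[1] for h in self.hits) / n)

    # once lz.ready() is true, the landing waypoints are moved to lz.position()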

As it happened we didn’t know that automatic georeferencing was working, so we initiated an automatic landing with a manually selected landing point at -27.357829 151.240222. The landing went smoothly, and the organisers reported to us over the radio that the aircraft was down safely at 3:11 pm.

After landing safely the Porter changed its LEDs to green to indicate that it was disarmed, and the organisers measured the distance between the target and the aircraft, which came out as 22.8 meters. The two organisers at the remote site (Mick Malloy and Tyson Dodd) then loaded the blood sample into the receptacle built into the top of the Porter’s fuselage, and at 3:13 pm Mick pressed the button to tell the Porter that it should begin its one minute countdown for takeoff and return home.

LED Failure

It was at this point that the most serious safety issue of the flight became apparent. When Mick pressed the button nothing happened. The Porter should have started flashing its LEDs yellow and started sounding the buzzer with a clear warning tone, telling everyone to get clear so that it could take off safely. Mick naturally assumed that the button press had failed, and stayed close to the plane while he discussed the situation with the other organisers over the radio.

The situation was further confused by Mick reporting via the radio back to our team in the GCS that the Porter top hatch was missing. This caused some critical delay while it was discussed. I’ll explain a bit more about the missing hatch later in the story.

Meanwhile, the Porter had indeed started its 60 second countdown to takeoff, at which point it would start the 8 OctaQuad motors at close to full throttle. You really don’t want to be standing close to the Porter when it takes off.

What had happened was an unexpected side effect of the bad cable between the Pixhawk and the RaspberryPi. The backup LED option that I had enabled when the LEDs were not reliable in pre-flight was designed for the Kraken, which doesn’t have a button and doesn’t need to indicate a pre-takeoff warning via flashing yellow LEDs and buzzer sounds. That LED system just set the LED colour according to the arming state, and during the 60 second countdown the Porter was still disarmed. The sequence of mission items it was performing was to wait for a button press, then do a 60 second countdown, then wait for airspace clear (no dynamic obstacles nearby), then arm and takeoff.

The problem was compounded by the fact that we only had the RFD900x link running via the Kraken relay aircraft to give us the status of the Porter. That link was stretched to its limit, and very few packets were getting through. We still had the 3G links, but those were not being fed the critical telemetry data from the Pixhawk that was needed to tell me the button had been pressed.

Part way through the 60 second countdown an OBC organiser asked me about the status of the aircraft, and I realised what had gone wrong. I told the organisers that the aircraft was live, and they relayed that to the team at the search site, and then to Mick. Luckily Mick stepped back just in time before the takeoff, but it was a very close thing.

We’ve learnt several lessons from this incident.

Most importantly we will ensure that in future designs all safety-critical actions (including warning LEDs and warning sounds) are controlled fully by the autopilot itself and not by the companion computer.

Secondly, we will add a delay in the takeoff sequence between arming and takeoff, with both the LEDs changing colour to indicate the vehicle is armed and a slow spin of the motors for a period of at least 10 seconds. Mick commented on how quickly the aircraft went from apparently dead to flying. That didn’t occur to us when we set up the mission as we expected the LEDs and buzzer to make the state of the aircraft clear to anyone nearby.

In the 2016 Challenge the CanberraUAV mission was organised so that the GCS operator had to command the arming of the vehicle for takeoff after the button press was confirmed. With the full autonomy (hands off) requirement for the 2018 competition that wasn’t an option, and unfortunately the solution we came up with didn’t provide enough safety. So with profound apologies to Mick Malloy, I’ll continue with the description of the flight.

The takeoff itself went smoothly at 3:14 pm, and the Porter began its trip back towards the home waypoint while continuing to avoid dynamic obstacles.

Meanwhile the Kraken relay aircraft was already starting its landing sequence. The slow search time meant that the Kraken had reached its preset 20 minute flight time at 3:08 pm. We had set up the Kraken so that it would automatically start a landing if it was either recalled by the Porter or reached 20 minutes of flight time. The 20 minutes was a very conservative estimate to allow for the possibility that it needed to spend a long time holding off its landing waiting for clear airspace around home. As we were no longer attempting the full autonomy extension Stephen overrode the landing sequence and set the Kraken back to its normal holding pattern as an antenna relay. It had plenty of battery reserves and it was still needed as the radio relay because of the failure of telemetry from the Porter via the 3G links (caused by the bad cable discussed above).

The Kraken stayed on station until 3:16 pm when the Porter, well on its way home, sent an automatic Kraken recall message via the GCS. The Kraken landed at 3:18 pm, however the landing was well short of the desired location. We initially thought that this was due to low battery, but in fact it was caused by a too-shallow glide slope, which caused the flare to be triggered early as the rangefinder detected the ground closer than the flare height.

With the Kraken landed we just needed to wait for the Porter to come home. The trip back for the Porter was largely uneventful, with it arriving back at the home waypoint at 3:25 pm. As is required for the mission, it then turned to do a final lap down to the southeast waypoint before coming to home again. It reached that waypoint at 3:28 pm and started its final leg home for landing. It arrived at its landing pattern waypoints near home at 3:32 pm, and then automatically assessed the nearby obstacles. Both the Porter and Kraken missions were set up to assess whether airspace was clear before starting a landing approach. The first assessment was at 3:32 pm, and it determined that the airspace was not sufficiently clear for a landing. The landing airspace clear assessment requires that there be no obstacles that could come within 400 meters of the aircraft over the next 30 seconds. If this requirement is not met then the aircraft will continue in a pre-landing holding pattern until the airspace clears.
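
The airspace-clear check is easy to sketch: propagate each dynamic obstacle forward at constant velocity and require that nothing comes within 400 meters over the next 30 seconds. The version below is an illustration only; it treats the aircraft as stationary and works in a flat local frame.

    # Illustrative airspace-clear check; units are meters and meters/second.
    import math

    CLEAR_RADIUS_M = 400.0
    LOOKAHEAD_S = 30.0

    def airspace_clear(aircraft_pos, obstacles, step_s=1.0):
        """obstacles: list of (pos, vel) with pos=(N, E) and vel=(vN, vE)."""
        t = 0.0
        while t <= LOOKAHEAD_S:
            for (n, e), (vn, ve) in obstacles:
                on, oe = n + vn * t, e + ve * t
                if math.hypot(on - aircraft_pos[0], oe - aircraft_pos[1]) < CLEAR_RADIUS_M:
                    return False
            t += step_s
        return True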

Fifteen seconds after entering the holding pattern the Porter determined that airspace was now clear and it began its landing approach. The airspace clear requirement was then automatically re-assessed at each stage of the landing up until the VTOL motors engaged, and was in each case found to be clear to the required distance. The Porter engaged its VTOL motors at 3:34 pm and landed cleanly 30 seconds later.

After landing the organisers retrieved the blood sample from the Porter’s sample container, and our mission was at its end. We started the pack up process, moved the GCS van out of the pilot’s box and brought both aircraft in from the runway.

After the mission was complete we could relax and enjoy watching the other teams fly, as well as helping out with last minute questions from the 9 teams (out of 11) that were using ArduPilot.

The Missing Hatch

There was still one piece of unfinished business left from our flight. As I mentioned earlier, the organisers had reported when the aircraft had landed at the remote site that it was missing its top hatch. You can also see the hatch missing in the above photo of Jim Coyne taking the blood sample out of the Porter.

We wondered if we could identify where the hatch was lost from the onboard log, and perhaps even find the hatch. Peter and I looked carefully at the EKF barometer innovations in the log and found one particularly suspicious event.

The logs showed a spike in the EKF height innovation of over 5 meters at 2:42 pm while over a location about 1km south of the runway. We then looked to see if we ever passed close to the same location again and found that we had in fact passed within 150 meters of the same location twice more in the flight with our on-board camera operating.
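
That log search is easy to reproduce with pymavlink's dataflash log reader. This sketch assumes the EKF2 height innovation is logged as NKF3.IPD and takes a rough position from the GPS messages; those message and field names can differ between firmware versions, and the log file name is made up.

    # Sketch of the log search; message/field names and the file name are assumptions.
    from pymavlink import mavutil

    mlog = mavutil.mavlink_connection('porter-obc2018.bin')

    last_gps = None
    spikes = []
    while True:
        m = mlog.recv_match(type=['NKF3', 'GPS'])
        if m is None:
            break
        if m.get_type() == 'GPS':
            last_gps = m
        elif abs(m.IPD) > 5.0 and last_gps is not None:
            # height innovation over 5 meters: record when and roughly where
            spikes.append((m.TimeUS * 1.0e-6, last_gps.Lat, last_gps.Lng))

    for t, lat, lng in spikes:
        print("suspect event at t=%.1fs near %.6f %.6f" % (t, lat, lng))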

Armed with that information we modified “geosearch”, the image matching code we use for offline matching of targets, to look for an object with the dimensions of the hatch. We disabled the “red and blue” scoring system that we used for the 2018 target, and instead used our generic “unusual object” scoring system from the 2016 challenge. We ran it with a search radius of 200 meters around the point where we had seen the barometer innovation spike and got several hits on a likely object.

During the lunch break we got permission from the organisers to conduct a ground search for the hatch, as the location was close to a small farm road to the south of the runway. When we arrived Jimmy immediately spotted the hatch close to the location where geosearch had indicated, and we were able to bring the hatch home.

It seems likely that the hatch wasn’t properly secured after Peter had lifted it off when looking at the cabling issue between the two flights of the Porter. When I reported that the cable had come good he had quickly replaced the hatch but hadn’t pushed down hard enough for the velcro to fully engage. The airflow in the flight had lifted the hatch off within a few minutes of takeoff.

You may notice a small yellow duck featuring prominently in photos of CanberraUAV and the Porter. This duck was our team mascot and was even installed as the pilot of the Porter during its flight. While it wasn’t lost when the hatch came off we thought it appropriate for it to be part of the photo when we found the hatch.

Watching the rest of the teams fly was very enjoyable, and the great spirit of the competition was in full evidence as teams cheered each other on and commiserated over the inevitable failures. The Outback Challenge is a truly superb event and the organisers deserve a great deal of credit for how well they run the event.

The Scores

By Friday lunchtime the last flight was over and it was time to tally the scores. While no team met the conditions for “mission complete” Monash UAS got a well deserved win on points, and it was a great pleasure to see that team succeed so well after having progressed in their professionalism and technical abilities at each event since they first started participating back in 2012.

Team Dhaksha from India also did extremely well, with less than one point separating them from Monash UAS. Their use of a hybrid hexacopter brought a new twist to the challenge and demonstrated the practicality of multicopters for this type of mission.

Our team, CanberraUAV, came in 3rd, well behind the two leaders due to losing points by going overtime, by losing part of our aircraft (the hatch) and by using a manual geolocation of the target in the search area.

We did score a win for the automatic avoidance portion of the challenge. While we were not eligible for the grand prize as we hadn’t met the conditions required, we scored best in the avoidance section and were awarded the $10,000 prize for that result. That provides us with some funds to start thinking about the 2020 challenge.

Overall ArduPilot did extremely well. Teams using ArduPilot took out the top 6 places, although it was a great shame that MAVLab TUDelft didn’t do better with their Paparazzi based aircraft. They had a software failure that led to their aircraft disarming early, 3.8 meters above the landing location, resulting in sufficient damage to their aircraft that they couldn’t continue the mission. Their aircraft, the DelftaCopter (a hybrid helicopter, bi-plane delta-wing), was easily the most technically sophisticated aircraft in the competition and it is a shame they didn’t finish higher than the 7th place they scored.

The 2018 Outback Challenge had its high points and low points but was once again a fantastic event run professionally by the organisers and with a great spirit of involvement by all the teams. Huge thanks are due to the organisers, to the Dalby Model Aero Club, to all the teams involved and of course to the team’s sponsors.

We’d also like to thank the CanberraUAV sponsors HobbyKing and ProUAV Australia, our individual supporters on Patreon, and the members of our home flying club CMAC for their support. The Outback Challenge is not a cheap event to prepare for, and we are grateful for the wonderful support we have received.

We look forward to seeing what the organisers dream up for the 2020 UAV Challenge!
