For the week of July 31st, the main tasks were to finish fixing the DDSA implementation and to gather statistics on its runs so that it could be compared to the CPFA. However, the DDSA implementation stopped running properly after a mid-week merge with another DREU intern's addition of path-planning behavior to avoid the center between points (my previous addition only kept the points themselves outside the center). The bug introduced by this merge caused the spiral to be offset by about 1.3 meters to the west and south, and to start larger than intended. This offset severely hampered the effectiveness of the DDSA, because the spiral avoided several decently-sized clusters that it would normally collect from.
The cause of this behavior wasn't found before Friday, when we were due to present our findings so far, with some preliminary statistics collected that day. In its broken state, the DDSA performed extremely poorly in comparison to the CPFA: around 12-13 blocks collected per 30-minute run, versus 50-60 collected by the CPFA in the same amount of time with the exact same distribution of cubes. The base code, which implements a random walk (a Brownian-motion-like search), performed slightly worse than the CPFA, at 45-55 blocks in a 30-minute round. After the presentation, we took the code for the DDSA alone (excluding the center-avoidance code that caused the bug when it was merged in) out of the branch and merged it into the base code- the problem was fixed and the algorithm performed normally.
For the week of July 24th, I continued to fix and revise the DDSA implementation as needed. Additionally, some improvements to the base code were made and incrementally merged in- issues caused by the merges were fixed as they were identified, before moving on with the implementation.
The math described in the paper was sufficient for the rovers once the dimensions of the rover were accounted for in 'g' (the gap between lanes in the spiral): there were no terms to add or other factors to multiply in besides the modified gap, which is based on the length of the rover's diagonal plus a small margin covering the rover's near blind spot (each of those terms had to be measured from the simulation, because our physical rovers were not fully assembled at the time). However, we did have to adjust the spiral pattern as a whole to account for our non-point center dropoff zone: a small term is added to the first circuit and to the "north" and "east" portions of the second circuit in order to offset the overall spiral from the center, since we do not want rovers driving through it.

From July 15th to July 17th, the DREU interns were in Boston to run the Hackathon and to pack and ship the supplies back to Albuquerque, before returning ourselves.
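The lane-gap adjustment described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code; the function name and the sample values are assumptions, and the real diagonal and blind-spot margin were measured from the simulation.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the lane-gap adjustment: the paper's gap 'g'
// is widened by the rover's diagonal plus a small blind-spot margin so
// adjacent lanes still overlap in sensor coverage. All values here are
// illustrative, not measured.
double effectiveGap(double nominalGap, double roverLength,
                    double roverWidth, double blindSpotMargin) {
    // Diagonal of the rover's footprint.
    double diagonal = std::hypot(roverLength, roverWidth);
    return nominalGap + diagonal + blindSpotMargin;
}
```

With a 0.4 m by 0.3 m rover (diagonal 0.5 m) and a 0.1 m margin, a nominal 0.5 m gap becomes 1.1 m between lanes.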
The RSS Hackathon had some technical issues (someone had done a `git push --force` incorrectly and overwrote our progress), but it was nevertheless a useful experience for everyone involved. We stayed up overnight helping all the teams- some had significantly more experience than others, and some didn't know C++ at all. All around, it was a good learning experience, both for the competitors learning more about robotics software and for us mentors as teachers and organizers. When we returned from the Hackathon, we had an extended weekend due to the strenuousness of the event. We got back to work on the 19th, and I started working on the code refactor to fix a few remaining bugs. After most were taken care of and Jarett (one of the long-term, non-DREU interns) had claimed the last few, I began work on the implementation of the Distributed Deterministic Spiral Algorithm. I worked out how the rovers would determine their index and the overall size of the swarm, and adapted the math to account for the dimensions and sensing capabilities of the rovers, while another DREU intern worked on the generation of the spiral pattern itself.

Because of the short week around the July 4th holiday, I'll merge the two weeks together for this update.
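The square-spiral waypoint idea behind the DDSA can be sketched roughly as follows. This is a simplified illustration, not the paper's exact formulation: the per-rover index offset and the swarm-size gap multiplier are assumptions standing in for the interleaving math described above.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Simplified square-spiral waypoint generator, loosely in the spirit of
// the DDSA. In the real algorithm each rover's index and the swarm size
// determine its interleaved spiral; here the index just offsets the start
// and the swarm size widens the gap (illustrative assumptions).
std::vector<std::pair<double, double>> spiralWaypoints(int legs, double gap,
                                                       int roverIndex,
                                                       int swarmSize) {
    double g = gap * swarmSize;           // lanes shared across the swarm
    double x = 0.0, y = roverIndex * gap; // each rover starts one lane out
    std::vector<std::pair<double, double>> pts{{x, y}};
    // Direction cycle: north, east, south, west.
    const int dx[] = {0, 1, 0, -1};
    const int dy[] = {1, 0, -1, 0};
    int steps = 1;
    for (int leg = 0; leg < legs; ++leg) {
        int d = leg % 4;
        x += dx[d] * steps * g;
        y += dy[d] * steps * g;
        pts.push_back({x, y});
        if (d % 2 == 1) ++steps; // lengthen the legs every two turns
    }
    return pts;
}
```

For a single rover with a 1 m gap, the first legs trace the familiar outward square spiral: north 1, east 1, south 2, west 2, and so on.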
From July 5th to July 7th, my main task was the camera plug-in for the Gazebo simulator, done together with a fellow DREU intern. He mainly wrote the code, while I mainly did the math required for figuring out the coordinate system, the constraints of the camera joints, and the force to use for turning them, as well as research into Gazebo's plugin system. I also wrote the appropriate sections in the model.sdf file used to represent the rover in the simulation. Final prep was also done for the RSS Hackathon: I started printing "stand-offs" for the rovers, packing up supplies to ship out, and preparing for the competition itself the next week. Through this minor project I learned a bit about how Gazebo models coordinate systems: joints are positioned relative to their first parent "link" (a physical point on the model), while links are placed relative to the base of the model. Revolute joints are the most common type; they rotate around an axis or set of axes located at a certain point, can be constrained to particular axes, and each axis can limit the range of angles the joint can turn to. From July 10th to July 14th, I mostly prepared and tested code for the competition. It was more or less final by Wednesday- Thursday was left to test it and get packed to leave Friday morning. We flew out to Boston from Albuquerque on Friday morning at about 11AM MST and arrived in Boston sometime around 9:30PM EST, got situated in the on-campus building we were staying at, and prepared for the Hackathon running from Saturday to Sunday.

This update is coming a little late, but my fourth week of this internship was filled with more Hackathon work- this time to produce a demo for it by June 30th.
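A revolute joint with constrained axes, as described above, looks something like this in SDF. This fragment is purely illustrative; the joint and link names and the angle limits are hypothetical, not the project's actual model.sdf values.

```xml
<!-- Illustrative SDF fragment: a camera pan joint constrained to rotate
     about the vertical axis within +/- 90 degrees. Names and limits are
     hypothetical. -->
<joint name="camera_pan_joint" type="revolute">
  <parent>chassis</parent>
  <child>camera_link</child>
  <axis>
    <xyz>0 0 1</xyz>        <!-- rotation axis: vertical (z) -->
    <limit>
      <lower>-1.5708</lower> <!-- minimum angle, radians -->
      <upper>1.5708</upper>  <!-- maximum angle, radians -->
    </limit>
  </axis>
</joint>
```

The joint's pose is interpreted relative to its child link, which is itself placed relative to the model's base, matching the coordinate-system behavior noted above.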
Monday and Tuesday were more prep based on the lessons of the mock Hackathon the previous week: we worked out a unified approach to the problem based on improvements developed by the student heading the Hackathon. Our job was to test that code and make sure the simulation carried over to the physical robots. We got the robots moving and added in some old code that produced waypoints for them to visit; instead of overshooting each waypoint as they did before, they stopped essentially right on the mark. After we verified that everything in the base code was working as it should, we got to work on the problems of pathfinding and obstacle avoidance. I was assigned the problem of designing and implementing an occupancy grid- a grid of cells indicating whether or not there is something in each one. Using this grid, robots could figure out paths around obstacles as they found them, and if obstacles were broadcast to other rovers, they could all avoid the same obstacles. This took up Wednesday, Thursday, and part of Friday. The reason it took so long was that each person made a different branch off the main code, and we had to incrementally merge all the changes as we tested them, then make sure the merged code also worked as expected. The occupancy grid module I had written was mostly independent of the other changes, but the one section of code I had to modify to integrate it into the rover's program was heavily touched by the others. Eventually all the bugs were fixed in the simulation and the rovers moved normally there. The demo was due in about half an hour, so we loaded the code onto the rovers to test them beforehand, and found that they consistently crashed on each of our integrated code branches except the main one, which did not have everything integrated, only the last version of the code tested on the rovers.
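The occupancy-grid idea can be sketched with a small class. This is a minimal sketch under assumed conventions (grid origin at the world origin, fixed cell size, boolean cells), not the module actually written for the demo.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal occupancy-grid sketch: a grid of cells flagging whether an
// obstacle has been observed there. The world-to-grid mapping and cell
// size are illustrative assumptions.
class OccupancyGrid {
public:
    OccupancyGrid(int width, int height, double cellSize)
        : width_(width), height_(height), cellSize_(cellSize),
          cells_(width * height, false) {}

    // Mark the cell containing world coordinate (x, y) as occupied.
    void markObstacle(double x, double y) {
        int cx, cy;
        if (toCell(x, y, cx, cy)) cells_[cy * width_ + cx] = true;
    }

    // True if the cell containing (x, y) holds a known obstacle.
    bool isOccupied(double x, double y) const {
        int cx, cy;
        return toCell(x, y, cx, cy) && cells_[cy * width_ + cx];
    }

private:
    // Grid origin at the world origin; out-of-bounds points are ignored.
    bool toCell(double x, double y, int& cx, int& cy) const {
        cx = static_cast<int>(std::floor(x / cellSize_));
        cy = static_cast<int>(std::floor(y / cellSize_));
        return cx >= 0 && cx < width_ && cy >= 0 && cy < height_;
    }

    int width_, height_;
    double cellSize_;
    std::vector<bool> cells_;
};
```

A rover would mark cells as its sensors detect obstacles, and a path planner would query `isOccupied` before committing to a waypoint; broadcasting marked cells lets other rovers avoid the same obstacles.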
We demoed that code instead of the integrated changes, and it was deemed satisfactory for the purposes of the competition in Boston on July 16th.

This week was almost entirely taken up by testing the code for the Hackathon competition coming up on July 15th. My coworkers and I were tasked with debugging the current competition code, gauging just how difficult the competition would be, and determining whether there were enough resources to complete it in the given timeframe. Monday was the day we were meant to start, but there were still some adjustments to be made before the code was ready to test, so I spent most of the day working on a style guide for new features in C++11- we had recently upgraded our project's configuration files to allow for it and planned to use it in future code, but we needed a style guide to keep that code readable. The rest of the week, however, was filled with testing various aspects of the Hackathon competition as it was currently set up, a month before the real event. Tuesday was hardware setup: getting the tracking cameras functional and checking that all the connections worked and the rovers were functional. Wednesday was a bit more active, testing the basic software setup, adjusting it as needed, and communicating the fixes to the person developing the competition. By the end of the day, it was functional enough to begin the real test of the competition itself. Thursday and Friday were the two days allotted to actually test the competition and see how much we could get done in the timeframe. We initially worked out the problem space and discovered limitations in the physical setup, drawing a border around the area where we could safely drive the rovers. The rest of the day was dedicated to working out different methods of getting a rover to precisely follow a path when the position and orientation updates from the cameras arrived so slowly.
By the end of the day we had two methods. The one I had designed kept the default constant-frequency logic loop and added a timer to the position-update handler function: the rover used the time since the last position update as a motor cutoff, so if it hadn't received an update within a certain amount of time, it froze in place until a new one arrived. The other method was designed by a coworker familiar with the codebase. It removed the constant-frequency timer from the mobility logic function and instead called that function directly from the position-update handler. This way, the rover reacted instantly to a position update, but only when it received one. It also had a motor cutoff, but as a constant amount of time after receiving an update. This was technically a more robust approach to making sure the rover only moved while its position information was valid, but I took issue with it because it did not allow for the constant-time updates required for precise movements, used when picking up and dropping off blocks. We left the office still disagreeing on which method to use. The next day, after some discussion with other members, we agreed to use the constant-frequency update method combined with a tuned speed, such that the rover would move a shorter distance the longer it had gone without a position update. After some testing, this method let us follow a path precisely enough for our needs, and the rover could also pick up cubes we had scattered about the area, so we deemed it satisfactory and continued on with the little time that remained. The next task we chose was to follow a particular tag indicating the position of a block.
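The compromise we settled on can be sketched as a single speed-scaling function called from the constant-frequency loop. This is an illustrative sketch, not the actual tuning used: the linear taper and the cutoff constant are assumptions.

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the agreed-upon mobility approach: a constant-frequency loop
// scales the commanded speed down as the last position update grows stale,
// stopping entirely past a cutoff. The linear taper is an illustrative
// choice, not the tuned curve actually used.
double scaledSpeed(double desiredSpeed, double secondsSinceUpdate,
                   double cutoffSeconds) {
    if (secondsSinceUpdate >= cutoffSeconds) return 0.0; // stale: freeze
    // Full speed on a fresh update, tapering to zero at the cutoff.
    double freshness = 1.0 - secondsSinceUpdate / cutoffSeconds;
    return desiredSpeed * std::max(0.0, freshness);
}
```

The constant-frequency loop keeps running (so precision maneuvers like block pickup still get regular updates), while the taper bounds how far the rover can drift between camera updates.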
This turned out to be a bit more difficult than we initially suspected, since the tag location was published on its own channel for the rovers to listen to, rather than as part of a collection of tag positions. We eventually got it working, though by then there was only an hour left in the day to implement obstacle avoidance, path planning, and cube return. We didn't finish, but that wasn't the goal: we gathered plenty of valuable information for the competition's developer, and we got enough experience with the competition to help others with technical issues and questions during the real Hackathon.

The First Two Weeks... This post is a little bit behind, but it has been a busy and productive first two weeks here at the University of New Mexico!
The first day after arriving, we got right down to business with our tasks. After some discussion, we unanimously decided that the codebase was in serious need of a refactor and an update to a more modern version of C++, C++11- so we jumped right in, not wanting to waste time. The other DREU participant assigned to the same task had never touched the codebase before, so we spent some time bringing him up to speed and doing a cursory check over the main module ourselves. Having participated in the Swarmathon competition for the previous two years, I was fairly familiar with the codebase and had already started brainstorming ideas for the refactor. We decided that the main module took on far too much responsibility, rather than the minimal amount of functionality it should have handled at that point. Our initial plan was to split that functionality into several main Controller classes- SearchController, PickUpController, DropOffController, and the new ObstacleController. Each of these controllers would be fed input from various signals received from the ROS ecosystem (camera and ultrasound information, mostly) and would determine whether it needed to send an interrupt signal to the main loop running in the primary module. If it did, it would be given some amount of control over the rover's driving until it no longer needed to interrupt the main driving behavior. By the end of the first week, we had implemented this system and gotten it mostly working- I took the DropOffController refactor and helped design the ObstacleController code. Partway through this initial stage of refactoring, I noticed a similarity between this system and how an operating system's kernel registers interrupt handlers for various hardware and software interrupts. The main difference was that the modules raising the interrupts also controlled the handling of them.
Noticing this similarity, and pointing out that hard-coding the order in which interrupts were handled was not particularly beneficial, I proposed a generic system in which all Controllers implement a shared API, represented by a Controller abstract base class. Controllers could then be registered into a priority queue, where the priorities could be modified in data instead of in code. This way, instead of switching code paths with conditional statements, a simple re-assignment of priorities would allow for different behaviors at different stages of the robot's control. This became the task of the second week. We verified that our changes from the first week still produced the same behavior as the base code, and then created a new Controller type, the LogicController. LogicController itself implements the Controller interface (potentially allowing for recursive strategies of control) and is responsible for managing the other controllers, passing them information and determining what to do with their output. This was the most complex portion of the refactor and is still not complete after the second week- it involves taking almost the entirety of the 1000+ line mobility.cpp file's functionality, passing it on to the various Controllers, and making sure that all the states and transitions still behave as they did before. At our Thursday tech meeting, one last change was proposed: eliminating all ROS dependencies within the Controller framework, moving them to another module, and moving the remainder of mobility.cpp's functionality (at this point, only the code for actually driving the rover from waypoint to waypoint or sending precision driving commands) into a new Controller, DriveController. These changes make the entire module independent of the environment it runs in and remove the need to understand its specifics.
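The Controller framework described above can be sketched as follows. This is a hedged illustration, not the project's actual interface: the method names, the `ObstacleController` example, and the highest-priority scan (standing in for the priority queue) are assumptions for the sake of a runnable example.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Shared API for all controllers (illustrative sketch; method names are
// assumptions, not the real interface).
class Controller {
public:
    virtual ~Controller() = default;
    virtual bool shouldInterrupt() const = 0; // does this controller need control?
    virtual std::string doWork() = 0;         // produce a driving action
};

// Example concrete controller (hypothetical): interrupts when an
// obstacle flag is set.
class ObstacleController : public Controller {
public:
    explicit ObstacleController(bool obstacle) : obstacle_(obstacle) {}
    bool shouldInterrupt() const override { return obstacle_; }
    std::string doWork() override { return "avoid"; }
private:
    bool obstacle_;
};

// Manages registered controllers; priorities live in data, so reordering
// behavior needs no code change. A linear scan stands in for the priority
// queue here for brevity.
class LogicController {
public:
    void registerController(int priority, std::shared_ptr<Controller> c) {
        controllers_.push_back({priority, std::move(c)});
    }

    // Hand control to the highest-priority controller requesting an
    // interrupt; fall back to normal driving otherwise.
    std::string step() {
        Controller* best = nullptr;
        int bestPriority = -1;
        for (auto& entry : controllers_) {
            if (entry.controller->shouldInterrupt() &&
                entry.priority > bestPriority) {
                best = entry.controller.get();
                bestPriority = entry.priority;
            }
        }
        return best ? best->doWork() : "drive";
    }

private:
    struct Entry { int priority; std::shared_ptr<Controller> controller; };
    std::vector<Entry> controllers_;
};
```

Because `LogicController` could itself implement `Controller`, whole groups of controllers can nest, which is the "recursive strategies of control" possibility mentioned above.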
This way we can focus entirely on the algorithms we are meant to implement instead of quirks of the simulation or operating system. While we did not make much direct progress on the DDSA implementation itself, these changes will make that implementation much easier and quicker, and will lead to more maintainable code in the future.
Author: Kelsey Geiger, a maker, learner, teacher, and doer. Proud to be out and proud of her work.