This section contains links to recent code releases from SPL teams.
|Contributing Team:||Austrian Kangaroos|
|Summary:||Austrian Kangaroos has released their 2014 Technical Challenge whistle detector module as open source.|
|Contributing Team:||Berlin United – NaoTH|
|Summary:||Berlin United has released their 2014, 2016, and 2017 code bases online.|
|Website:||2017 release: https://github.com/BerlinUnited/NaoTH|
|2016 release: https://github.com/BerlinUnited/NaoTH|
|2014 release: https://github.com/BerlinUnited/NaoTH|
|Summary:||The release comes with an extensive team report (196 pages) that describes the current state of B-Human’s software system. The software released is a cleaned-up version of the system we used in the RoboCup 2016 final. However, most parts of the behavior have been removed. Our software is released with its own 3-D simulator. Microsoft Windows 64 bit, Linux 64 bit, and macOS are fully supported.|
|Website:||2017 release: https://github.com/bhuman/BHumanCodeRelease|
|2016 release: https://github.com/bhuman/BHumanCodeRelease/tree/coderelease2016|
|2015 release: https://github.com/bhuman/BHumanCodeRelease/tree/coderelease2015|
|2014 release: https://github.com/bhuman/BHumanCodeRelease/tree/coderelease2014|
|2013 release: https://github.com/bhuman/BHumanCodeRelease/tree/coderelease2013|
|Contributing Team:||Camellia Dragons|
|Summary:||Camellia Dragons has released their 2017 code base online.|
|Website:||2017 release: https://github.com/CamelliaDragons/CamelliaDragonsCodeRelease|
|Summary:||Two tools: 1) gusimplewhiteboard, a fast C++11 in-memory object-class forwarding mechanism. 2) clfsm, which executes compiled arrangements of concurrent logic-labeled finite-state machines (on a single thread). The latest paper describing this is: V. Estivill-Castro, R. Hexel and Carl Lusty, “High Performance Relaying of C++11 Objects Across Processes and Logic-Labeled Finite-State Machines”, International Conference on Simulation, Modelling, and Programming for Autonomous Robots (SIMPAR 2014), Bergamo, Italy, October 20–23. In Brugali, D. et al. (Eds.): SIMPAR 2014, Lecture Notes in Artificial Intelligence LNAI 8810, pp. 182–194. Springer International Publishing Switzerland (2014).
Downloads include examples and documentation for running with ROS (catkin environment); on the NAO we use bmake.|
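For readers unfamiliar with the terminology, the sketch below illustrates the idea of a logic-labeled finite-state machine: states whose outgoing transitions are guarded by Boolean expressions evaluated over shared whiteboard-style data, executed on a single thread. All names are hypothetical; this is not the gusimplewhiteboard or clfsm API.

    // Minimal illustration of a logic-labeled finite-state machine (LLFSM):
    // each transition carries a Boolean expression (its "logic label") that is
    // evaluated over shared whiteboard-style data. Hypothetical sketch only --
    // not the gusimplewhiteboard or clfsm API.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Whiteboard {                       // stands in for shared in-memory messages
        bool ballSeen = false;
        double ballDistance = 1e9;            // metres
    };

    struct Transition {
        std::function<bool(const Whiteboard&)> label;  // the "logic label"
        std::size_t target;                            // index of the target state
    };

    struct State {
        std::string name;
        std::function<void(Whiteboard&)> onEntry;
        std::vector<Transition> transitions;
    };

    int main() {
        Whiteboard wb;
        std::vector<State> machine = {
            {"SearchBall",
             [](Whiteboard& w) { std::cout << "scanning\n"; w.ballSeen = true; w.ballDistance = 0.4; },
             {{[](const Whiteboard& w) { return w.ballSeen; }, 1}}},
            {"ApproachBall",
             [](Whiteboard&) { std::cout << "walking to ball\n"; },
             {{[](const Whiteboard& w) { return w.ballDistance < 0.5; }, 2}}},
            {"Kick",
             [](Whiteboard&) { std::cout << "kick!\n"; },
             {}},
        };

        // Run on a single thread: a transition fires as soon as its label holds.
        std::size_t current = 0;
        machine[current].onEntry(wb);
        bool moved = true;
        while (moved) {
            moved = false;
            for (const Transition& t : machine[current].transitions) {
                if (t.label(wb)) {
                    current = t.target;
                    machine[current].onEntry(wb);
                    moved = true;
                    break;
                }
            }
        }
        return 0;
    }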
|Contributing Team:||Nao-Team HTWK|
|Summary:||Nao-Team HTWK has released their vision module from RoboCup 2014, as well as their 2016 and 2017 code.|
|Website:||2017 release: https://github.com/NaoHTWK|
|2016 release: https://github.com/NaoHTWK|
|Vision release: https://github.com/NaoHTWK/HTWKVision|
|Summary:||HULKs has released their 2017 code.|
|Website:||2017 release: https://github.com/HULKs/HULKsCodeRelease|
|Summary:||Multi-agent Coordination of the Nao Humanoid Robots in SPL.|
|Website:||2017 release: https://github.com/mrlspl/GamePlanner|
|Contributing Team:||Nao Devils|
|Summary:||Nao Devils Dortmund has released their 2013, 2014, 2016, and 2017 code bases online.|
|Website:||2017 release: https://github.com/NaoDevils/CodeRelease|
|2016 release: https://github.com/NaoDevils/CodeRelease2016|
|2014 release: http://www.irf.tu-dortmund.de/nao-devils/download/2014/NDDCodeRelease2014.zip|
|2013 release: http://www.irf.tu-dortmund.de/nao-devils/download/2013/NDD-CodeRelease2013.zip|
|Contributing Team:||Northern Bites|
|Summary:||Northern Bites has publicly posted their entire code base online. They also have a wiki on GitHub describing their code base, how to run it, and more.|
|Contributing Team:||NTU RoboPAL|
|Summary:||Team report and code release 2015|
|Website:||2015 release: https://drive.google.com/file/d/0B1d3UWuPZfdyVjlKRU1iTVpqUGc/view|
|Summary:||This release is a stand-alone, complete NAO kinematics software library. The C++ NAOKinematics library covers Aldebaran Robotics NAO versions H21 and H25, offers forward kinematics, inverse kinematics (analytical, closed-form solution), and center-of-mass calculation functions, and can be integrated into any existing C++ software architecture.|
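As a rough illustration of what a forward-kinematics routine in such a library computes, the minimal planar example below chains per-joint transforms to obtain an end-effector pose. This is not the NAOKinematics API; the joint angles and link lengths are placeholders, not NAO's real dimensions.

    // Minimal forward-kinematics sketch: chain 2-D transforms (rotate by the
    // joint angle, then translate along the link) to get the end-effector pose.
    // Illustrative only -- not the NAOKinematics API; all values are made up.
    #include <array>
    #include <cmath>
    #include <cstdio>

    struct Pose2D { double x, y, theta; };

    // Apply one joint: rotate by the joint angle, then move along the link.
    static Pose2D applyJoint(const Pose2D& p, double jointAngle, double linkLength) {
        double theta = p.theta + jointAngle;
        return { p.x + linkLength * std::cos(theta),
                 p.y + linkLength * std::sin(theta),
                 theta };
    }

    int main() {
        // Two-joint planar chain (e.g. a simplified limb seen from above).
        std::array<double, 2> jointAngles = { 0.3, -0.6 };    // radians (placeholders)
        std::array<double, 2> linkLengths = { 0.105, 0.056 }; // metres (placeholders)

        Pose2D endEffector = { 0.0, 0.0, 0.0 };               // base frame
        for (std::size_t i = 0; i < jointAngles.size(); ++i)
            endEffector = applyJoint(endEffector, jointAngles[i], linkLengths[i]);

        std::printf("end effector: x=%.3f y=%.3f theta=%.3f\n",
                    endEffector.x, endEffector.y, endEffector.theta);
        return 0;
    }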
|Summary:||Team rUNSWift released code after their championship at RoboCup 2014. The documentation is in the form of a wiki, which is part of the repository. The code released contains a slightly cleaned up version of their code from the 2014 competition, but with most of the behaviour removed.|
|Website:||2017 release: https://github.com/UNSWComputing/rUNSWift-2017-release|
|2016 release: https://github.com/UNSWComputing/rUNSWift-2016-release|
|2015 release: https://github.com/UNSWComputing/rUNSWift-2015-release|
|2014 release: https://github.com/UNSWComputing/rUNSWift-2014-release|
|Summary:||The SPQR Ball Perceptor is a software module for black and white ball detection developed by the SPQR Team to be used within the B-Human framework.
The SPQR Ball Perceptor is based on a supervised approach implemented in OpenCV. In particular, an LBP cascade classifier has been trained to detect the official RoboCup SPL ball. The detector can be used without modification both indoors and outdoors, inside and outside the game field.
Details about how to generate the classifier are available in the tutorial “How to Use OpenCV for Ball Detection – RoboCup SPL Use Case” (http://profs.scienze.univr.
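For teams wanting to experiment with the same approach outside the B-Human framework, the detection side maps onto OpenCV's standard cascade-classifier API. The stand-alone sketch below loads a trained cascade and runs it on an input image; the file names and detection parameters are placeholders, not values taken from the SPQR release.

    // Stand-alone sketch of cascade-based ball detection with OpenCV.
    // The cascade file and the detection parameters are placeholders;
    // consult the SPQR release and tutorial for the actual trained model.
    #include <opencv2/objdetect.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <iostream>
    #include <vector>

    int main(int argc, char** argv) {
        if (argc < 3) {
            std::cerr << "usage: balldetect <cascade.xml> <image>\n";
            return 1;
        }

        cv::CascadeClassifier cascade;
        if (!cascade.load(argv[1])) {             // trained (e.g. LBP) cascade
            std::cerr << "could not load cascade\n";
            return 1;
        }

        cv::Mat frame = cv::imread(argv[2]);
        if (frame.empty()) {
            std::cerr << "could not read image\n";
            return 1;
        }

        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);             // helps under varying lighting

        std::vector<cv::Rect> balls;
        // scaleFactor / minNeighbors / minSize are placeholder values to tune.
        cascade.detectMultiScale(gray, balls, 1.1, 3, 0, cv::Size(16, 16));

        for (const cv::Rect& r : balls)
            std::cout << "ball candidate at " << r.x << "," << r.y
                      << " size " << r.width << "x" << r.height << "\n";
        return 0;
    }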
|Website:||2017 release: https://github.com/TJArk-Robotics/coderelease_2017|
|2016 release: https://github.com/TJArk-Robotics/coderelease_2016|
|Contributing Team:||UChile Robotics Team|
|Summary:||UChile Robotics Team has shared the code of the ball perceptor used in 2016 to solve the ball problem. They indicated that it still has some minor issues but may be a good start for new teams joining in 2017. The code is shared on GitHub, including a wiki explaining how to use it and a general overview of the strategy used to detect the ball.|
|Summary:||Open Source Code|
|Website:||2016 release: https://github.com/UPenn-RoboCup/UPennalizers|
|Contributing Team:||UT Austin Villa|
|Summary:||After winning RoboCup 2012, Austin Villa opted to do a partial release of their core code. This release includes their software architecture, streamlined vision processing, localization, localization simulator, kick engine, and debug tool. Austin Villa has also released their vision processing code from 2016 and 2017.|
|Website:||2017 release: https://github.com/LARG/spl-release|
|2016 release: https://github.com/LARG/spl-release|
|2012 release: http://www.cs.utexas.edu/~AustinVilla/?p=downloads/source_code_and_binaries|
|Summary:||The new SPL GameController was rewritten from the ground up and aims to be easier to maintain. It also reflects the rule changes made for RoboCup 2014.|
This section contains version-specific hacks that work around issues in different versions of NaoQi.
- Somewhere between NaoQi 1.10.37 and 1.14.1 the value of “Device/SubDeviceList/InertialSensor/AccY/Sensor/Value” had the sign flipped (this is the Accelerometer in the y-dimension). Multiply it by -1 to get the old values. (Contributed by RoboEireann)
- Upgrading to 1.14.1 from 1.10.37, the raw gyroscope values “Device/SubDeviceList/InertialSensor/GyrX/Sensor/Value” and “Device/SubDeviceList/InertialSensor/GyrY/Sensor/Value” changed in both range and scale. RoboEireann empirically found that scaling these raw values by 0.7 brought them back approximately to the values they had previously seen (see the sketch following these notes).
- The V3 cameras have a few issues in 1.14.1. RoboEireann found that the frame buffers were still not marked as cacheable and that the auto black level functions could not be disabled. RoboEireann forked the Aldebaran kernel and patched the camera driver to fix both of these issues. The kernel is hosted at https://github.com/mp3guy/linux-aldebaran, with versions 1.12 and 1.14. To compile the patched driver, check out the source above and then:
- Get the kernel config off a robot (/proc/config.gz)
- Extract it to the root of the kernel source as .config
- Run make ARCH=i386
- Make coffee
- If you get an error about arch/x86/vdso and -m elf_i386, open arch/x86/vdso/Makefile and find the line containing VDSO_LDFLAGS and replace -m elf_i386 with -m32
- Copy drivers/media/video/lxv4l2/lxv4l2.ko to your nao on /lib/modules/188.8.131.52-rt24-aldebaran-rt/kernel/drivers/media/video/lxv4l2/lxv4l2.ko (you’ll need root for this)
- Reboot and enjoy the image being loaded into the Geode’s cache and auto black level settings being disabled by default.
- The sonars work differently from what the documentation suggests (NaoQi 1.14.0-2). First of all, in firing modes 0-3 the readings are all returned as measurements of the right sensor. In addition, only 70 ms after firing a sensor, the readings seem to be correct. Before that, they still might correspond to previous measurements. The firing modes 4-7 always fire both transmitters and read with both receivers. In modes 4 and 7, the receivers measure the pulse sent by the transmitters on the same side. In modes 5 and 6, they measure the pulse sent by the transmitters on the opposite side. They do not seem to disturb each other. They either use different frequencies or they are fired at different points in time. (Contributed by B-Human)
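A minimal sketch of how the inertial-sensor corrections and the 70 ms sonar wait described above might be applied when reading values through ALMemory follows. The ALMemory keys are the standard NaoQi 1.14 ones and the -1 / 0.7 factors come from the notes in this section, but the proxy setup is illustrative and untested, not a drop-in module.

    // Sketch: reading NaoQi 1.14 sensor values through ALMemory and applying
    // the version-specific corrections noted in this section.
    #include <alproxies/almemoryproxy.h>
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        AL::ALMemoryProxy memory("127.0.0.1", 9559);

        // Accelerometer Y: sign flipped somewhere between 1.10.37 and 1.14.1,
        // so multiply by -1 to recover the old convention.
        float accY = memory.getData("Device/SubDeviceList/InertialSensor/AccY/Sensor/Value");
        float accYOld = -1.0f * accY;

        // Raw gyro X/Y: empirically scale by 0.7 to approximate the 1.10.37 range.
        float gyrX = memory.getData("Device/SubDeviceList/InertialSensor/GyrX/Sensor/Value");
        float gyrY = memory.getData("Device/SubDeviceList/InertialSensor/GyrY/Sensor/Value");
        float gyrXOld = 0.7f * gyrX;
        float gyrYOld = 0.7f * gyrY;

        // Sonar: after firing the transmitters, wait at least 70 ms before
        // trusting the readings, otherwise they may belong to a previous ping.
        std::this_thread::sleep_for(std::chrono::milliseconds(70));
        float usLeft  = memory.getData("Device/SubDeviceList/US/Left/Sensor/Value");
        float usRight = memory.getData("Device/SubDeviceList/US/Right/Sensor/Value");

        std::cout << "accY(old)=" << accYOld
                  << " gyr(old)=(" << gyrXOld << "," << gyrYOld << ")"
                  << " us=(" << usLeft << "," << usRight << ")\n";
        return 0;
    }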