Our drop-in player implements a defensive behavior. Based on its player number, it chooses one of three sections within our team’s half and tries to defend that area. The robot patrols its assigned section until it finds the ball; once it does, it approaches the ball and kicks it towards the opponent’s goal.
If we play with jersey number 1, we assume the role of the goalkeeper. The player tries to position itself within the penalty box during the ready state and remains there. If the goalie detects a ball moving towards the goal line, it dives, attempting to block the shot.
For the Drop-in Player Competition we use the same code and role-switching behavior as in our regular team play. The robot becomes “striker” and goes to the ball if it is closer to the ball than the other players. The decision is based on the information communicated by the teammates.
The Drop-In Player Competition games are mainly played with the same software as the regular games; significant adaptations concern only communication and behavior. As some information given by other players might be imprecise, wrong, or even missing, the reliability of each teammate is estimated during a game, in a similar way as in 2014 and 2015. In addition, we accept suggestions by our teammates if a majority of the trusted teammates agrees on a role for our robot. As passes are rewarded with positive scores, our drop-in player tries to pass to teammates that report an adequate target position; for normal games, passes are not activated. If the vision system reports a teammate close to the ball, we do not try to play the ball. As we believe that our robot contributes more to the overall team performance as a field player, the goalkeeper role is avoided.
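The majority-vote acceptance of role suggestions described above can be sketched as follows. This is an illustrative reconstruction, not the team’s actual code; the function name, role strings, and the strict-majority threshold are assumptions.

```python
from collections import Counter

def accept_suggested_role(suggestions, trusted):
    """Accept a role suggestion only if a majority of the trusted
    teammates agree on the same role for our robot.

    suggestions: dict mapping teammate id -> suggested role string
    trusted:     set of teammate ids currently estimated as reliable
    """
    # Count only votes from teammates we currently trust.
    votes = Counter(role for tid, role in suggestions.items() if tid in trusted)
    if not votes:
        return None
    role, count = votes.most_common(1)[0]
    # Require a strict majority of the trusted teammates.
    if count * 2 > len(trusted):
        return role
    return None
```

With three trusted teammates, two matching suggestions are enough; a 1–1 split yields no accepted role.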
One new element that we introduced for the 2016 competition is gestures. In general, the audience and the judges cannot know whether an action is intended by the robot or just happened accidentally. Therefore, we implemented two gestures: waving and pointing. Whenever we play a pass to a teammate or decide to let a teammate play the ball, the robot points towards that robot to express its intention in a comprehensible manner. If the robot positions itself to receive a pass from a teammate, it raises its arm and waves to indicate that it is waiting.
Our strategy for the Drop-in Player Competition games is largely the same software as in our regular games, as described in our team description paper. In addition, we developed a new strong kick motion composed of several phases. With it, our robot is able to pass to teammates and to score goals.
Our ball vision was not the best, so our strategy was to position ourselves in defensive locations, taking into account the information published in the standard message (teammate locations, ball location, etc.). Essentially, we tried to minimize open angles on the goal.
Unfortunately, prior to the start of the competition we had an issue where the robot would go limp every couple of seconds. This was eventually traced to a bad cable connecting the head to the body of the robot. However, the problem was not diagnosed and fixed until after the Drop-in Player Competition had started, so we ultimately did not get to try out our strategy.
DAInamite’s drop-in strategy does not differ from our normal game behavior. With jersey number one, our agent plays as goalie: it kicks away any ball within or close to its own penalty area and otherwise returns to its goal.
All other players select one of a number of fixed home positions for kick-off or for returning to when they are not actively pursuing the ball. If the agent determines it is closest to the ball (with some hysteresis to decrease oscillation), it pursues the ball; otherwise it waits at or returns to a support position and watches, taking care not to interfere with its teammates. For this purpose, the robot’s own ball observation is compared to the communicated one; if the two are not within 1 m of each other, we consider them to be different balls (possibly a false positive) and may go for the ball anyway. Our own agent’s intentions are communicated (goTo positions, pursue ball, kickTo), but other robots’ intentions are not yet directly considered in our agent’s decision making (only their positions and communicated ball). When actively going for the ball, the robot aligns itself behind the ball to shoot towards the opponent’s goal and kicks or dribbles (the configuration differs from robot to robot depending on the stability and effectiveness of that robot’s kick: kick distance, probability of falling, etc.).
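The two decision rules above, closest-to-ball with hysteresis and the 1 m ball-matching test, can be sketched as follows. This is a minimal illustration; the hysteresis margin of 0.5 m and the function names are assumptions, only the 1 m matching radius comes from the text.

```python
import math

BALL_MATCH_RADIUS = 1.0   # metres; stated in the text
HYSTERESIS = 0.5          # metres; assumed bonus for the robot already pursuing

def same_ball(own_ball, communicated_ball, radius=BALL_MATCH_RADIUS):
    """Treat two (x, y) ball estimates as the same ball if within `radius`."""
    return math.dist(own_ball, communicated_ball) <= radius

def should_pursue(own_dist, teammate_dists, currently_pursuing):
    """Pursue the ball if we are closest; the hysteresis bonus keeps the
    current pursuer in its role and decreases oscillation."""
    effective = own_dist - (HYSTERESIS if currently_pursuing else 0.0)
    return all(effective <= d for d in teammate_dists)
```

A robot already pursuing keeps the ball even when a teammate becomes marginally closer, while a non-pursuer only takes over once it is strictly nearest.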
We broadcast our intentions, filling all fields from the SPLStandardMessage.
We listen to our teammates’ suggestions to decide whether we should walk towards the goal. The player with the most votes is assumed to be the goalie; in case of a tie, the lowest player number becomes goalie. A robot casts a vote for itself if it believes it is closest, and also if it is player #1 (easy for the referee). If a teammate broadcasts its intention to be goalie, the voting is not followed. From the moment a robot arrives in the penalty box, it broadcasts its intention to be the goalie.
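The goalie election just described can be sketched as below. This is an illustrative reconstruction under the stated rules (claims override votes, most votes wins, ties break towards the lowest player number); the function and parameter names are assumptions.

```python
def elect_goalie(votes, claimed_goalies):
    """Determine the goalie from broadcast votes.

    votes:           dict mapping voter player number -> player voted for
    claimed_goalies: set of players broadcasting the intention to be
                     goalie (e.g. already in the penalty box)
    """
    # A broadcast goalie claim overrides the voting entirely.
    if claimed_goalies:
        return min(claimed_goalies)
    tally = {}
    for voted_for in votes.values():
        tally[voted_for] = tally.get(voted_for, 0) + 1
    if not tally:
        return None
    # Most votes first; ties broken by lowest player number.
    return min(tally, key=lambda p: (-tally[p], p))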
While a robot is not the goalie, it is a general field player. If it has seen the ball recently, it walks to it, aligns towards the goal, and kicks. Player localization and kick direction are not trustworthy enough to pass. If the ball has not been seen recently, the robot walks to one of four dynamically chosen positions (the one nearest to its current position that has no teammate within a radius of 500 mm) and scans for the ball.
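The search-position choice, nearest candidate with no teammate within 500 mm, can be sketched as follows. The fallback to the nearest candidate when all four are occupied is an assumption, as are the names.

```python
import math

CLEARANCE_MM = 500.0  # skip candidates with a teammate this close

def choose_search_position(own_pos, candidates, teammate_positions):
    """Pick the candidate position nearest to the robot that has no
    teammate within CLEARANCE_MM; fall back to the nearest candidate
    overall if every candidate is occupied (assumed behavior)."""
    def clear(c):
        return all(math.dist(c, t) > CLEARANCE_MM for t in teammate_positions)
    free = [c for c in candidates if clear(c)]
    pool = free if free else candidates
    return min(pool, key=lambda c: math.dist(own_pos, c))
```

All coordinates are in millimetres, matching the 500 mm clearance radius from the text.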
In the 2016 Drop-in Player Competition, we participated with our current striker behavior. Our general idea for the Drop-in Player Competition is to use its games as test games for experimental or new features. As in previous years, we completely ignore all incoming SPL messages (as we do in normal games), but send correct SPL messages about our state to the other players.
Our striker behavior tries to kick the ball into the opponent’s goal. The precision of goal attempts is weighted less when the robot is in its own half of the field; this results in faster defensive kicks, as the positioning around the ball is easier. We implemented a longer kick to support the players in the opponent’s half of the field.
This year, we successfully tested a controlled field color detection, which was necessary due to the bad lighting conditions on the drop-in field. As our localization had improved since RoboCup 2015, we managed to minimize our “leaving the field” penalties.
MRL-SPL has worked to keep the drop-in and normal game strategies as similar as possible. When communicating with other players, there is an inherent need to be robust to different kinds of information, including false, inaccurate, and expired data. Most importantly, receiving and providing the ball position is a requirement. The simple strategy used in normal games, with little fine-tuning, is to prioritize data by the level of confidence the robot has in it, which prevents disturbances when the robot itself has a valid ball model. The modeled data is provided to teammates, and received data is integrated into our probabilistic search algorithm and other places where it is not very sensitive. Secondly, to cooperate with teammates, we integrate their localization data into our post/role assignment algorithm, which is designed according to our strategy of not fully trusting each other’s localization data. Our robot selects the most time-efficient post to take and will follow the ball and strike if it is the nearest to the ball among those who see it. Since posts include defensive and striking positions, the robot can contribute to the team by defending, striking, passing, and receiving passes.
Our robot always acts as an offensive field player that either goes for the ball or waits near it if a teammate is detected near the ball. This detection uses both vision and the transmitted teammate data.
However, teammate data such as ball information or teammate positions is only used if the teammate’s ball model matches ours; in that case, the information is included in our world model. Teammate data is also rejected if the information sent is impossible.
We used our default behavior but ignored all incoming messages from other players. Tests last year showed that ignoring other players leads to better results than trusting their information.
We used our player number to select our strategy, but modified it slightly based on last year’s team strategies (most teams played defensively), so we were more likely to play as an attacker. We always played goalie as player 1, though. Our player was based on our normal player, except that (a) we did not trust our teammates, and (b) we modified all of our search strategies, since our normal strategies rely on coordination with our teammates. We tried to communicate everything in the SPL packet as legitimately as we could.
NTU RoboPAL provides the information defined in the SPL standard message, such as the robot’s pose, the ball position, and the walk-to and shoot-to targets, but we do not suggest a particular strategy to our teammates. We did not use other teammates’ information in our own decision-making process. While the ball was visible, our intention was to play the ball, and we were less likely to avoid other robots, whether teammates or opponents. Otherwise, obstacle avoidance took place and the robot tried to seek out the ball.
According to the player number assigned to the robot in a particular game (1–5), the robot selects a region on the field and operates there, transitioning between states such as “localize self”, “go to own zone”, “wait for ball”, and “attack” or “defend” according to the situation on the field.
Due to some last minute problems, this year, the drop-in strategy for RoboEireann was based on a modified striker rather than an extension of our previous drop-in strategy as originally intended.
We listen to communications from teammates but essentially disregard them due to lack of trust. We communicate all mandatory information; for intentions and suggestions this year, we just set “nothing particular”, since we expect that many teammates will ignore them anyway. Once play starts, we search for the ball using a standard search pattern (which, if the ball is not found, eventually selects random field locations from which to search, within a box 3 m wide and 4.5 m long centered on the middle of the field). If we find the ball, we approach it and attempt to play it. In general, if one of our teammates is closer, we allow obstacle detection to resolve the conflict and back up a few steps. If we get access to the ball, we follow our normal striker behavior, which chooses among the available kick types in a stochastic manner (influenced by distance and orientation to the goal).
The SPQR Team participates in the SPL Drop-in Player Competition with two different behaviors: one is specific to the goalie role, while the other is designed to dynamically assume different roles on the field based on the received intentions of the teammates.
In the Goalie behavior, the robot communicates the intention to become keeper and reaches the goal in order to start playing as goalie.
In the dynamic behavior, two main strategies can be triggered to actively participate in the game tactics: using the teammates’ intentions, the robot chooses whether to adopt a defensive strategy or a more offensive one.
When the robot approaches the ball, it communicates its intention to the other teammates in order to let them coordinate with it.
In the Drop-in Player Competition, our robot selects its role according to the communication with other teammates. First, if more than two teammates intend to be strikers, our drop-in player plays a different role, to avoid most of the robots taking the same role. Second, combining the received information, when our player is the nearest to the ball, it switches to the striker role and announces this to its teammates. While ensuring the ball travels towards the opponent’s half, our robot kicks the ball towards the direction where there are more teammates and fewer opponents. Besides, our drop-in player positions itself to receive a pass and intercepts an opponent’s shot under certain circumstances.
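The two-step role selection above can be sketched as follows. This is an illustrative reading of the rules, not the team’s code; the role names and the nearest-to-ball tie handling are assumptions.

```python
def choose_role(own_dist_to_ball, teammate_info):
    """Select our role from communicated teammate information.

    teammate_info: list of (intends_striker, dist_to_ball) tuples,
                   one per teammate.
    Rule 1: if more than two teammates already intend to be strikers,
            take a different role.
    Rule 2: otherwise, become striker when nearest to the ball.
    """
    strikers = sum(1 for intends, _ in teammate_info if intends)
    if strikers > 2:
        return "supporter"
    if all(own_dist_to_ball <= d for _, d in teammate_info):
        return "striker"
    return "supporter"
```

The announcement of the chosen role to teammates would then go out in the regular SPL message.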
We use the teammates’ intentions and positions to decide how to play. The priority is to play the ball, then to play as goalkeeper, and finally to support the defense or the offense according to the game situation. For these tasks, we use our usual behavior for each role, but prefer dribbling and passing over kicking at the goal.
We also estimate the best role for each teammate using their locations and the location of the ball, and send these estimates using the suggestion message.
Our drop-in strategy consisted of creating a new framework in which we could develop our different modules and integrate the motion controller provided in UNSW Australia’s 2015 release. Our structure was mainly composed of motion, communication, perception, and behavior modules. The perception module included ball recognition (using a trained detector based on a cascade of Haar-like features), a field border detector, a line detector for the goal posts, and sensor handling for both odometry and sonar. The communication module handled packet delivery and the GameController interface. Finally, the behavior was a state machine with a rather conservative approach: first, the player searches for the ball in a square around its position; if the ball is found, it goes for it and attacks; if the sonar detects an obstacle, such as another robot, it always tries to avoid it; and if the robot detects the end of the field, it turns around and looks for the ball on the other side. The robot repeats this process until it leaves the playing state.
The main problem in this year’s competition was ball recognition: the Haar cascade method proved to take a lot of our processing time, resulting in lag in the implemented system. The resulting low frame rate was not sufficient for an actual match.
We tell all our teammates our intentions, including where we are, where we are moving, where we are kicking, and what intention we have. We do not believe what our teammates tell us about where they are, and do not attempt to tell them what to do.
We search the field for the ball, moving closer to the center of the field if we cannot find it. If we perceive another teammate closer to the ball, we remain stationary and let them play the ball. If we cannot see a teammate closer to the ball, we attack the ball. If we are close enough to the goal we shoot; if not, we dribble the ball upfield until we are in a position to shoot. Robots generally struggled to see the ball in 2016, so dribbling makes more sense, as keeping possession is important.
If we are player 1, we play goalie. Here we position to defend the goal and clear the ball upfield once it gets closer than our own penalty spot. We defend the goal either by a side dive or a squat downward.
We left most of our code intact for our drop-in games. We tell our teammates our intentions by sending correct information in all fields of the SPL standard team communication message.
We trusted most information sent to us by the other robots. In order to determine the ball location, we used a scoring system to weight how confident we were that each robot was sending us accurate ball information. If we were confident enough that another robot had the correct location of the ball, we assumed that was actually where the ball was. This allowed us to move towards the ball and be in a position to make plays even when we could not see the ball ourselves.
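A confidence-weighted fusion of teammate ball reports in this spirit might look as follows. This is a sketch under assumptions: the text describes picking a sufficiently trusted robot’s report, while this variant averages all reports above a (hypothetical) confidence threshold; names and the threshold value are illustrative.

```python
def fused_ball_estimate(reports, threshold=0.6):
    """Fuse teammates' ball reports using per-robot confidence scores.

    reports: list of (score, (x, y)) where score in [0, 1] reflects how
    accurate that robot's past information has been.
    Returns a confidence-weighted average position, or None when no
    report clears the trust threshold.
    """
    trusted = [(s, pos) for s, pos in reports if s >= threshold]
    if not trusted:
        return None
    total = sum(s for s, _ in trusted)
    x = sum(s * pos[0] for s, pos in trusted) / total
    y = sum(s * pos[1] for s, pos in trusted) / total
    return (x, y)
```

Returning None when nothing is trusted lets the behavior fall back to its own search rather than chase an unreliable report.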
Since our localization is accurate most of the time this year, we believe that keeping our robot close to the ball benefits the whole team. We forced our robot to be the attacker if the ball was less than a certain distance away, but also used the ultrasonic sensor to avoid fighting over the ball with our teammates. If the ball was far away, we acted as a supporter, adjusting our position according to the ball location and not attempting to play it.
We communicated updated estimates of everything in the SPL message except suggestions for teammate roles. However, we did not trust any communicated information from teammates, so we did not suggest roles for our teammates.
If we were assigned player number 1, we occupied the keeper role. If we were assigned player numbers 2 or 3, we occupied a supporting defender role. As a defender we played defense on our half of the field unless the ball was near in which case we attempted to play the ball. If we were assigned player numbers 4 or 5, we attempted to play the ball. However, we responded to sonar more sensitively than in our main competition games such that we would avoid stealing the ball from teammates.
Our drop-in player attempted to take a midfielder role on the pitch, positioning itself around the center circle and, when the ball came close, trying to pass it to the robots in the front half of the field. Due to our still suboptimal ball detection, we were unfortunately only able to see the ball when it came close to us and thus could not contribute as much as we had hoped. Our robot broadcast all the information it had to its team, but did not yet use much of the information received from its teammates. For the last game, we attempted to take on a striker role to see how it would perform in the offense, but unfortunately our self-localization failed.