We present a new vision for smart objects and the Internet of Things wherein mobile robots interact with wirelessly powered, long-range, ultra-high-frequency radio-frequency identification (UHF RFID) tags outfitted with sensing capabilities. We explore the technology innovations driving this vision by examining recently commercialized sensor tags that could be affixed to or embedded in objects or the environment to yield true embodied intelligence. Using a pair of autonomous mobile robots outfitted with UHF RFID readers, we explore several potential applications in which mobile robots interact with sensor tags to perform tasks such as soil moisture sensing, remote crop monitoring, infrastructure monitoring, water quality monitoring, and remote sensor deployment.
Many robotic tasks rely on the accurate localization of moving objects within a given workspace. Information about these objects' poses and velocities is used for control, motion planning, navigation, interaction with the environment, and verification. Motion capture systems are often used to obtain such state estimates; however, they are costly, limited in workspace size, and unsuitable for outdoor use. We therefore propose a lightweight, easy-to-use visual-inertial Simultaneous Localization and Mapping approach that leverages cost-efficient, paper-printable artificial landmarks, so-called fiducials. Results show that by fusing visual and inertial data, the system provides accurate estimates and is robust against fast motions and changing lighting conditions. Tightly integrating the estimation of the sensor pose, the fiducial poses, and the extrinsics ensures accuracy and map consistency and avoids the need for pre-calibration. By providing an open-source implementation and various datasets, partially with ground-truth information, we enable community members to run, test, modify, and extend the system, either using these datasets or directly on their own robotic setups.
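The visual front end of such a system rests on detecting printed fiducials and recovering their 6-DoF pose relative to the camera. The sketch below shows one plausible realization of that step using OpenCV's ArUco module; it is not the paper's implementation (which additionally fuses IMU data and jointly estimates extrinsics), and the intrinsics and marker size are assumed values.

```python
# Minimal sketch of fiducial detection and pose estimation, the visual
# half of a fiducial-based SLAM front end. Assumes opencv-contrib-python
# with the classic (pre-4.7) ArUco API; intrinsics and marker length
# below are placeholder values.
import cv2
import numpy as np

K = np.array([[600.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # assume an undistorted image
MARKER_LEN = 0.16                     # printed marker side length [m]

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)

def detect_fiducials(frame):
    """Return {marker_id: (rvec, tvec)} for all fiducials in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    poses = {}
    if ids is not None:
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, MARKER_LEN, K, dist)
        for i, marker_id in enumerate(ids.flatten()):
            poses[int(marker_id)] = (rvecs[i], tvecs[i])
    return poses
```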
Microscopic robots could perform tasks with high spatial precision, such as acting on precisely targeted cells in biological tissues. Some tasks may benefit from robots that change shape, such as elongating to improve chemical gradient sensing or contracting to squeeze through narrow channels. This paper evaluates the energy dissipation of shape-changing (i.e., metamorphic) robots whose size is comparable to that of bacteria. Unlike for larger robots, surface forces dominate the dissipation. Theoretical estimates indicate that the power likely to be available to the robots, as determined by previous studies, is sufficient to change shape fairly rapidly even in highly viscous biological fluids. Achieving this performance will require significant improvements in manufacturing and material properties over current micromachines. Furthermore, optimally varying the speed of shape change only slightly reduces energy use compared to a uniform speed, thereby simplifying robot controllers.
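To see why viscous surface forces set the scale here, a back-of-the-envelope estimate helps: approximating the moving surface as a Stokes sphere gives dissipated power P = F·v = 6πμRv². The numbers below are our own illustrative assumptions, not the paper's model.

```python
# Order-of-magnitude estimate (not the paper's analysis) of the power
# dissipated when a bacterium-sized robot changes shape in a viscous
# fluid, modeling the moving surface as a translating Stokes sphere.
import math

mu = 1e-3   # viscosity of water [Pa*s]; biological fluids can be far higher
R = 1e-6    # characteristic robot size, ~1 micron (bacteria scale) [m]
v = 10e-6   # shape-change speed: elongate ~10 microns in 1 s [m/s] (assumed)

P = 6 * math.pi * mu * R * v**2   # Stokes drag force times speed
print(f"dissipated power ~ {P:.2e} W")   # ~1.9e-18 W for these numbers
```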
This paper investigates the visual servoing problem for robotic systems with uncertain kinematic, dynamic, and camera parameters. We first present the passivity properties associated with the overall kinematics of the system, and then propose two passivity-based adaptive control schemes to solve the visual tracking problem. One scheme employs adaptive inverse-Jacobian-like feedback; the other employs adaptive transpose-Jacobian feedback. Using a Lyapunov analysis, it is shown that under either of the proposed control schemes, the image-space tracking errors converge to zero without relying on the assumption of the invertibility of the estimated depth. Numerical simulations are performed to show the tracking performance of the proposed adaptive controllers.
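For readers unfamiliar with the second feedback structure, the sketch below shows a plain (non-adaptive) transpose-Jacobian law; the gains, names, and PD form are our own assumptions, and the paper's actual contribution, the adaptive update of the kinematic, dynamic, and camera parameters, is omitted.

```python
# Minimal sketch of transpose-Jacobian image-space feedback. Because the
# Jacobian estimate is transposed rather than inverted, a poor depth
# estimate cannot produce a singular inverse -- the property the paper's
# convergence result does not need to assume away.
import numpy as np

def transpose_jacobian_torque(J_hat, e_img, q_dot, Kp, Kd):
    """Joint torques from the image-space error e_img = s - s_desired.

    J_hat : estimated image Jacobian mapping joint velocities to
            image-feature velocities (assumed given here).
    """
    return -J_hat.T @ (Kp @ e_img) - Kd @ q_dot  # PD-like; gravity comp. omitted
```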
One major challenge in implementing formation control stems from the packet losses that occur on shared communication channels. When packets are lost, coordination information among agents is lost as well. Moreover, formation control applications increasingly use wireless channels, and it has been found in practice that packet losses are more pronounced in wireless channels than in their wired counterparts. In our analysis, we first show that packet loss may result in a loss of rigidity, which in turn can cause the entire formation to fail. We then present an estimation-based formation control algorithm that is robust to packet loss among agents. The proposed estimation algorithm employs a minimum spanning tree to compute estimates of the node variables (coordination variables), which reduces the communication overhead required for information exchange. Using simulation, we then verify which data must be transmitted for optimal estimation of these variables in the event of a packet loss. Finally, the effectiveness of the proposed algorithm is illustrated with a suitable simulation example.
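The communication saving comes from relaying estimates along a spanning tree, n-1 edges, rather than over every link. The sketch below shows that tree-construction step under assumed data structures (distance-weighted links, a NetworkX graph); the paper's estimator itself is not reproduced.

```python
# Sketch of restricting coordination-variable exchange to a minimum
# spanning tree of the communication graph, bounding per-round messages
# at n-1 edges. Data structures here are assumptions for illustration.
import networkx as nx

def mst_exchange_edges(positions, comm_edges):
    """positions: {agent: (x, y)}; comm_edges: iterable of (i, j) links.

    Returns the edges of a distance-weighted minimum spanning tree;
    estimates of lost coordination variables are relayed only along them.
    """
    G = nx.Graph()
    for i, j in comm_edges:
        xi, xj = positions[i], positions[j]
        w = ((xi[0] - xj[0])**2 + (xi[1] - xj[1])**2) ** 0.5
        G.add_edge(i, j, weight=w)
    return list(nx.minimum_spanning_tree(G).edges)
```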
The visual cue of optical flow plays a major role in the navigation of flying insects and is increasingly studied for use by small flying robots as well. A major problem is that successful optical flow control seems to require distance estimates, whereas optical flow is known to provide only the ratio of velocity to distance. In this article, a novel, stability-based strategy is proposed for estimating distances with monocular optical flow and knowledge of the control inputs (efference copies). It is shown analytically that, given a fixed control gain, the stability of a constant-divergence control loop depends only on the distance to the approached surface. At close distances, the control loop first starts to exhibit self-induced oscillations, eventually leading to instability. The proposed stability-based strategy for estimating distances has two major attractive characteristics. First, self-induced oscillations are easy for the robot to detect and are hardly influenced by wind. Second, the distance can be estimated during a zero-divergence maneuver, i.e., around hover. The stability-based strategy is implemented and tested both in simulation and on a Parrot AR.Drone 2.0. It is shown that it can be used to: (1) trigger a final approach response during a constant-divergence landing with fixed gain, (2) estimate the distance in hover, and (3) estimate distances during an entire landing if the robot uses adaptive gain control to stay continuously on the edge of oscillation.
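A toy simulation makes the distance-dependence of the fixed-gain loop tangible: with divergence D = v/z and a small sensing delay, the effective loop gain scales with 1/z, so the same gain that is stable high up oscillates near the surface. Everything below (gain, delay, update rate, initial state) is an assumption for illustration, not the paper's model.

```python
# Toy fixed-gain constant-divergence landing loop with a sensing delay.
# The onset of self-induced oscillations at low height is the cue the
# stability-based strategy exploits to estimate distance.
import numpy as np

dt, delay = 0.02, 3            # 50 Hz loop, 3-sample sensing delay (assumed)
K, D_ref = 5.0, -0.3           # fixed gain; negative divergence = descent
z, v = 10.0, -3.0              # height [m] and vertical speed [m/s]
meas = [v / z] * delay         # delayed divergence measurements

for _ in range(3000):
    D = meas.pop(0)            # delayed measurement of v/z
    v += K * (D_ref - D) * dt  # commanded acceleration, integrated
    z += v * dt
    if z <= 0.05:
        break
    meas.append(v / z)
    err = D_ref - meas[-1]
    # an oscillation detector would threshold the variance (or the rate
    # of sign changes) of err over a short sliding window
print(f"terminated at z = {z:.2f} m")
```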
Workspace and joint-space analyses are essential steps in describing the task and designing the control loop of a robot, respectively. This paper presents a descriptive analysis of a family of delta-like parallel robots using algebraic tools, yielding an estimate of the complexity of representing the singularities in the workspace and the joint space. A Gröbner-based elimination is used to compute the singularities of the manipulator, and a cylindrical algebraic decomposition algorithm is used to study the workspace and the joint space. From these algebraic objects, we propose certified three-dimensional plots describing the shape of the workspace and of the joint space, which will help engineers and researchers decide which configuration of the manipulator is best suited for a given task. In addition, the parameters associated with the complexity of the serial and parallel singularities are tabulated, which further aids the selection among the different configurations of the manipulator by comparing the complexity of their singularity equations.
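Gröbner-based elimination can be seen on a much smaller example than the delta-like robots treated in the paper: for a planar 2R serial arm, projecting the joint-space singularity condition sin(q2) = 0 into the workspace recovers the classic boundary circles. The SymPy sketch below is our own toy illustration of the technique, not the paper's computation.

```python
# Toy Groebner-based elimination for singularity analysis of a planar 2R
# arm with link lengths l1, l2. A lex order with the joint variables
# first eliminates them, leaving the workspace singularity locus.
import sympy as sp

x, y, c2, s2, l1, l2 = sp.symbols('x y c2 s2 l1 l2')
system = [
    x**2 + y**2 - l1**2 - l2**2 - 2*l1*l2*c2,  # forward kinematics: squared reach
    s2**2 + c2**2 - 1,                          # trig identity
    s2,                                         # serial singularity: sin(q2) = 0
]
G = sp.groebner(system, s2, c2, x, y, order='lex')
ws_singularities = [g for g in G.exprs if not g.has(s2, c2)]
print(ws_singularities)   # factors into x**2 + y**2 = (l1 +/- l2)**2
```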
We consider the problem of organizing a scattered group of $n$ robots in two-dimensional space, with geometric maximum distance $D$ between robots. The communication graph of the swarm is connected, but there is no central authority for organizing it. We want to arrange the robots into a sorted and equally spaced array between the robots with the lowest and highest labels, while maintaining a connected communication network. In this paper, we describe a distributed method to accomplish these goals without central control, while keeping time, travel distance, and communication cost to a minimum. We proceed in a number of stages (leader election, initial path construction, subtree contraction, geometric straightening, and distributed sorting), none of which requires a central authority, yet which together achieve the best possible parallelization. The overall arraying is performed in $O(n)$ time, with $O(n^2)$ individual messages and $O(nD)$ travel distance. The implementations of sorting and navigation use fixed-size communication messages and are a practical solution for large populations of low-cost robots.
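The first stage, leader election, admits a simple fixed-message-size realization: each robot repeatedly forwards the smallest label it has heard, and after a number of rounds at least the network diameter, all robots agree. The sketch below is one plausible synchronous message-passing version, with data structures assumed for illustration; the paper's exact protocol may differ.

```python
# Sketch of min-label flooding for leader election in a connected swarm.
# Messages carry a single label, so they have fixed size.
def elect_leader(labels, neighbors, rounds):
    """labels: {robot: label}; neighbors: {robot: [robot, ...]};
    rounds: any value >= the diameter of the communication graph."""
    best = dict(labels)                      # smallest label heard so far
    for _ in range(rounds):
        incoming = {r: [best[m] for m in neighbors[r]] for r in best}
        for r in best:
            best[r] = min([best[r]] + incoming[r])
    return best                              # all entries equal the leader's label
```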
We present a number of powerful local mechanisms for maintaining a dynamic swarm of robots with limited capabilities and information, in the presence of external forces and permanent node failures. We propose a set of local continuous algorithms that together produce a generalization of a Euclidean Steiner tree. At any stage, the resulting overall shape achieves a good compromise between local thickness, global connectivity, and flexibility to further continuous motion of the terminals. The resulting swarm behavior scales well, is robust against node failures, and performs close to the best known approximation bound for a corresponding centralized static optimization problem.
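As one concrete local building block in the same spirit (our own illustration, not taken from the paper), a relay robot can move toward the Fermat/Steiner point of its neighbors using the classic Weiszfeld iteration, which minimizes the summed distance to the neighbor positions from local information only.

```python
# Weiszfeld iteration: a local rule that steers a relay robot toward the
# geometric median (Fermat point) of its neighbors' positions.
import numpy as np

def weiszfeld_step(p, neighbors, eps=1e-9):
    """One step toward the geometric median of the neighbor positions."""
    pts = np.asarray(neighbors, dtype=float)
    d = np.linalg.norm(pts - p, axis=1)
    w = 1.0 / np.maximum(d, eps)             # inverse-distance weights
    return (pts * w[:, None]).sum(axis=0) / w.sum()

p = np.array([0.2, 0.1])
terminals = [(1.0, 0.0), (-0.5, 0.9), (-0.5, -0.9)]
for _ in range(50):
    p = weiszfeld_step(p, terminals)
print(p)   # converges to the Fermat point of the three terminals
```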
We address the problem of tracking the 6-DoF pose of an object while it is being manipulated by a human or a robot. We use a dynamic Bayesian network to perform inference and compute a posterior distribution over the current object pose. Depending on whether a robot or a human manipulates the object, we employ a process model with or without knowledge of the control inputs. Observations are obtained from a range camera. In contrast to previous object-tracking methods, we explicitly model self-occlusions and occlusions from the environment, e.g., by the human or robotic hand. This leads to a strongly non-linear observation model and additional dependencies in the Bayesian network. We employ a Rao-Blackwellised particle filter to compute an estimate of the object pose at every time step. In a set of experiments, we demonstrate the ability of our method to accurately and robustly track the object pose in real time while it is being manipulated by a human or a robot.
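The skeleton below shows the generic predict/weight/resample loop such a tracker runs each frame. It is a simplification under our own assumptions: the paper Rao-Blackwellises part of the state and uses an occlusion-aware range-image likelihood, both of which are reduced here to caller-supplied placeholder functions.

```python
# Skeleton of one particle-filter update for 6-DoF pose tracking.
# `propagate` and `likelihood` stand in for the paper's process model
# (with or without control inputs) and occlusion-aware observation model.
import numpy as np

def pf_step(particles, weights, control, depth_image,
            propagate, likelihood, rng=np.random.default_rng()):
    """particles: (N, 6) pose particles; weights: (N,), normalized."""
    # 1. predict: sample from the process model
    particles = propagate(particles, control)
    # 2. weight: evaluate the observation likelihood on the range image
    weights = weights * likelihood(particles, depth_image)
    weights /= weights.sum()
    # 3. resample when the effective sample size drops too low
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```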