I am currently a postdoc in the Autonomous Agents and Distributed Intelligence (AADI) Lab and the Robotic Decision Making Lab (RDML) at Oregon State University. Below is a list of my current and past research projects.
UAV traffic management in urban airspaces can be formulated as a problem of routing autonomously guided robots, using cost-space manipulation to induce safe trajectories in the workspace. The UAVs do not explicitly coordinate with one another; instead, each executes its own internal cost-based planner to travel between locations. We are developing a high-level UAV traffic management (UTM) system that dynamically adapts the cost space to reduce the number of conflict incidents in the airspace without requiring explicit knowledge of each UAV's internal planner. Our decentralized and distributed system of high-level traffic controllers learns appropriate costing strategies via a neuro-evolutionary algorithm. The learned policies reduced the total number of conflict incidents in the airspace while maintaining throughput. Current research investigates methods to account for traffic heterogeneity in the system.
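As a rough illustration of the neuro-evolutionary idea, the sketch below evolves a population of simple costing policies (here, per-sector weights rather than neural networks) against a toy fitness function that penalises both conflicts and lost throughput. The simulator, fitness function, and policy parameterisation are all illustrative assumptions, not the actual UTM system.

```python
import random

# Toy stand-in for the UTM simulator: a policy is a vector of added-cost
# weights, one per traffic sector, each in [0, 1]. Higher weights deter
# traffic (fewer conflicts) but cost throughput. Purely hypothetical.
def simulate_conflicts(weights, densities):
    """Fitness of a costing policy: penalise residual conflict pressure
    (density not suppressed by added cost) plus a throughput penalty
    proportional to the total added cost. Higher is better."""
    conflicts = sum(max(0.0, d - w * d) for w, d in zip(weights, densities))
    throughput_loss = 0.1 * sum(weights)
    return -(conflicts + throughput_loss)

def evolve(densities, pop_size=20, generations=50, sigma=0.1, seed=0):
    """Simple evolutionary loop: keep the fitter half of the population
    as parents, produce children by Gaussian mutation, repeat."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in densities] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: simulate_conflicts(w, densities), reverse=True)
        parents = pop[: pop_size // 2]
        children = [
            [min(1.0, max(0.0, w + rng.gauss(0.0, sigma))) for w in p]
            for p in parents
        ]
        pop = parents + children
    return max(pop, key=lambda w: simulate_conflicts(w, densities))
```

In the actual system each controller would evolve a neural network policy and fitness would come from airspace simulation, but the select-mutate-evaluate loop has the same shape.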
We are investigating novel approaches to searching a graph with probabilistic edge costs by incorporating available uncertainty information into the graph search. Our proposed risk-aware graph search (RAGS) method consists of two major steps: first, an initial search across the graph finds the set of non-dominated paths; second, risk-aware planning is performed during path execution as information about the true neighbouring edge costs becomes available. Initial results in a graph-search domain demonstrated superior performance compared with A*, D*, and a greedy approach.
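The first step above can be sketched as follows, assuming each edge carries a (mean, variance) cost estimate and that a path dominates another when both its total mean and total variance are no worse, with at least one strictly better. The graph encoding and this exact dominance criterion are illustrative assumptions, and the exhaustive enumeration is a sketch rather than the efficient search used in RAGS.

```python
# Hypothetical sketch of non-dominated path enumeration on a graph whose
# edges carry (mean, variance) cost estimates.
def dominates(a, b):
    """(mean, var) pair a dominates b: no worse in both, strictly better
    in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def non_dominated_paths(graph, start, goal):
    """Exhaustively enumerate simple start->goal paths, then keep those
    whose (mean, variance) totals are not dominated by any other path.
    graph: {node: {neighbour: (mean, variance)}}."""
    frontier = [(0.0, 0.0, start, (start,))]
    complete = []
    while frontier:
        mean, var, node, path = frontier.pop()
        if node == goal:
            complete.append((mean, var, path))
            continue
        for nbr, (m, v) in graph.get(node, {}).items():
            if nbr not in path:  # avoid cycles
                frontier.append((mean + m, var + v, nbr, path + (nbr,)))
    return [p for p in complete
            if not any(dominates((q[0], q[1]), (p[0], p[1]))
                       for q in complete)]
```

For example, a low-mean/high-variance path and a high-mean/low-variance path are both retained, and the second step would then choose between them online as the true neighbouring edge costs are revealed.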
I received my Ph.D. in 2014 from the Australian Centre for Field Robotics at the University of Sydney. My research focused on developing information-based exploration strategies that can be applied within reinforcement learning frameworks to characterise the exploration-exploitation trade-off in resource-constrained learning missions. The application of interest was an unpowered aerial glider learning to soar in a wind energy field.
The goal of this project was to gather intra-swarm locust motion data for biologists at the University of Sydney to study the effects of inter-locust interactions on the overall swarm motion. The Australian Centre for Field Robotics developed a system to collect this data, consisting of micro retro-reflectors (attached to the insects) and a UAV equipped with a strobe beacon that autonomously flies loops over the swarm to track and monitor insect locations in real time. A proof of concept was demonstrated in 2011.