Focus on research: Dr John McDonald, Lero
Dr John McDonald is a researcher attached to Lero, the Irish software research centre, based at the Department of Computer Science at Maynooth University, where he directs the Computer Vision Research Group. Here he talks about autonomous systems, the challenges of real-time data processing and the importance of ethics in research.
What was the attraction of autonomous systems for you?
My interest in the area started in the final year of my undergraduate degree, when I sat modules in computer vision and machine learning. What attracted me to the field was that creating versatile autonomous systems would require giving them the ability to perceive and understand the world around them. Solving this perception problem was clearly a very interesting research challenge, but it was also compelling to consider how such capabilities would result in completely new types of devices and systems.
My own research has involved work on a diverse set of systems, including humanoid robots, autonomous vehicles, virtual and augmented reality, drone technology, and a wide variety of other platforms. What is exciting at this point is that we are finally seeing the realisation of more than 50 years of research in the area, leading to a wave of disruptive technologies, with progress moving at quite a frenetic pace.
One of the main problems faced by autonomous cars is finding a way to gather and respond to information in real time. How do you break down this problem in your research?
This is a challenging problem for robotics in general, where any given platform can consist of a myriad of sensors, each of which produces a data stream that may be consumed by multiple sub-modules within the robot’s software system.
Over the years, a dominant paradigm to emerge for dealing with this complexity has been architectures that sub-divide the system into a set of loosely coupled, independent processes, all tied together via a middleware communication sub-system.
This approach facilitates the development of scalable robotics systems, with intrinsic support for task-level parallelism. Many robot software frameworks have been developed to support this approach, two of which we make heavy use of within our lab. The first of these is Lightweight Communications and Marshalling (LCM), which was developed by members of the MIT DARPA Urban Challenge team to facilitate distributed computation across the multi-CPU cluster used in their vehicle. The second is the Robot Operating System (ROS), which is now widely supported across many commercial and research-grade robot platforms.
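To make the loosely coupled process model concrete, the sketch below shows what a single sensor node might look like. It is a minimal illustration assuming ROS 1 and its rospy Python client library; the topic name and publishing rate are purely illustrative rather than taken from the lab's own systems.

```python
#!/usr/bin/env python
# Minimal sketch of one loosely coupled sensor process, assuming ROS 1 and the
# rospy client library. Topic name and rate are illustrative.
import rospy
from sensor_msgs.msg import LaserScan


def publish_scans():
    # Each sensor driver runs as its own node (process) and publishes onto a
    # named topic; downstream modules subscribe without knowing the publisher.
    rospy.init_node("laser_driver")
    pub = rospy.Publisher("/scan", LaserScan, queue_size=10)
    rate = rospy.Rate(40)  # a typical 2D lidar rate, purely illustrative

    while not rospy.is_shutdown():
        msg = LaserScan()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "laser"
        # ... fill msg.ranges from the actual sensor driver here ...
        pub.publish(msg)
        rate.sleep()


if __name__ == "__main__":
    try:
        publish_scans()
    except rospy.ROSInterruptException:
        pass
```

The middleware (here, ROS topics) hides the transport details, so the same node can feed a mapping module, a logger, and a visualiser without any of them being aware of one another.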
Moving from the systems level to the individual sensor level, of all the sensors utilised on a modern robotic platform, camera sensors generate by far the most data and pose the greatest challenge to the system in terms of real-time performance.
Over the last decade a particularly significant development in terms of real-time computer vision has been the introduction of general purpose graphics processing unit (GPGPU) programming. What makes vision processing so amenable to this technology is that many algorithms can be parallelised at the pixel/neighbourhood level, thereby making it possible to directly take advantage of the thousands of cores on a modern GPU.
Paired with general-purpose GPU programming languages such as CUDA and OpenCL, this approach is now commonplace within robotics perception pipelines and is central to the real-time capabilities of autonomous vehicles.
For example, this approach is key to our work on visual mapping, where we incorporate measurements from every pixel in every image frame at 30 Hz to build highly detailed 3D models of an environment from a handheld camera.
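As a rough illustration of this pixel-level parallelism, the sketch below converts a colour frame to greyscale with one GPU thread per pixel. It is written in Python using Numba's CUDA support rather than raw CUDA C or OpenCL, purely to keep the example short; the thread-per-pixel structure it shows is the same idea.

```python
# Pixel-level GPU parallelism sketch: one lightweight GPU thread per pixel.
# Requires a CUDA-capable GPU plus the numba and numpy packages.
import numpy as np
from numba import cuda


@cuda.jit
def rgb_to_grey(rgb, grey):
    # Each GPU thread handles exactly one pixel (row, col).
    row, col = cuda.grid(2)
    if row < grey.shape[0] and col < grey.shape[1]:
        r, g, b = rgb[row, col, 0], rgb[row, col, 1], rgb[row, col, 2]
        grey[row, col] = 0.299 * r + 0.587 * g + 0.114 * b


def convert(frame):
    # Copy the frame to the GPU, launch enough 16x16 thread blocks to cover
    # every pixel, then copy the result back to the host.
    d_rgb = cuda.to_device(frame.astype(np.float32))
    d_grey = cuda.device_array(frame.shape[:2], dtype=np.float32)
    threads = (16, 16)
    blocks = ((frame.shape[0] + 15) // 16, (frame.shape[1] + 15) // 16)
    rgb_to_grey[blocks, threads](d_rgb, d_grey)
    return d_grey.copy_to_host()
```

Because every pixel is independent, the thousands of cores on a modern GPU can all be kept busy, which is what makes this class of per-pixel operation so well suited to real-time vision pipelines.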
Trials of autonomous cars have met with uneven results in the US. What challenges do you think such trials will face in Ireland over and above the American experience?
We have seen amazing progress in autonomous vehicles over the last decade. This is clear when we compare the capability of the vehicles in the first DARPA challenge to those being trialled in the US today. With this said it is important not to underestimate the remaining challenges, and associated timeframes, in achieving Level 5 autonomy.
As such it is important to ensure that Ireland is ready to fully support research and development in this space from a legal and policy perspective. For example, bodies such as the Automobile Association have been voicing concerns over the prospect of Ireland falling behind due to the lack of a clear plan on issues such as legislating for autonomous vehicles to be used on Irish roads. With a recent report from Arup predicting a potential for 100,000 jobs in the area by 2030, it is important for a proactive approach to be taken on this issue.
Aside from policy, Ireland does present some significant technical challenges for such trials due to the variation in the quality and consistency of the road infrastructure, and the environmental conditions posed by the Irish climate. This is not necessarily a bad thing, since addressing these challenges is an important part of the development of the technology, and it is in fact part of the ongoing autonomous vehicles research agenda within Lero.
With this said, it will still be some time before we see a vehicle autonomously drive the Wild Atlantic Way.
Moving away from cars, what other fields do you see benefiting from more advanced spatial awareness and how will they change?
Self-driving cars must be able to function reliably in the complex and unstructured real-world over extended periods of time. This requires a level of spatial awareness that facilitates the interplay between perception, cognition and control.
Key to the development of such spatial awareness is the ability to construct a map of the vehicle's environment using data captured from on-board sensors. This problem, known as Simultaneous Localisation and Mapping (SLAM), has been one of the most intensely researched problems in robot perception over the last 30 years, and it is one of the main research focuses of my group.
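The sketch below is a deliberately tiny, one-dimensional illustration of the idea behind graph-based SLAM, not the methods used in Dr McDonald's group: a handful of robot poses and a single landmark are estimated jointly from noisy odometry and range observations by solving a small least-squares problem, with the repeated landmark observation playing the role of a loop closure. All measurement values are invented for the example.

```python
# Toy 1D SLAM: jointly estimate poses x0..x3 and a landmark position l from
# noisy relative measurements, using ordinary linear least squares.
import numpy as np

odometry = [1.1, 0.9, 1.05]     # noisy steps between consecutive poses (true: 1.0)
obs = {0: 5.03, 3: 1.95}        # landmark range measured from poses 0 and 3

n_pose = len(odometry) + 1      # unknowns: x0..x3 plus the landmark l
n = n_pose + 1

rows, rhs = [], []

# Prior anchoring the first pose at the origin.
row = [0.0] * n; row[0] = 1.0
rows.append(row); rhs.append(0.0)

# Odometry constraints: x_{i+1} - x_i = z_i.
for i, z in enumerate(odometry):
    row = [0.0] * n
    row[i], row[i + 1] = -1.0, 1.0
    rows.append(row); rhs.append(z)

# Landmark observations: l - x_i = r_i. Seeing the same landmark from two
# different poses ties the trajectory together and corrects odometry drift.
for i, r in obs.items():
    row = [0.0] * n
    row[i], row[-1] = -1.0, 1.0
    rows.append(row); rhs.append(r)

A, b = np.array(rows), np.array(rhs)
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("poses:", est[:n_pose], "landmark:", est[-1])
```

Real SLAM systems work in 3D with nonlinear constraints and many thousands of variables, but the underlying principle is the same: estimate the trajectory and the map together so that all the measurements are explained as consistently as possible.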
The SLAM problem appears in many guises across different fields, with solutions not just central to self-driving cars, but also key to developments across the spectrum of intelligent mobile devices, including household robots, augmented reality (AR) headsets, and drones.
For example, in order for an AR headset to render realistic immersive content it must understand the 3D structure of the world and its movement within it. Further afield it is also being applied in areas such as movie special FX and digital archaeology.
Robust solutions to the SLAM problem are now available; however, these solutions primarily focus on building geometric representations of the device's environment. Because such maps capture the shape of the world but not its meaning, SLAM researchers are now considering semantic mapping, where AI is utilised to identify the locations, poses, and categories of objects within the environment. Such maps facilitate spatial reasoning at a much higher level than is possible with their lower-level geometric counterparts, which will in turn result in a step change in capabilities across a wide range of intelligent devices.
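As a hypothetical illustration of the difference this makes, the sketch below shows a minimal semantic map entry: each object carries a category label and a pose, which allows higher-level spatial queries, such as finding all chairs near the robot, that a purely geometric map cannot answer directly. The class, query function, and example values are invented for illustration only.

```python
# Minimal, hypothetical semantic map sketch: objects with categories and poses
# support category-level spatial queries over the map.
from dataclasses import dataclass
import math


@dataclass
class SemanticObject:
    category: str   # e.g. "chair", as produced by an object detector
    x: float        # position in the map frame (metres)
    y: float
    yaw: float      # orientation in the map frame (radians)


def objects_near(semantic_map, category, rx, ry, radius):
    """Return all objects of the given category within `radius` of (rx, ry)."""
    return [o for o in semantic_map
            if o.category == category
            and math.hypot(o.x - rx, o.y - ry) <= radius]


# Example: two mapped objects, queried from the robot's current position.
demo_map = [SemanticObject("chair", 1.2, 0.4, 0.0),
            SemanticObject("door", 4.0, 2.0, 1.57)]
print(objects_near(demo_map, "chair", 0.0, 0.0, 2.0))
```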
Machine vision has also found a place in security and social media through facial recognition. In the age of GDPR, how should researchers approach the problem of users giving over potentially identifiable data/scans to third parties?
This is a very important question and something that researchers across computer science, and particularly in AI and related areas, should be conscious of. In computer vision, as algorithms become more sophisticated, the potential for mining information from image- and video-based data is increasing.
This brings with it an added responsibility for researchers to clearly specify and control the use of the data that they collect. Universities and publicly funded research organisations are already well equipped in this regard, with rigorous ethical review committees and procedures in place to ensure that their researchers adhere to strict ethical research protocols.
However, when we consider cases such as the Cambridge Analytica incident, I believe the solution has to go beyond such ethical structures and must consider how computer scientists are trained to understand their personal responsibility when it comes to what they create and do with technology. This is a point of major concern at the moment with many leading universities reconsidering the level of ethics education that is provided on their computer science (CS) and engineering programmes.
At present, such ethics education is often delivered as part of a broader module and does not achieve the necessary impact with students. To quote a recent New York Times article on the topic, what is clear is that, given the potential for misuse of modern technology, CS graduates must be trained to have a “more medicine-like morality”.