This project will provide comprehensive driving data and analysis tools for the design and testing of an autonomous vehicle (AV) software stack. The data collection platform is an urban vehicle retrofitted with the latest AV sensors. The data will be obtained from local NSW locations such as Sydney, Orange, and the Future Mobility Testing Facility at Cudal.
The resulting dataset will include raw data from multiple sensor modalities, such as cameras, lidar and radar, that provide information on the surroundings. It will also provide data that can be used to improve and validate existing models.
Post-processed information from the sensors will be computed in the form of specialised maps of the driving environment. This data will be provided in open formats and with a comprehensive set of software tools to facilitate its use for research and development (R&D) purposes.
Transport for NSW (TfNSW) is collaborating with the University of Sydney’s Australian Centre for Field Robotics on a three-year automated vehicle (AV) project to support local research and development.
This project aims to provide data and essential tools for developing, testing and adapting autonomous vehicle technologies to the Australian environment, and to demonstrate these capabilities in defined scenarios. Furthermore, it intends to disseminate the information using open data protocols.
The first step toward introducing AVs onto Australian roads is data collection. This process requires vehicles to be retrofitted with perception technology similar to the type used by autonomous vehicles. These vehicles need to be driven around towns and cities to collect data in order to support the evaluation, development, validation and optimisation of algorithms.
The collected raw and post-processed data can then be used to analyse the behaviour of software or system components from the AV software stack, e.g. dedicated electronic control units.
Different metrics can be used to identify relevant driving scenarios; which metrics can be calculated depends on the available data inputs and the systems to be tested.
Once the metrics are available and ready to be inspected, they are analysed to evaluate performance and identify interesting and relevant situations (edge cases) in the data, for example, identifying where the system is not working correctly, or pinpointing events where the vehicle encountered issues that prevented safe operation.
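As a minimal sketch of this kind of metric-based edge-case mining, the toy example below flags moments in a driving log where a time-to-collision (TTC) metric drops below a threshold. The log schema (range to the lead object and closing speed per timestamp) is an illustrative assumption, not the project's actual data format.

```python
# Hedged sketch: flag potential edge cases in logged driving data using a
# simple time-to-collision (TTC) metric. The log format below is a toy
# assumption, not the project's actual schema.

def time_to_collision(range_m: float, closing_speed_ms: float) -> float:
    """TTC = range / closing speed; infinite when the gap is not closing."""
    if closing_speed_ms <= 0.0:
        return float("inf")
    return range_m / closing_speed_ms

def flag_edge_cases(log, ttc_threshold_s=2.0):
    """Return the timestamps where TTC drops below the threshold."""
    return [
        sample["t"]
        for sample in log
        if time_to_collision(sample["range_m"], sample["closing_ms"]) < ttc_threshold_s
    ]

# Toy log: the vehicle closes rapidly on an object around t=2.
log = [
    {"t": 0, "range_m": 40.0, "closing_ms": 2.0},   # TTC = 20 s
    {"t": 1, "range_m": 30.0, "closing_ms": 10.0},  # TTC = 3 s
    {"t": 2, "range_m": 12.0, "closing_ms": 10.0},  # TTC = 1.2 s -> flagged
    {"t": 3, "range_m": 15.0, "closing_ms": -1.0},  # opening gap, TTC = inf
]
print(flag_edge_cases(log))  # -> [2]
```

A real pipeline would compute many such metrics over the raw and post-processed data and cross-reference the flagged timestamps against the recorded sensor streams for review.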
Research into vision and lidar-based algorithms, particularly using deep learning techniques, has attracted significant attention within the intelligent transportation systems community in recent years. It is challenging, however, to translate this research into real-world applications due to the crucial differences between theoretical research outcomes and practical real-world scenarios. For many vehicle-related applications such as autonomous driving, this has delayed the production of viable products due to safety concerns.
Deep learning algorithms that have been trained and tested on existing publicly available datasets (KITTI, Oxford, Waymo) have shown impressive results for a variety of vision and lidar-based tasks. However, these results do not always translate under real-world conditions. The most common problems appear to be the limited generalisation of models and algorithms’ poor robustness in different environments. A model trained using data from a specific domain may not transfer to a different environment that varies significantly from the training data. Including locally annotated images in the training, validation and testing data improves a model’s performance in a new environment; this process is a form of transfer learning.
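The transfer-learning idea described above can be sketched in miniature: keep a pretrained feature extractor fixed and retrain only a small classifier head on locally annotated samples. Everything in this toy example, including the stand-in feature extractor and the "local" data, is an illustrative assumption rather than the project's actual models or imagery.

```python
# Hedged sketch of transfer learning: freeze a pretrained feature
# extractor and retrain only the classifier head on local annotations.

def pretrained_features(pixel_stats):
    # Stand-in for a frozen deep network: maps raw input statistics
    # (mean brightness, spread) to a fixed-length feature vector.
    mean, spread = pixel_stats
    return [mean, spread, mean * spread, 1.0]  # last entry acts as a bias

def train_head(samples, labels, epochs=50, lr=0.1):
    """Perceptron-style training of the classifier head only."""
    w = [0.0] * 4
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)
            pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0
            err = y - pred
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    f = pretrained_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0

# Locally annotated toy data: class 1 = "bright" scenes, class 0 = "dark".
local_samples = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.1), (0.1, 0.3)]
local_labels = [1, 1, 0, 0]
w = train_head(local_samples, local_labels)
print([predict(w, x) for x in local_samples])  # -> [1, 1, 0, 0]
```

In practice the frozen extractor would be a deep network pretrained on a large public dataset, and the retrained head would adapt it to the local (Australian) appearance of roads, signage and vegetation.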
Datasets are essential for validating algorithms and demonstrating a certain level of robustness and generalisability. Given the unpredictable nature and variety of real-world scenarios, testing the long-term reliability of newly developed algorithms can be extremely challenging. To validate algorithms or models, it is necessary to test against a large variety of scenarios that challenge each sensor modality and provide edge cases for classification techniques.
The provision of comprehensive driving data and software analysis tools for the design and testing of Intelligent Transportation Systems (ITS) applications is considered a vital component. This project will produce a dataset that includes raw vehicle sensor data and post-processed data from multiple sensor modalities, including cameras, lidar and radar. It will also provide annotated ground truth for image segmentation, which can be used to improve and validate existing models and improve generalisation.
It will also look at augmenting the data by generating synthetic images with different appearances. Post-processed information from the sensors will be computed in the form of maps of the driving environment. This data will be provided with a comprehensive set of software tools to facilitate its use for R&D purposes.
The Future Mobility Testing Centre is a TfNSW facility that is currently used for testing advanced vehicle safety technologies. The testing site has a 1.5 km runway and significant potential to be extended to demonstrate various complex urban scenarios. This facility is very close to the town of Cudal and is approximately 30 km from Orange. This site has the objective of facilitating the safe deployment of autonomous vehicles in Australia as part of the TfNSW vision for the future of mobility.
Furthermore, TfNSW plans to establish this area as a preferred development and demonstration site for R&D by providing the industry and academia with state-of-the-art infrastructure and comprehensive data to test the different components of Advanced Driver Assistance Systems and other autonomous mobility technologies.
This project will produce reports with analysis of results, data, algorithms, guidelines and recommendations, addressing the following topics:
- A platform designed for data collection to support the development and testing of autonomous vehicles. The vehicle will be equipped with specialised sensors and computational resources for data processing and logging.
- An open-format dataset (first release) containing standardised AV sensor information collected in diverse locations in NSW (the University of Sydney, the TfNSW testing site at Cudal, and the towns of Cudal and Orange). The dataset includes development and visualisation tools to facilitate its access and use.
- The dataset’s second release will include labelled images for semantic and instance segmentation under different environmental appearances, as well as feature-based and semantic localisation maps for navigation purposes.
The project will also demonstrate how a localisation map built with the dataset information can be consumed by an automated vehicle to improve its localisation with respect to the environment.
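As a rough illustration of how a vehicle might consume such a localisation map, the sketch below matches currently observed landmarks against surveyed map positions and uses the average mismatch to correct a 2D pose estimate. A real pipeline would use scan matching or a probabilistic filter over the full map; the known landmark correspondences and toy coordinates here are simplifying assumptions.

```python
# Hedged sketch: correct a 2D pose estimate by comparing live landmark
# observations against a prebuilt localisation map. Landmark IDs and
# correspondences are assumed known, which real systems must estimate.

def correct_pose(pose_xy, observations, landmark_map):
    """pose_xy: current (x, y) estimate.
    observations: {landmark_id: (x, y) offset seen from the vehicle}.
    landmark_map: {landmark_id: (x, y) surveyed position in the map}."""
    dx_sum = dy_sum = 0.0
    n = 0
    for lid, (ox, oy) in observations.items():
        if lid not in landmark_map:
            continue  # unmapped detection, ignore
        mx, my = landmark_map[lid]
        # Mismatch between where the landmark is in the map and where the
        # current pose estimate says it should be:
        dx_sum += mx - (pose_xy[0] + ox)
        dy_sum += my - (pose_xy[1] + oy)
        n += 1
    if n == 0:
        return pose_xy  # nothing matched; keep the prediction
    return (pose_xy[0] + dx_sum / n, pose_xy[1] + dy_sum / n)

# Map built from the dataset (toy values) and one set of live observations.
landmark_map = {"pole_1": (10.0, 5.0), "sign_7": (12.0, 9.0)}
observations = {"pole_1": (6.0, 1.0), "sign_7": (8.0, 5.0)}
pose = correct_pose((3.0, 3.0), observations, landmark_map)
print(pose)  # -> (4.0, 4.0)
```

The demonstration described above would close exactly this loop on a real vehicle: the map built offline from the dataset becomes a reference that the vehicle matches against online to refine its position in the environment.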