
DARPA’s RACER program is developing autonomous off-road vehicles capable of driving on complex terrains at high speeds. The technology was recently tested in California and Arizona, focusing on the vehicle’s ability to perceive and adapt to challenging environments without human intervention. It’s just one example of many tests that have taken place in the last two years. Here, Roger Brereton, head of sales at steering system manufacturer Pailton Engineering, explains why continued testing in extreme environments is key to developing uncrewed vehicles for military applications.
Autonomous military vehicles have the potential to offer numerous advantages, such as reducing human casualties, enhancing mission efficiency and operating in hazardous environments where human presence is impractical or too risky. These vehicles range from unmanned ground vehicles (UGVs) designed for reconnaissance and logistics, to autonomous combat vehicles capable of engaging in active warfare.
At the core of these autonomous systems lie sophisticated deep learning algorithms. These algorithms enable vehicles to perceive their environment, make decisions and learn from their experiences, mimicking aspects of human cognition. However, developing these algorithms for military applications presents unique challenges, particularly due to the complexity and unpredictability of the terrains these vehicles must traverse.
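To make the structure concrete, the loop these systems are built around can be sketched in a few lines. This is an illustrative sense-plan-act skeleton only: real military stacks use trained deep networks and fused multi-sensor input, whereas here the "policy" is a hand-written stand-in so the shape of the loop stays visible. The sensor sectors, the 5 m safe distance and all function names are assumptions for the example.

```python
# Hedged sketch of a sense-plan-act loop. The learned policy is replaced by a
# simple rule so the example stays self-contained; nothing here reflects an
# actual deployed system.

from dataclasses import dataclass


@dataclass
class SensorFrame:
    # Simplified LIDAR-style readings: distance in metres to the nearest
    # obstacle in three forward-facing sectors (assumed layout).
    left: float
    centre: float
    right: float


def plan(frame: SensorFrame, safe_distance: float = 5.0) -> str:
    """Stand-in for a learned policy: map perception to a steering decision."""
    if frame.centre >= safe_distance:
        return "forward"
    # Centre sector blocked: steer towards the more open side.
    return "steer_left" if frame.left > frame.right else "steer_right"


def drive(frames):
    """Run the loop over a stream of sensor frames, one action per frame."""
    return [plan(f) for f in frames]
```

In a real vehicle the `plan` step would be a neural network consuming fused camera, LIDAR and radar data, but the outer loop — perceive, decide, act, repeat — is the same.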
Developing autonomous vehicles for non-military applications, such as autonomous trucking, involves navigating well-defined roads with clear markings. The controlled environments and infrastructure support available in commercial applications provide a more predictable and less hostile context for these vehicles. For instance, autonomous trucks primarily operate on highways with consistent road conditions, making the navigation process more straightforward. In contrast, military vehicles must perform reliably in unpredictable and often adverse conditions, without the luxury of clear paths or consistent terrains.
Testing for unique and hostile environments
Although robotics and deep learning algorithms may be new technologies, the need for specialised testing is not new. Any technology supplied for a military application must be designed and tested to perform reliably in extreme temperatures and on inhospitable terrains. This is equally true whether we are talking about a bevel box that must provide unbeatable ingress protection for deep wading applications, or an algorithm that can help an uncrewed vehicle inspect a site where a chemical weapons attack has taken place.
The latter scenario is one example where testing is currently ongoing and proving successful. In the UK, the Ministry of Defence's Defence Science and Technology Laboratory (Dstl) has successfully trialled the Hybrid Area Reconnaissance and Survey (HARS) vehicle. This autonomous vehicle, equipped with advanced sensor technology, demonstrated its capability to detect chemical and radiological hazards while navigating hazardous environments autonomously.
Developing deep learning algorithms for these vehicles involves significant challenges. For these algorithms to generalise well across different terrains, they must be trained on diverse datasets from various geographical locations and environmental conditions. Military operations require real-time decision making, so these algorithms must process sensor data and make decisions instantaneously. The unpredictable nature of military environments means that algorithms must be highly adaptable to handle unexpected obstacles, dynamic threats and changing mission parameters without human intervention.
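The real-time requirement above is usually enforced as a hard latency budget on each decision, with a conservative fallback if the deadline is missed. The following is a minimal sketch of that idea, not a production pattern: the 50 ms budget and the "halt" fallback are assumptions, and a real system would preempt a slow model rather than, as here, check the elapsed time after the fact.

```python
# Hedged sketch: run inference against an assumed latency budget and fall back
# to a safe default on a deadline miss. A deployed system would use preemptive
# scheduling rather than this after-the-fact check.

import time

LATENCY_BUDGET_S = 0.05  # assumed 50 ms decision deadline


def decide_with_deadline(infer, sensor_data, budget_s=LATENCY_BUDGET_S):
    """Run `infer`; if it overruns the budget, return a conservative action."""
    start = time.monotonic()
    action = infer(sensor_data)
    elapsed = time.monotonic() - start
    if elapsed > budget_s:
        # Deadline miss: acting on a stale decision at speed is worse than
        # stopping, so return the fallback action instead.
        return "halt", elapsed
    return action, elapsed
```

The same budget also shapes training: a model that cannot run inside the deadline on the vehicle's onboard hardware is unusable regardless of its accuracy.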
Much of the necessary testing is now taking place through allies pooling their resources and expertise. For example, the Trusted Operation of Robotic Vehicles in Contested Environments (TORVICE) trial, conducted at the Cultana Training Area in South Australia, involved defence scientists from Australia, the UK and the US. This trial aimed to test the resilience of robotic vehicles in electronic warfare environments, subjecting them to various electronic and GPS jamming attacks. As the TORVICE example highlights, ensuring the cybersecurity of autonomous systems is also critical, as these vehicles must be safeguarded against cyberattacks that could compromise their functionality or be used to gather intelligence.
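One simple resilience technique that jamming trials of this kind exercise is cross-checking incoming GPS fixes against dead reckoning, rejecting any fix that implies physically impossible motion — a common symptom of spoofing or jamming. The sketch below is illustrative only: the 30 m/s speed ceiling is an assumed parameter, not a figure from TORVICE, and a real navigation filter would fuse many more signals.

```python
# Hedged sketch of a GPS plausibility check: reject a fix if the speed it
# implies exceeds what the vehicle can physically achieve. The speed ceiling
# is an assumption for the example.

import math

MAX_SPEED_MS = 30.0  # assumed vehicle top speed, metres per second


def plausible_fix(prev_xy, new_xy, dt_s, max_speed=MAX_SPEED_MS):
    """Accept a new GPS fix only if the implied speed is achievable."""
    dx = new_xy[0] - prev_xy[0]
    dy = new_xy[1] - prev_xy[1]
    implied_speed = math.hypot(dx, dy) / dt_s
    return implied_speed <= max_speed
```

When a fix fails the check, the vehicle would fall back to inertial navigation and odometry until trustworthy fixes resume, which is exactly the degraded mode these contested-environment trials are designed to stress.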
As the focus remains on enhancing the robustness and reliability of deep learning algorithms, future advancements may include improved sensor integration, more sophisticated simulation environments, and enhanced collaborative frameworks between allied nations. Significant progress has been made, but the journey towards fully autonomous military vehicles capable of navigating the most challenging terrains is ongoing and will necessitate a lot more testing in extreme environments.
ChatGPT was trained on a vast swathe of the internet, yet it still regularly makes mistakes. In safety-critical applications, the margin for error is much smaller, and any technology designed for a military application must perform in environments that are complex and dangerous.