
Developing a simulator platform to advance autonomous systems in forestry usage

1.8.2024 8:00

Aleksi Narkilahti, BEng, Specialist in the Digital Solutions Expertise Group, Lapland University of Applied Sciences

The FrostBit Software Lab at Lapland University of Applied Sciences is developing a simulator platform from the ground up to advance the development, training, and validation of autonomous driving and safety systems for forestry use. The development work is enabled and resourced by the AGRARSENSE project, which includes several use cases for agriculture and forestry.

Lapland University of Applied Sciences is part of the forestry use case, which is led by Komatsu Forest AB. At the time of writing, we are halfway through the AGRARSENSE project, and development is ongoing, with our work expected to be completed by the end of 2024.

AGRARSENSE is a large three-year Chips Joint Undertaking consortium project aiming to develop microelectronics, photonics, and electronic packaging for agriculture and forestry. The project will also develop the related ICT and data management to realize large-scale field demonstrators for real industrial needs. The related AIFor project, which is connected to the AGRARSENSE project application, is co-financed by Business Finland. You can read more about the project on the AGRARSENSE.eu website.

Simulator use cases in the AGRARSENSE project

Simulators are widely used in many industries because they provide a repeatable and controllable platform for testing, developing, and validating various systems. In our case, the simulator needs to allow repeatable, cost-effective, and efficient testing of autonomous and safety systems in forestry, using simulated sensors, forestry machines, and drones before the systems are tested in real-world scenarios.

The simulator needs to serve as a versatile tool for two key functions: real-time simulation and data collection. This requires the simulator environments to look as realistic as possible, enabling the collection of synthetic data, such as RGB images or LiDAR (Light Detection and Ranging) point clouds. The simulator and the collected data can then be used for AI training, validation, and algorithm development.

Why develop our own simulator?

At the start of the project, FrostBit Software Lab explored a few available simulator options, including the CARLA simulator, which is an open-source simulator for autonomous driving research. We have been using the CARLA simulator for several years for multiple projects, but to better meet our project goals, we chose to develop the simulator from the ground up using Unreal Engine 5.

Since our project focuses only on forestry, many features offered by the CARLA simulator, such as the traffic light system and other systems designed for city driving, are unnecessary for us. More importantly, the CARLA simulator still uses Unreal Engine 4, while our project required the new capabilities of Unreal Engine 5 for more realistic environments and higher performance. Thus, developing our own simulator on Unreal Engine 5 proved to be the most practical and effective solution.

Developing virtual sensors and data communication

Since we decided to develop the simulator from the ground up using Unreal Engine 5, our two major tasks were virtual sensor development and data communication. Fortunately, the CARLA simulator's source code is open, and many of the sensors we need had already been implemented by CARLA's team and were familiar to us. This made implementing the sensor functionality less daunting.

While we took heavy inspiration from CARLA's sensor implementations, we made significant modifications to almost all of the sensors. For example, our simulator's camera sensors work differently under the hood than those in the CARLA simulator: while the final output is similar, our approach is more performant. Similar changes and improvements were made to the other sensors as well. For instance, in a previous EU-funded project, Energy ECS, CARLA's LiDAR sensor implementation was heavily modified to significantly improve its performance, allowing multiple LiDAR sensors to be simulated simultaneously. This change was carried over to this project, along with other work from previous projects, which helped jump-start the simulator development.

The simulator includes the following sensors, all of which are parameterized so that settings such as camera resolution and LiDAR channel count can be changed (a hypothetical configuration sketch follows these lists):

- RGB camera
- Depth camera
- Semantic segmentation camera
- DVS (Dynamic Vision Sensor)
- Thermal camera
- LiDAR
- Radar

The simulator also includes the following sensors, which do not attempt to mimic any real-life sensor but instead expose information available from Unreal Engine that is helpful for testing and development purposes:

- Transform
- Collision
- Overlap
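
To make the idea of parameterized sensors concrete, below is a minimal configuration sketch in Python. The field names and values are hypothetical illustrations, not the simulator's actual format:

```python
# Hypothetical sensor setup sketch; the keys and values are illustrative
# assumptions and do not reflect the simulator's real configuration format.
sensor_setup = [
    {
        "type": "rgb_camera",
        "name": "front_camera",
        "resolution": [1920, 1080],   # image width x height in pixels
        "fov_degrees": 90.0,
        "position": [2.5, 0.0, 3.2],  # mounting offset on the vehicle (m)
    },
    {
        "type": "lidar",
        "name": "roof_lidar",
        "channels": 64,               # LiDAR channel count
        "rotation_hz": 10.0,          # sweep rate
        "range_m": 120.0,
        "position": [0.0, 0.0, 3.8],
    },
]
```

A structure like this can be serialized to JSON and loaded when the simulation starts, so different sensor setups can be tried without rebuilding the project.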


Picture 1. Simulating multiple sensors simultaneously on the test map.

For data communication, the simulator needs to interact efficiently with other software. We chose ROS (Robot Operating System) for this. ROS is an open-source framework that allows communication between programs regardless of the programming language used, provided a ROS library exists for that language. While the CARLA simulator offers robust APIs in Python and C++, our ROS-based approach reduces complexity and thus simplifies development significantly. To enable communication between Unreal Engine and ROS, we use the existing open-source ROSIntegration plugin.
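
As a minimal sketch of what the receiving side can look like, the Python snippet below uses rospy (ROS 1) to subscribe to a LiDAR point cloud topic. The topic name is a hypothetical example; the actual topic names depend on how the simulator publishes its sensor data:

```python
#!/usr/bin/env python
# Minimal ROS 1 (rospy) subscriber sketch. The topic name
# "/simulator/roof_lidar/points" is a hypothetical example, not the
# simulator's actual interface.
import rospy
from sensor_msgs.msg import PointCloud2

def on_point_cloud(msg):
    # Each PointCloud2 message carries one LiDAR sweep from the simulator.
    rospy.loginfo("Received point cloud with %d points", msg.height * msg.width)

if __name__ == "__main__":
    rospy.init_node("simulator_lidar_listener")
    rospy.Subscriber("/simulator/roof_lidar/points", PointCloud2, on_point_cloud)
    rospy.spin()  # process incoming messages until the node is shut down
```

Because ROS decouples publishers from subscribers, the same topic can feed an AI model, a visualization tool, and a data recorder at the same time.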

Aiming for realistic-looking environments

From the start, the plan was to create environments that look as realistic as possible while still running in real time, which is no easy task. Fortunately, Unreal Engine has made significant advancements in recent years.

Unreal Engine 5's new rendering features allow 3D modelers to create high-quality models that the engine can render effectively without extensive manual adjustments. While these new rendering features are exceptional, there are still considerations to keep in mind, but that might be a topic for another time. Overall, Unreal Engine 5 enables us to create high-quality 3D models and render them in real-time with decent performance.

To create digital twins of real-world forest locations, we chose a small part of Hirvas in Rovaniemi as our first site due to its convenient proximity. We took numerous photos and videos of the area, collaborating with lecturers, specialists, and students from the Lapland University of Applied Sciences' land surveying and forestry engineering departments. They used a drone equipped with an RGB camera to capture aerial images. This collaboration provided valuable insights and a substantial amount of reference material for 3D modeling and digital twin creation.

The second area is in Vindeln, Sweden, where our use case demonstrations will take place during the second half of the project's second year and in its final year. For this area, we received photos, point clouds, and height data from Komatsu Forest AB, which allowed us to recreate the area inside Unreal Engine.

Picture 2. Screenshot of Vindeln digital twin map from the simulator.
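
As an illustration of one way such height data can be brought into Unreal Engine, the sketch below converts a height grid into a 16-bit grayscale PNG, a format Unreal Engine's Landscape import accepts. The function, file names, and input data are hypothetical; the project's actual pipeline may differ:

```python
# Hypothetical sketch: convert a grid of terrain heights (in metres) into
# a 16-bit grayscale PNG heightmap that Unreal Engine's Landscape tool
# can import. The input here is random; real data would be the measured grid.
import numpy as np
from PIL import Image

def heights_to_heightmap(heights_m, out_path):
    h = np.asarray(heights_m, dtype=np.float64)
    # Normalize the height range onto the full 16-bit value span.
    span = max(float(h.max() - h.min()), 1e-9)
    img = ((h - h.min()) / span * 65535.0).astype(np.uint16)
    Image.fromarray(img, mode="I;16").save(out_path)

# 505 x 505 is one of the landscape resolutions Unreal recommends.
heights_to_heightmap(np.random.rand(505, 505) * 40.0, "vindeln_heightmap.png")
```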

Both digital twin maps also feature fully dynamic lighting and extensive weather controls. Date and time-of-day controls move the sun based on the real-world coordinates of the sites; other controls include wind intensity, cloudiness, fog, precipitation, and snow amount, to name a few. The dynamic lighting and weather system allows systems to be tested under various weather conditions.
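
To illustrate the kind of computation behind a coordinate-aware sun, the sketch below estimates the solar elevation angle from latitude, day of year, and local solar hour using a standard textbook approximation. It is an illustration only, not the simulator's actual implementation:

```python
import math

def solar_elevation_deg(latitude_deg, day_of_year, solar_hour):
    """Approximate solar elevation (degrees) for a site and time.

    Uses Cooper's declination approximation and ignores atmospheric
    refraction and the equation of time; fine for a rough illustration.
    """
    decl = math.radians(23.44) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    lat = math.radians(latitude_deg)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))  # 0 at solar noon
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_elev))

# Rovaniemi (about 66.5 degrees north) around midsummer noon: roughly 47 degrees.
print(solar_elevation_deg(66.5, 172, 12.0))
```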

Vehicles and walkers

The simulator includes forwarder and harvester forestry vehicles, along with a drone added at the request of some partners. The use of drones has exploded in recent years, including in the forestry industry. A drone can help spot potential dangers, such as animals, from above that sensors mounted on the vehicle might miss. It can also be used, for example, for forestry vehicle path planning.

Additionally, the simulator offers an easy-to-use tool for configuring sensors on vehicles. This tool enables users to test different sensor setups. Given the size of the forestry machines, finding optimal locations to mount sensors for full coverage in all directions is challenging. Identifying the best placements and types of sensors is an ongoing task that our partners continue to explore.


Picture 3. Tool to configure sensors on vehicles.

The simulator also includes walkers, such as humans, reindeer, and moose, which can be set to follow specific paths or move randomly around the area. They are essential for testing different sensors and AI systems' ability to recognize humans or animals. Safety is the top priority for any autonomous vehicle, especially large machinery that can cause significant damage. If a human or animal is detected to be too close to the machine, it should immediately stop operating to ensure safety.
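
As a simplified sketch of that safety rule, a per-frame proximity check could look like the snippet below. The threshold and function are hypothetical illustrations; the project's actual logic runs on the detections produced by the sensors and AI systems:

```python
import math

# Hypothetical safety-stop sketch: stop the machine whenever any detected
# human or animal is within a fixed safety radius. The radius is an
# illustrative value, not a project specification.
SAFETY_RADIUS_M = 10.0

def must_stop(machine_xy, detections_xy):
    """Return True if any detection is closer than the safety radius.

    machine_xy: (x, y) position of the machine in metres.
    detections_xy: list of (x, y) positions of detected humans/animals.
    """
    mx, my = machine_xy
    for dx, dy in detections_xy:
        if math.hypot(dx - mx, dy - my) < SAFETY_RADIUS_M:
            return True  # someone is too close: halt operation immediately
    return False

# Example: the second detection is about 7.2 m away, so the machine must stop.
print(must_stop((0.0, 0.0), [(30.0, 2.0), (6.0, 4.0)]))
```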

Summary and next steps

The team at FrostBit Software Lab has done a lot of work in the first half of the AGRARSENSE project. We have received good feedback and continued interest from our project partners, which is always a promising sign.

With many of the planned features for the simulator now complete, we are entering a phase focused on implementing the remaining features and improving the existing ones. Specifically, we will be improving the visuals and performance of the digital twin maps. Additionally, further work is needed on the vehicles and walkers.

For further inquiries about the project, please contact AGRARSENSE project manager: Anne Saloniemi (anne.saloniemi(a)lapinamk.fi)
