Engineers increasingly rely on simulation to augment, and in some cases replace, costly and time-consuming experimental work. However, current simulation capabilities are sometimes inadequate to capture phenomena of interest.
In tracked vehicle analysis, for example, the interaction of the track with granular terrain has been difficult to characterize through simulation due to the prohibitively long simulation times associated with many-body dynamics problems. Many-body dynamics is the generic name for dynamic systems with a large number of bodies, encountered, for instance, when one adopts a discrete representation of the terrain in vehicle dynamics problems.
However, these many-body dynamics problems can now capitalize on recent advances in the microprocessor industry that follow from Moore's law, which predicts a doubling of the number of transistors per unit area roughly every 18 months. Until recently, access to massive computational power on parallel supercomputers was the privilege of a relatively small number of research groups at a select number of research facilities, limiting the scope and impact of high-performance computing (HPC).
This scenario is rapidly changing due to a trend set by general-purpose computing on graphics processing unit (GPU) cards. Nvidia's CUDA (compute unified device architecture) platform lets programmers harness the streaming multiprocessors available in high-end graphics cards. A latest-generation Nvidia Kepler GPU reached 1.5 teraflops by the end of 2012, owing to a set of 1536 scalar processors working in parallel, each following a SIMD (single instruction, multiple data) execution paradigm.
Despite having only 1536 scalar processors, such a card can manage tens of thousands of parallel threads at any given time. This overcommitting of the GPU hardware is the cornerstone of a computing paradigm that aggressively hides costly memory transactions behind useful computation, a strategy that has led, in frictional contact dynamics simulation, to an order-of-magnitude reduction in simulation time for many-body systems.
The challenge of using parallel computing to reduce simulation time and/or increase system size stems, for the most part, from the task of designing and implementing parallel numerical methods specific to many-body dynamics. Designing parallel algorithms suitable for frictional contact many-body dynamics simulation remains an area of active research.
Some researchers have suggested that the most widely used commercial software package for multi-body dynamics simulation, which draws on a so-called penalty or regularization approach, runs into significant difficulties when handling even simple problems involving hundreds of contact events; cases with thousands of contacts become intractable. Unlike penalty or regularization approaches, in which the frictional interaction is represented by a collection of stiff springs and damping elements acting at the interface of the two bodies in contact, the approach embraced by researchers at the U.S. Army TARDEC and the University of Wisconsin-Madison draws on a different mathematical framework.
Specifically, the parallel algorithms rely on time-stepping procedures producing weak solutions of the differential variational inequality (DVI) problem that describes the time evolution of rigid bodies with impact, contact, friction, and bilateral constraints. When compared to penalty methods, the DVI approach has a greater algorithmic complexity, but avoids the small time steps that plague the former approach.
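To see why penalty methods are plagued by small time steps, consider a minimal sketch of a spring-damper normal contact force. This is a generic illustration, not the commercial package's implementation; the stiffness and damping values are assumed for illustration only:

```python
def penalty_normal_force(penetration, penetration_rate, k=1.0e6, c=1.0e3):
    """Spring-damper (penalty) normal contact force.

    Positive penetration means the bodies overlap. The stiffness k must be
    very large to keep interpenetration small, and explicit integration of
    a stiff spring demands a time step on the order of sqrt(m/k), which is
    what drives penalty methods toward tiny steps.
    """
    if penetration <= 0.0:
        return 0.0  # bodies are separated: no contact force
    # Stiff spring resists overlap; damper dissipates energy on impact.
    return k * penetration + c * max(penetration_rate, 0.0)
```

A 1 mm overlap already produces a 1000 N force with these illustrative values; halving the allowed overlap requires doubling k, which in turn shrinks the stable time step.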
One of the challenging components of this method is the collision detection step required to determine the set of contacts active in the many-body system. These contacts, crucial in producing the frictional contact forces at work in the system, are determined in parallel.
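The parallel flavor of the collision detection step can be illustrated with a uniform-grid broad phase for equal spheres: bin bodies into cells sized to the sphere diameter, then test only bodies in the same or adjacent cells. The Python sketch below is a serial stand-in for the GPU version, with hypothetical names; what makes it GPU-friendly is that each cell (and each candidate pair) can be processed independently:

```python
from collections import defaultdict
from itertools import combinations

def broadphase_sphere_contacts(centers, radius, cell=None):
    """Find overlapping pairs among equal spheres via a uniform grid.

    centers: list of (x, y, z) tuples; radius: common sphere radius.
    Returns a set of index pairs (i, j) with i < j that are in contact.
    """
    cell = cell or 2.0 * radius  # cell edge = sphere diameter
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(centers):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    contacts = set()
    for (cx, cy, cz), _ in list(grid.items()):
        # Gather candidates from this cell and its 26 neighbors.
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nearby.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
        for i, j in combinations(sorted(set(nearby)), 2):
            xi, yi, zi = centers[i]
            xj, yj, zj = centers[j]
            d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
            if d2 < (2.0 * radius) ** 2:
                contacts.add((i, j))
    return contacts
```

On a GPU, the binning becomes a parallel sort by cell index and each cell's narrow-phase tests run in their own threads; the serial loop above only conveys the decomposition.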
The engineering application used to demonstrate this parallel simulation capability was that of an autonomous light tracked vehicle that would operate on granular terrain and negotiate an obstacle course. To further illustrate the versatility of the simulation capability, the vehicle was assumed to be equipped with a drilling device used to penetrate the terrain. Both the vehicle dynamics and the drilling process were analyzed within the same HPC-enabled simulation capability.
The modeling stage relied on a novel formulation of the frictional contact problem that required, at each time step of the numerical simulation, the solution of an optimization problem. The proposed computational framework, when run on ubiquitous GPU cards, allowed the simulation of systems in which the terrain is represented by more than 0.5 million bodies, leading to problems with more than one million degrees of freedom. The numerical solution of the equations of motion was tailored to map onto the underlying GPU architecture and was parallelized to leverage the more than 1500 scalar processors available on modern hardware.
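The per-contact building block of such cone-complementarity solvers is a projection of each candidate contact impulse onto the Coulomb friction cone, applied to every contact independently and hence in parallel. The sketch below shows this standard textbook projection (not the authors' exact code); gamma_n is the normal impulse, gamma_u and gamma_w the tangential components, and mu the friction coefficient:

```python
import math

def project_onto_friction_cone(gamma_n, gamma_u, gamma_w, mu):
    """Project (gamma_n, gamma_u, gamma_w) onto the cone ||t|| <= mu * n."""
    t = math.hypot(gamma_u, gamma_w)      # tangential impulse magnitude
    if t <= mu * gamma_n:                 # already inside the cone: keep it
        return gamma_n, gamma_u, gamma_w
    if mu * t <= -gamma_n:                # inside the polar cone: map to apex
        return 0.0, 0.0, 0.0
    # Otherwise project onto the cone's surface.
    n = (mu * t + gamma_n) / (mu * mu + 1.0)
    scale = mu * n / t
    return n, gamma_u * scale, gamma_w * scale
```

An iterative solver sweeps over all contacts, updating impulses and projecting each one with this operation until the cone complementarity conditions are satisfied to tolerance.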
Simulation gets on track
The simulation of the unmanned vehicle captured the dynamics of a complex system composed of many bilateral and unilateral constraints. Using a combination of joints and linear actuators, the tracked vehicle model was created and then simulated navigating over either flat rigid terrain or deformable terrain made up of gravel-type granular material. The vehicle was modeled to represent a small, autonomous lightweight tracked vehicle that could be sent to another planet or used to navigate dangerous terrain.
There were two tracks, each with 61 track shoes. Each track shoe was made up of two cylinders and three rectangular plates and had a mass of 0.34 kg. Each shoe was connected to its neighbors by a pin joint on each side, allowing adjacent shoes to rotate relative to each other about only one axis. Within each track there were five rollers, one idler, and one sprocket, each with a mass of 15 kg.
The chassis was modeled as a rectangular box with a mass of 200 kg, and moments of inertia were computed for all parts using a CAD package. The purpose of the rollers is to keep the tracks separated and to support the weight of the vehicle as it moves forward. The idler keeps the track tensioned; it is usually modeled with a linear spring/actuator, but for the purposes of this demonstration it was fixed to the vehicle chassis with a revolute joint. The sprocket drives the vehicle and was also attached to the chassis with a revolute joint.
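From the stated parameters, the model's mass bookkeeping can be tallied directly; the short sketch below (function names are illustrative) shows the arithmetic:

```python
# Masses as stated in the article.
SHOES_PER_TRACK = 61
SHOE_MASS = 0.34        # kg per track shoe
ROLLERS_PER_TRACK = 5
COMPONENT_MASS = 15.0   # kg each: roller, idler, and sprocket
CHASSIS_MASS = 200.0    # kg

def track_mass():
    """One track: 61 shoes plus five rollers, one idler, one sprocket."""
    return (SHOES_PER_TRACK * SHOE_MASS
            + (ROLLERS_PER_TRACK + 2) * COMPONENT_MASS)

def vehicle_mass():
    """Two tracks plus the chassis."""
    return 2 * track_mass() + CHASSIS_MASS
```

Each track thus contributes about 125.7 kg, putting the full vehicle at roughly 451 kg.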
Torque was applied to drive the track, with each track driven independently of the other. When the sprocket rotates, it comes into contact with the cylinders on the track shoe and turns the track with a gear-like motion.
The track for the vehicle was created by first generating a ring of connected track shoes. This ring was dropped onto a sprocket, five rollers, and an idler, which was connected to the chassis using a linear spring. The idler was pushed with 2000 N of force until the track was tensioned and the idler had stopped moving. This pre-tensioned track was then saved to a data file and loaded for the simulation of the complete vehicle.
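The pre-tensioning step described above can be sketched as a damped relaxation to static equilibrium: push the idler with a constant force against the track's elastic response and integrate with heavy damping until motion stops. Only the 2000 N force and 15 kg idler mass come from the article; the stiffness, damping, and step size below are invented placeholders:

```python
def pretension_idler(applied_force=2000.0, track_stiffness=4.0e4,
                     mass=15.0, damping=600.0, dt=1.0e-3):
    """Return the idler displacement once track tension balances the push.

    Semi-implicit Euler on a 1-DOF spring-mass-damper standing in for the
    idler pressing against the track (hypothetical parameters).
    """
    x, v = 0.0, 0.0
    for _ in range(200000):
        force = applied_force - track_stiffness * x - damping * v
        v += (force / mass) * dt
        x += v * dt
        # Stop once the idler has settled and tension balances the push.
        if abs(v) < 1e-6 and abs(applied_force - track_stiffness * x) < 1e-2:
            break
    return x
```

With these placeholder values the idler settles at applied_force / track_stiffness = 0.05 m, after which the track configuration would be saved, as the article describes.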
In this simulation scenario, the tracked vehicle was dropped onto a flat surface and a torque was applied to the sprocket to drive it forward; the forces on several revolute joints connecting the track shoes were analyzed as they traveled around the sprocket.
Transient behavior was observed when the torque was applied to the sprocket at 1 s and the track shoe connected to this joint came into contact with the sprocket at 5 s. The oscillatory behavior of the joint forces could be attributed to several factors.
First, the tension in the track was very high; there was no spring/linear actuator attached to the idler, so high tension forces could not be dampened. Second, the combination of a high pre-tensioning force (2000 N) and lack of a linear actuator on the idler resulted in high revolute joint forces.
The forces in the joint were highest when the track shoe first came into contact with the sprocket. As the track shoe moved around the sprocket, the force decreased as subsequent track shoes and their revolute joints helped distribute the load. It should be noted that the gearing motion between the track shoes and the sprocket was not ideal, as it was not perfectly smooth. In a more realistic model, the engagement forces between track shoes would overlap so that the motion of the tracks would be smoother and the forces experienced by the revolute joints would be smaller.
The tracked vehicle was simulated moving over a bed of 84,000 granular particles. The particles were modeled as large pieces of gravel with a radius of 0.075 m and a density of 1900 kg/m³. A 100 N·m torque was applied to each track to move the vehicle. Note that unlike the case where the vehicle moves on a flat section of ground, the forces experienced by the revolute joints are much noisier: individual grains move under the tracks as the vehicle advances, causing large vibrations to travel through the shoes. These vibrations would be reduced by modeling a more compliant terrain material that could dissipate energy on contact.
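For a sense of scale, the mass of each simulated grain follows from the stated radius and density via the sphere-mass formula m = (4/3)πr³ρ:

```python
import math

def sphere_mass(radius_m, density_kg_m3):
    """Mass of a spherical grain: m = (4/3) * pi * r^3 * rho."""
    return (4.0 / 3.0) * math.pi * radius_m ** 3 * density_kg_m3

# Each grain (r = 0.075 m, rho = 1900 kg/m^3) weighs about 3.36 kg, so the
# 84,000-particle bed represents roughly 280 metric tons of gravel.
```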
Results and future work
Researchers succeeded in expanding parallel simulation capabilities in multi-body dynamics. The many-body dynamics problem of interest was modeled as a cone complementarity problem whose parallel numerical solution scales linearly with the number of bodies in the system. These developments have directly resulted in the ability to simulate complex tracked vehicles operating on granular terrain.
The parallel simulation capability was demonstrated in the context of an application that emphasized the interplay between light-vehicle track dynamics and terrain dynamics, in a regime where the vehicle length becomes comparable to the dimensions of the obstacles the vehicle is expected to negotiate.
The simulation capability was anticipated to be useful in gauging vehicle mobility early in the design phase, as well as in testing navigation/control strategies defined/learned on the fly by small autonomous vehicles as they navigate uncharted terrain profiles.
In terms of future work, a convergence issue induced by the multi-scale nature of the vehicle-terrain interaction problem needs to be addressed. Additionally, technical effort will focus on extending the entire algorithm to run on a cluster of GPU-enabled machines, further increasing the size of tractable problems. The modeling approach also remains to be augmented with a dual discrete/continuum representation of the terrain to accommodate large-scale simulations for which an exclusively discrete terrain model would unnecessarily burden the numerical solution.
This article is based on SAE International technical paper 2013-01-1191 by Dan Negrut, Daniel Melanz, and Hammad Mazhar, University of Wisconsin-Madison, and David Lamb, Paramsothy Jayakumar, and Michael Letherwood, U.S. Army TARDEC.