3. Examples¶
This section provides selected examples of how to set up AGX-related content in Unity 3D, focusing on the practical side of creating your own virtual physics assets.
To follow along with the examples, download the example assets corresponding to the example you want to learn from, available as Custom Unity Packages.
3.1. Demo Scene¶

Demo scene containing many of the features of AGX Dynamics for Unity, exemplifying how different parts may be configured. The package containing the scene can be downloaded here: AGXUnity_Demo.
The scene contains the following features:
3.1.1. Hydro- and Aerodynamics¶

3.1.2. Constraint¶

3.1.5. Adaptive Model Order Reduction - AMOR and Wire¶

528 boxes that will eventually merge with the rotating, hinged rigid body; some parts split when the wrecking ball hits.¶
3.2. Wheel Loader on Terrain¶
Here we set up a wheel loader on a Unity Terrain using the AGX Dynamics Deformable Terrain addon, which lets the vehicle dig and alter the terrain. We will be using the example wheel loader controllers provided by the AGXUnity package, which show some basic ways of maneuvering a vehicle using input from a keyboard or gamepad.
The wheel loader model has been created in Algoryx Momentum and imported into AGXUnity as a prefab.
The completed scene including all the assets needed to recreate the example can be downloaded here: unitypackage
Note
Sometimes the Wheel Loader Input component does not compile correctly together with the Input System package. To help prevent this, we suggest starting the guide with the Input section, as shown below, and importing the Input System package before you import the Example DL300 package linked above. See also the troubleshooting section at the bottom of this example.
3.2.1. Input¶
Note
Using the new Unity Input System requires Unity version 2019.2.6 or later, since AGX Dynamics for Unity depends on the ENABLE_INPUT_SYSTEM define symbol.
The example controllers use the new Unity Input System package to allow easy configuration of multiple input sources, in this case keyboard and gamepad. To use it, install it in the project using the Unity Package Manager window; at the time of writing this guide, it is still a preview package. The alternative is to use the legacy input manager as shown below.

3.2.1.1. [Optional] Legacy Input Manager¶
Using the old Unity InputManager, AGXUnity.Model.WheelLoaderInputController requires some defined keys. The most straightforward approach is to copy the content below and replace the already existing settings in ProjectSettings/InputManager.asset.
%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!13 &1
InputManager:
  m_ObjectHideFlags: 0
  serializedVersion: 2
  m_Axes:
  - serializedVersion: 3
    m_Name: jSteer
    descriptiveName:
    descriptiveNegativeName:
    negativeButton:
    positiveButton:
    altNegativeButton: left
    altPositiveButton: right
    gravity: 3
    dead: 0.3
    sensitivity: 1
    snap: 1
    invert: 0
    type: 2
    axis: 0
    joyNum: 0
  - serializedVersion: 3
    m_Name: kSteer
    descriptiveName:
    descriptiveNegativeName:
    negativeButton: left
    positiveButton: right
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.001
    sensitivity: 2
    snap: 1
    invert: 0
    type: 0
    axis: 0
    joyNum: 0
  - serializedVersion: 3
    m_Name: jThrottle
    descriptiveName:
    descriptiveNegativeName:
    negativeButton:
    positiveButton:
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.05
    sensitivity: 1
    snap: 0
    invert: 0
    type: 2
    axis: 9
    joyNum: 0
  - serializedVersion: 3
    m_Name: kThrottle
    descriptiveName:
    descriptiveNegativeName:
    negativeButton:
    positiveButton: up
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.001
    sensitivity: 2
    snap: 0
    invert: 0
    type: 0
    axis: 0
    joyNum: 0
  - serializedVersion: 3
    m_Name: jBrake
    descriptiveName:
    descriptiveNegativeName:
    negativeButton:
    positiveButton:
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.05
    sensitivity: 1
    snap: 0
    invert: 0
    type: 2
    axis: 8
    joyNum: 0
  - serializedVersion: 3
    m_Name: kBrake
    descriptiveName:
    descriptiveNegativeName:
    negativeButton:
    positiveButton: down
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.001
    sensitivity: 2
    snap: 0
    invert: 0
    type: 0
    axis: 0
    joyNum: 0
  - serializedVersion: 3
    m_Name: jElevate
    descriptiveName:
    descriptiveNegativeName:
    negativeButton:
    positiveButton:
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.3
    sensitivity: 1
    snap: 0
    invert: 1
    type: 2
    axis: 1
    joyNum: 0
  - serializedVersion: 3
    m_Name: kElevate
    descriptiveName:
    descriptiveNegativeName:
    negativeButton: s
    positiveButton: w
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.001
    sensitivity: 1
    snap: 0
    invert: 0
    type: 0
    axis: 0
    joyNum: 0
  - serializedVersion: 3
    m_Name: jTilt
    descriptiveName:
    descriptiveNegativeName:
    negativeButton:
    positiveButton:
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.3
    sensitivity: 1
    snap: 0
    invert: 0
    type: 2
    axis: 3
    joyNum: 0
  - serializedVersion: 3
    m_Name: kTilt
    descriptiveName:
    descriptiveNegativeName:
    negativeButton: a
    positiveButton: d
    altNegativeButton:
    altPositiveButton:
    gravity: 3
    dead: 0.001
    sensitivity: 1
    snap: 0
    invert: 0
    type: 0
    axis: 0
    joyNum: 0
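For reference, a minimal sketch of how these axes can be polled with the legacy UnityEngine.Input API. This is not the actual AGXUnity.Model.WheelLoaderInputController implementation; it only illustrates that the axis names must match the entries above:

using UnityEngine;

// Hypothetical example component, for illustration only.
public class LegacyInputExample : MonoBehaviour
{
    void Update()
    {
        // Keyboard axes ("k" prefix) and gamepad axes ("j" prefix) are
        // combined by taking the strongest signal.
        float steer    = CombinedAxis( "kSteer", "jSteer" );
        float throttle = CombinedAxis( "kThrottle", "jThrottle" );
        float brake    = CombinedAxis( "kBrake", "jBrake" );
        Debug.Log( $"steer: {steer}, throttle: {throttle}, brake: {brake}" );
    }

    static float CombinedAxis( string keyboardAxis, string joystickAxis )
    {
        float k = Input.GetAxis( keyboardAxis );
        float j = Input.GetAxis( joystickAxis );
        return Mathf.Abs( k ) > Mathf.Abs( j ) ? k : j;
    }
}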
3.2.2. Create the terrain¶
To quickly create a Unity Terrain with AGX Dynamics Deformable Terrain, we will use the top menu command AGXUnity->Model->Deformable Terrain as shown below.

Note
An AGX Deformable Terrain component can also be added through the “Add Component” button on a game object, which might be suitable when modifying an existing terrain instead of starting from scratch.
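The scripted equivalent is a one-liner. A sketch, assuming an existing game object holding a Unity Terrain component:

// Sketch: add the deformable terrain component from script.
// "Terrain" is the hypothetical name of the Unity Terrain game object.
var terrainObject = UnityEngine.GameObject.Find( "Terrain" );
terrainObject.AddComponent<AGXUnity.Model.DeformableTerrain>();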
3.2.3. Modify and decorate the terrain¶
Next, use the Unity tools to modify the terrain to an interesting shape with varying heights, and optionally add and paint a terrain layer.
See also
Here we use the standard Unity3D terrain modeling features. For more details, see the Unity Terrain Tools documentation: https://docs.unity3d.com/Manual/terrain-Tools.html


3.2.4. Import the example Wheel Loader AGX model¶
Objects to simulate in AGX can be created from basic shapes using Unity and AGXUnity tools, but it is recommended to use external tools for the creation of complex models. Here, we will import a .agx file that contains visual meshes, constraints and rigid bodies that are already set up.
This is done by right-clicking the file, selecting the menu option Import AGX file as prefab and then placing the prefab in the scene.


Included in the AGXUnity package are some example components to control and simulate a wheel loader. We will add these components to the newly created prefab by using the Add Component button on the prefab root object; a scripted alternative is sketched after the list.
WheelLoader
WheelLoaderBucketTiltController
WheelLoaderInputController
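A minimal sketch of adding the same three components from script, assuming all three scripts live in the AGXUnity.Model namespace:

// Sketch: add the example wheel loader components from script.
// "DL300" is a hypothetical name for the prefab root object.
var root = UnityEngine.GameObject.Find( "DL300" );
root.AddComponent<AGXUnity.Model.WheelLoader>();
root.AddComponent<AGXUnity.Model.WheelLoaderBucketTiltController>();
root.AddComponent<AGXUnity.Model.WheelLoaderInputController>();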

3.2.5. Set up the shovel component¶
To give the wheel loader the ability to deform the terrain, we have to set up a Deformable Terrain Shovel component. To do this, select the RigidBody object corresponding to the bucket of the wheel loader.


Next, we need to set up the shovel edges. Use the tools as shown below to set up the cutting edge, the top edge and the cutting direction.

Finally, we add this Shovel component to the list of Shovel components recognized by the Deformable Terrain.
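In code, the same registration could look like the sketch below. The object name is hypothetical, and the Add method on DeformableTerrain is assumed from the Inspector list shown here:

// Sketch: add a shovel component to the bucket body and register it
// with the deformable terrain. The edges are easiest to configure
// with the editor tools shown above.
var bucket = UnityEngine.GameObject.Find( "Bucket" ); // hypothetical bucket object name
var shovel = bucket.AddComponent<AGXUnity.Model.DeformableTerrainShovel>();
var terrain = UnityEngine.Object.FindObjectOfType<AGXUnity.Model.DeformableTerrain>();
terrain.Add( shovel );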

3.2.6. Contact Materials¶
In order to adjust the friction between the ground and the wheels, we can specify a Contact Material. To do this, we will create a number of assets:
One ShapeMaterial-asset to represent the ground material
Two ContactMaterial-assets to represent the interaction between the ground material and the front and rear wheel ShapeMaterials (predefined in the model)
A FrictionModel-asset to define the type of friction calculations used on the ContactMaterials
The image below shows one way of creating the assets, and the resulting assets after renaming them to suitable names.


Next, we will set up the contact material to use the other assets as shown below. The wheel ShapeMaterials (DoosanDL300FrontTireMaterial and DoosanDL300RearTireMaterial) should be available in the menu opened by clicking the Select button to the right of the Material 2 field.
For wheel friction, Contact Reduction Mode can be used to provide a more stable simulation on uneven terrain with many contact points (wheels). A high friction value (1) can also be used to simulate high grip, such as rubber on coarse gravel.
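The assets can also be created from an editor script. A sketch, where the ContactMaterial property names (Material1, Material2, FrictionModel, FrictionCoefficient) are assumed to match the AGXUnity API, and where registration with the ContactMaterialManager is done in the Inspector as shown further below:

using AGXUnity;
using UnityEngine;

public static class ContactMaterialSketch
{
    // Sketch: create the ground material and one ground-tire contact material.
    public static ContactMaterial Create( ShapeMaterial tireMaterial )
    {
        var ground = ScriptableObject.CreateInstance<ShapeMaterial>();
        ground.name = "GroundMaterial";

        var friction = ScriptableObject.CreateInstance<FrictionModel>();

        var contactMaterial = ScriptableObject.CreateInstance<ContactMaterial>();
        contactMaterial.Material1 = ground;
        contactMaterial.Material2 = tireMaterial; // e.g. DoosanDL300FrontTireMaterial
        contactMaterial.FrictionModel = friction;
        contactMaterial.FrictionCoefficient = 1.0f; // high grip, rubber on coarse gravel
        return contactMaterial;
    }
}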

Now we will apply the new ShapeMaterial to the relevant AGX physical object, i.e. the ground. This is done by selecting the objects and dragging/dropping the shape materials from the asset list as shown below.

Finally, the two new contact materials need to be registered in the ContactMaterialManager in the scene, as shown below.

3.2.7. Deformable Terrain Particle Renderer¶
To visualize the soil particles as they are created, we will set up the DeformableTerrainParticleRenderer component. This is done by adding it to the GameObject with the DeformableTerrain component, and setting a visual object (such as a basic sphere) as the visual to represent a soil particle. The visual should be an object in the scene, preferably hidden from the main camera view.

Here, we create a sphere the standard Unity way, remove the PhysX collider, move it out of view and assign it to the newly created particle renderer. A basic sphere will of course not look very interesting, so a model resembling a rock could be used instead, if available.
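A sketch of preparing such a particle visual from script:

// Sketch: create a sphere, remove its PhysX collider and move it out of view.
// The sphere is then assigned to the DeformableTerrainParticleRenderer in the Inspector.
var particleVisual = UnityEngine.GameObject.CreatePrimitive( UnityEngine.PrimitiveType.Sphere );
UnityEngine.Object.Destroy( particleVisual.GetComponent<UnityEngine.SphereCollider>() );
particleVisual.transform.position = new UnityEngine.Vector3( 0.0f, -100.0f, 0.0f );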
3.2.8. Test Drive¶
We’re good to go! Position the camera, start the simulation and use the keyboard to drive the wheel loader across the terrain, digging as you go! Some controls:
Drive / brake and turn: WASD keys
Raise/lower boom, tilt bucket: arrow keys

3.2.9. Troubleshooting¶
If you are using the Unity Input System package and nothing happens when you try to steer the vehicle, it is possible that the Wheel Loader Input Controller component is not functioning correctly. When working correctly, the component should look like this:

If it is not working correctly, it will probably look like this instead:

If your component looks like in the second example, you can try the following:
Remove the Input System package using the Package Manager
Reinstall the Input System package using the Package Manager
Select the Wheel Loader Input Controller component and assign the empty asset by using the button to the right of the empty field.
Hopefully this fixes the problem. If nothing works, you can as an alternative try the legacy input option outlined above.
3.3. ML-Agents Wheel Loader Way Point Controller¶
The Unity ML-Agents toolkit can be used to train autonomous agents using reinforcement learning together with the high-fidelity physics of AGX Dynamics. Real-time performance with small errors gives the potential for sim2real transfer. This is a simple example of how to set up an agent with observations, actions and a reward together with AGXUnity, and of how to step the ML-Agents environment together with the simulation and reset the scene after a completed episode. The agent controls a wheel loader driving over uneven deformable terrain towards a list of way points. The ML-Agents documentation is a good resource for ML-Agents concepts.
The example scene can be downloaded here: unitypackage.
In addition to AGXUnity you must also install ML-Agents. The Unity package is installed directly with the Unity Package Manager; that is enough for evaluating the pre-trained agent shipped with the example package. If you want to train your own agent, you must also install the ML-Agents Python package. See the ML-Agents installation documentation for installation options. This example was trained using the versions:
com.unity.ml-agents (C#) v.1.3.0.
mlagents (Python) v0.19.0.
mlagents-envs (Python) v0.19.0.
Communicator (C#/Python) v1.0.0.
3.3.1. Create the Learning Environment¶
The learning environment is where the agent lives. It is a model of the surroundings that cannot be controlled directly but still may change as the agent acts upon it. This learning environment consists of:
An uneven deformable terrain for the wheel loader to drive on, created following the steps in Wheel Loader on Terrain.
A list of way points for the wheel loader to drive towards.
If the WayPoints game object is active, the wheel loader will try to drive towards each way point in order. It will also try to align itself with each way point's forward direction. Therefore, each way point should point towards the next way point in the list. If the way points do not point towards the next way point, or are too far apart, the agent may encounter a state too different from the states it has observed during training, and will likely fail to drive to the next target.

If the WayPoints game object is deactivated, six random way points are created instead. These way points are recreated each time the wheel loader reaches the last way point. The agent was trained on random way points; it never drove the pre-determined path during training.
It is also possible to speed up training by disabling the deformable terrain game object and enabling a simple flat ground plane instead. An agent trained on flat ground will probably manage on uneven terrain, but is more likely to fail. An agent trained on deformable terrain will probably do just fine on flat ground.
3.3.2. Create an Agent¶
The agent is created as an implementation of the ML-Agents base class Agent.
Create a new GameObject
Add a new Component and choose the script WheelLoaderAgent
Add a new Component and choose Decision Requester
Set the fields on the components as shown below:

There are three important methods that must be implemented in every agent script:
OnEpisodeBegin() - initializes and resets the agent and the environment each episode.
CollectObservations(VectorSensor sensor) - collects the vector observations every time a decision is requested.
OnActionReceived(float[] vectorAction) - sets the action computed by the policy each time a decision is requested.
Each of these is described in more detail below.
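A minimal skeleton of such an agent script, using the ML-Agents 1.3 method signatures (the bodies here are placeholders):

using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class WheelLoaderAgentSkeleton : Agent
{
    public override void OnEpisodeBegin()
    {
        // Reset the wheel loader, the terrain and the way points.
    }

    public override void CollectObservations( VectorSensor sensor )
    {
        // Add the scalar observations described below.
    }

    public override void OnActionReceived( float[] vectorAction )
    {
        // Clamp and apply throttle and steering.
    }
}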
3.3.3. Initialization and Resetting the Agent¶
Instead of importing the wheel loader prefab into the Unity scene, it is created at runtime by the Agent script. When an RL training episode ends it is common to reset the agent as well as other things in the scene. If your agent is a simple system of bodies you might be able to easily reset the transforms, velocities etc. However, for more complicated models it is usually easier to completely remove them from the simulation and reinitialize them. The typical way to do this in AGXUnity is:
// Destroy the gameobject for the wheel loader.
DestroyImmediate( WheelLoaderGameObject );
// Manually call garbage collect. Important to avoid crashes.
AGXUnity.Simulation.Instance.Native.garbageCollect();
// Re-instantiate the wheel loader object
WheelLoaderGameObject = Instantiate( WheelLoaderResource );
WheelLoaderGameObject.transform.position = new Vector3( 40.0f, 0.07f, 1.79f );
WheelLoaderGameObject.transform.rotation = Quaternion.Euler( -90, 0, 0 );
WheelLoader = WheelLoaderGameObject.AddComponent<AGXUnity.Model.WheelLoader>().GetInitialized<AGXUnity.Model.WheelLoader>();
foreach( var script in WheelLoaderGameObject.GetComponentsInChildren<AGXUnity.ScriptComponent>() )
script.GetInitialized<AGXUnity.ScriptComponent>();
When training, the Agent (wheel loader) attempts to solve the task of driving towards the next target. The training episode ends if the Agent achieves the goal, is too far away from the goal, or times out. At the start of each episode, the OnEpisodeBegin() method is called to set up the environment for a new episode. In this case we:
Check if the way points exist and, if not, create a couple of random way points.
If we reached the final way point, destroy the wheel loader and the terrain and recreate them.
Set the next active way point.
Before the first episode, the ML-Agents Academy is set to only step after the Simulation has stepped. By default the ML-Agents Academy steps each FixedUpdate(). But since the Simulation is not required to step in FixedUpdate(), it is safer to make sure the Academy steps in PostStepForward.
// Turn off automatic environment stepping
Academy.Instance.AutomaticSteppingEnabled = false;
// Make sure environment steps in simulation post.
Simulation.Instance.StepCallbacks.PostStepForward += Academy.Instance.EnvironmentStep;
3.3.4. Observing the Environment¶
The agent must observe the environment to make a decision. ML-Agents supports scalar observations collected in a feature vector and/or full visual observations, i.e. camera renderings. In this example we use simple scalar observations, since visual observations often lead to long training times. The agent must receive enough observations to be able to solve the task. The vector observation is collected in the CollectObservations(VectorSensor sensor) method. In this example we give the agent:
The distance to the next way point
The direction to the next way point in local coordinates
How the wheel loader leans in world coordinates
The angle between the wheel loader's forward direction and the way point's forward direction
The current speed of the wheel loader
The angle of the wheel loader's waist hinge
The speed of the wheel loader's waist hinge
The current RPM of the engine
These observations are also stacked four times (set in the Behavior Parameters component).
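A sketch of the collection method; the fields holding the current way point and the wheel loader state are hypothetical names, not the exact example code:

// Fragment of the agent class. All field names below are illustrative.
public Transform CurrentWayPoint;
public float Speed, WaistHingeAngle, WaistHingeSpeed, EngineRpm;

public override void CollectObservations( VectorSensor sensor )
{
    var toWayPoint = CurrentWayPoint.position - transform.position;
    sensor.AddObservation( toWayPoint.magnitude );                                         // distance to the way point
    sensor.AddObservation( transform.InverseTransformDirection( toWayPoint.normalized ) ); // direction in local coordinates
    sensor.AddObservation( transform.up );                                                 // lean in world coordinates
    sensor.AddObservation( Vector3.Angle( transform.forward, CurrentWayPoint.forward ) );  // alignment angle
    sensor.AddObservation( Speed );
    sensor.AddObservation( WaistHingeAngle );
    sensor.AddObservation( WaistHingeSpeed );
    sensor.AddObservation( EngineRpm );
}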
3.3.5. Taking Actions and Assigning Rewards¶
When driving towards a way point the wheel loader must control the throttle and the steering. We have chosen to exclude every other possible action (elevate, tilt, brake) since they are not required for the task.
The computed actions are received as an argument in the method OnActionReceived(float[] vectorAction). They are clamped to appropriate ranges and set as control signals on the engine and steeringHinge.
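A sketch of the method; the clamp ranges and the helpers forwarding the signals are assumptions:

// Fragment of the agent class; the helpers are hypothetical.
public override void OnActionReceived( float[] vectorAction )
{
    var throttle = Mathf.Clamp( vectorAction[ 0 ], 0.0f, 1.0f );  // assumed range
    var steering = Mathf.Clamp( vectorAction[ 1 ], -1.0f, 1.0f ); // assumed range
    ApplyThrottle( throttle ); // hypothetical: sets the engine control signal
    ApplySteering( steering ); // hypothetical: sets the steeringHinge control signal
}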
We use a sparse reward function. The agent receives a constant negative reward of \(r_t = -0.0001\) for each time step in which it did not reach the way point, encouraging it to reach the way point quickly. If the agent passes the goal way point it receives a reward that depends on the distance to the way point and on how well the wheel loader is aligned with the way point's forward direction. The reward is defined as,
where \(r_{pos}\) and \(r_{rot}\) are defined as,
and
where \(d\) is the distance to the passed way point, \(f_{\text{w}}\) is the forward direction of the wheel loader and \(f_{\text{p}}\) is the forward direction of the passed way point, both in world coordinates.
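As an illustration of this structure only, a sketch in code. The functional forms and constants are placeholders; only the constant time penalty of \(-0.0001\) is taken from the text:

// Fragment of the agent class; illustrative placeholder shapes.
void AssignReward( bool passedWayPoint, float d, Vector3 fw, Vector3 fp )
{
    if ( !passedWayPoint ) {
        AddReward( -0.0001f ); // constant negative reward per time step
        return;
    }
    var rPos = Mathf.Exp( -d );                         // placeholder: decreases with distance d
    var rRot = 0.5f * ( 1.0f + Vector3.Dot( fw, fp ) ); // placeholder: 1 when forward directions align
    AddReward( rPos * rRot );
}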
3.3.6. Training the Agent¶
After installing the ML-Agents Python API you can train the agent using Soft Actor-Critic (SAC) or Proximal Policy Optimization (PPO). For a faster training session it is recommended to disable the deformable terrain game object and enable the box ground game object instead. It is possible to start a training session that communicates directly with the Unity editor, enabling you to watch the agent fail in the beginning and continuously improve. Run the command mlagents-learn config.yaml --run-id=training_session and press play in the Unity editor.
The file config.yaml specifies hyperparameters for the RL algorithms. We have used:
behaviors:
  Wheel Loader Agent:
    trainer_type: ppo
    hyperparameters:
      batch_size: 2024
      buffer_size: 20240
      learning_rate: 1e-4
      beta: 1e-4
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: constant
    network_settings:
      normalize: true
      hidden_units: 64
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.995
        strength: 1.0
    keep_checkpoints: 200
    checkpoint_interval: 100000
    max_steps: 2e7
    time_horizon: 512
    summary_freq: 10000
environment_parameters:
  wheel_loader_curriculum:
    curriculum:
      - name: close
        completion_criteria:
          measure: reward
          behavior: Wheel Loader Agent
          signal_smoothing: true
          min_lesson_length: 1000
          threshold: 0.85
        value:
          sampler_type: uniform
          sampler_parameters:
            min_value: 3.0
            max_value: 5.0
      - name: further
        completion_criteria:
          measure: reward
          behavior: Wheel Loader Agent
          signal_smoothing: true
          min_lesson_length: 1000
          threshold: 0.90
        value:
          sampler_type: uniform
          sampler_parameters:
            min_value: 4.5
            max_value: 8.0
      - name: furthest
        value:
          sampler_type: uniform
          sampler_parameters:
            min_value: 7.0
            max_value: 12.0
This is also an example of how to use curriculum learning in ML-Agents. We define three different lessons, which control the possible distance to the next way point. Curriculum learning can greatly improve training times in sparse reward environments by making the reward more likely in the beginning and then gradually increasing the task difficulty. It is quite possible to improve these hyperparameters further. For config options, see the ML-Agents documentation.
Training in the editor can be quite slow. Alternatively, it is possible to build the Unity project and specify the resulting executable as the environment with the argument --env=<path to executable>. This avoids the overhead of running the editor. For even faster training sessions, add the arguments --num-envs=N and --no-graphics, where the former starts N separate environments and the latter disables camera renderings. The command can then be mlagents-learn config.yaml --env=Build\Example.exe --no-graphics --num-envs=2 --run-id=training_session. List all possible arguments with mlagents-learn --help.
The training results are saved in the directory results/<run-id>. These include both TensorFlow checkpoints, used for resuming training sessions, and exported <behavior-name>.nn model files. The latter is the final policy network saved in a format used by the Unity Inference Engine. In the editor it is possible to use these trained models for inference, i.e. training of the policy does not continue, but the current policy is used to control the agent. Choose the file as Model in the Behavior Parameters component for the relevant agent.
Finally, it is possible to track the training progress using TensorBoard. Run the command tensorboard --logdir=results, open a browser window and navigate to localhost:6006.

3.4. ML-Agents Robot poking box controller¶
In this ML-Agents example an agent is trained to control an industrial ABB robot. The goal is to move the robot's end effector to a certain pose and remain there. The robot is controlled by setting the torque on the motor at each joint.
The example scene can be downloaded here: unitypackage.
This example was trained using the ML-Agents versions:
com.unity.ml-agents (C#) v.1.4.0.
mlagents (Python) v0.20.0.
mlagents-envs (Python) v0.20.0.
Communicator (C#/Python) v1.0.0.
3.4.1. The Learning Environment¶
The learning environment consists of the robot, the target box and a static ground. The goal for the robot is to match the target box transform with its tool tip: the robot aims for the middle of the box and rotates the tip of the tool so that it aligns with the normal of the green side of the box. The simulation time step is 0.005 s and the length of each episode is 800 steps. The agent takes one decision each time step. When the episode ends, the target box is moved to a new random pose within a limited distance from the previous one. The robot then aims for the new target pose from its current state, giving the agent experience of planning paths from different configurations. Every four episodes the state of the robot is also reset. The current agent was only trained on targets within a certain limited distance in front of the robot.

The robot is also reset if the tool tip collides with the robot. This ends the episode early, reducing the possible reward, which speeds up the early stages of training.
3.4.2. Actions and Observations¶
The observations for the robot are:
Current angle of each hinge joint
Current velocity of each hinge joint
The relative pose of the target compared to the tool tip
The tool tip pose relative to the robot
The tool tip velocity
This adds up to 30 scalars that are stacked for two time steps.
The action in each time step is the torque on each of the six motors, one for every hinge joint on the robot.
3.4.3. Reward¶
The agent is rewarded for being close to the target pose. The reward function is shaped so that the agent starts to receive a small reward from about 0.7 meters away from the target, and the reward then increases exponentially.
The reward based on position is calculated as
The reward based on rotation is calculated as
where \(q\) is the quaternion between the tool and the target. The final reward is then
where \(c\) is a constant for scaling the reward.
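As an illustration only (the constants and exact forms are placeholders, not the values used in the example), the rotation term can be sketched from the angle of the relative quaternion \(q\):

// Placeholder sketch: Quaternion.Angle gives the angle (in degrees) of
// the relative rotation between the tool tip and the target.
float RotationReward( Quaternion tool, Quaternion target )
{
    var angle = Quaternion.Angle( tool, target );
    return Mathf.Exp( -angle / 45.0f ); // placeholder decay constant
}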
3.5. Deck Crane¶
The Deck Crane demonstrates the use of wires and some very useful constraints between rigid bodies. This scene is part of the video tutorial Modeling a crane with wires, available here: YouTube - Modeling a crane with wires. The Unity content starts at this timestamp.
The tutorial shows the workflow of modelling a complete crane system starting from a CAD model. It utilizes Algoryx Momentum for the modelling of the crane parts, including dynamic properties such as joints, materials, rigid bodies and collision shapes.
The example scene can be downloaded here: unitypackage
3.6. Grasping Robot¶
This scene illustrates the use of the DIRECT solver for frictional contacts, which allows for dry and robust contact friction. The robot is controlled using a keyboard or a gamepad. This example uses the Input System package. For more information, see Section 3.2.9.
The robot model has been created in Algoryx Momentum and imported into AGXUnity as a prefab.
The package containing the scene can be downloaded here: AGXUnity_GraspingRobot.

3.6.1. Control using Gamepad¶
Right Stick Y - Moves the robot arm up/down
Left Stick X - Move the robot arm left/right
Left Stick Y - Move The robot arm in/out (from the base)
D-Pad (X/Y) - Controls the lower hinges which move the lower part of the robotic arm.
Button A/B - Open/Close jaw
Right/Left Bumper - Rotate wrist left/right
3.6.2. Control using Keyboard¶
PageUp/Down - Moves the robot arm up/down
Left/Right - Move the robot arm left/right
Up/Down - Move The robot arm in/out (from the base)
W/S/A/D - Controls the lower hinges which move the lower part of the robotic arm.
Z/C - Open/Close jaws
E/Q - Rotate wrist left/right
3.7. Articulated Robot¶
This scene demonstrates the Articulated Root component, which makes it possible to place Rigid Body instances in a hierarchical structure.
The model is an FBX model of a jointed robot system including two fingers for grasping.
The package containing the scene can be downloaded here: AGXUnity_ArticulatedRobot.

3.7.1. Control using Gamepad¶
Left/Right trigger - open/close grasping device
Left/Right Shoulder - Rotate Hand
Left Horizontal - Rotate base
Left Vertical - Shoulder up/down
Right Vertical - Elbow
Right Horizontal - Wrist1
D-Pad vertical - Wrist2
D-Pad horizontal - Wrist3
3.7.2. Control using Keyboard¶
A/D - rotate base joint
S/W - rotate shoulder joint
Q/E - rotate elbow joint
O/P - rotate wrist1
K/L - rotate wrist2
N/M - rotate wrist3
V/B - rotate hand
X - close pincher
Z - open pincher
3.8. Excavator on terrain¶

This scene demonstrates an excavator with tracks operating on the AGX Dynamics Deformable Terrain.
The setup of the terrain is done in the same way as in the Wheel Loader on Terrain example.
The completed scene including all the assets needed to recreate the example can be downloaded here: unitypackage
For the setup of the input/steering, we refer to the Wheel Loader example.
The tracks are controlled via a drivetrain configuration including a combustion engine. For more information, see the implementation in the Engine class.
3.8.1. Control of camera¶
By default, the camera follows the excavator using the LinkCamera.cs script. Pressing F1 activates the FPSCamera script, allowing for a free-roaming camera.
F1 - Toggle FPSCamera
Left/Right - Move left/right
Up/Down - Move forward/backward
3.8.2. Control using Gamepad¶
Right Stick X - Boom up/down
Right Stick Y - Move bucket
Left Stick X - Swing left/right
Left Stick Y - Stick up/down
D-Pad (X/Y) - Drive forward/backward/left/right
3.8.3. Control using Keyboard¶
PageUp/Down - Boom up/down
Insert/Delete - Move bucket
T/U - Swing left/right
Home/End - Stick up/down
Up/Down - Forward/Backward
Left/Right - Turn Left/Right