
To begin my journey of learning to use NVIDIA Isaac, I decided to start with the reinforcement learning sample provided by NVIDIA. In this sample, the goal is to train a virtual version of NVIDIA's Jetbot to follow a road by building a simulated environment in Isaac to use for training. This tutorial assumes you already have the Isaac Docker container deployed and working (maybe we want Ernesto to create some documentation for the actual install as well?).

We will run our Isaac instance remotely and live stream the GUI output so we can watch training happen in real time. While running Isaac locally would be more convenient, this setup lets us use server hardware to run Isaac at its full potential.

In order to edit the files we will be working with in a persistent manner, we need to set up our container as described here. Please follow the steps NVIDIA lays out for this. Once that is done, you will also be able to edit your files through an IDE like Visual Studio, which makes development more convenient.

You will also need to install the Omniverse Kit Remote Client in order to access the live stream of Isaac Sim. Follow the guide here to download the client. We will come back to actually using it later in this tutorial.

Go ahead and SSH into the machine on which you are hosting the Isaac Docker container. If you followed the preparation steps, you should see a directory called python_samples in ~/docker/isaac-sim/. Your directory structure may vary slightly, but it should look something along those lines. This directory contains the files we need for our reinforcement learning sample problem, and we will have to edit some of them slightly to make our livestream work.

In the training script, on line 31, change args.headless to True. Then add the following lines:

```python
omniverse_kit.set_setting("/app/window/drawMouse", True)
ext_manager.set_extension_enabled_immediate("", True)  # enable the livestream extension named in the tutorial
ext_manager.set_extension_enabled_immediate("", True)  # enable the livestream extension named in the tutorial
omniverse_kit.set_setting("/app/livestream/proto", "ws")
```

Your script should look like the screenshot above. These lines start the livestream when our training script starts; comment them out before running if you want to train headless (no GUI).

Next, change the number after "offset" from 180 to 90. If the road tiles are not oriented correctly when we import them later in the tutorial, come back to these lines and change this value to other increments of 90 until they load correctly. Your script should look like the screenshot below when you are done.

Make sure you have the reinforcement learning tutorial open to reference as you go through this process (linked here).
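
The original post relies on screenshots that are not reproduced here, so below is a rough sketch of what the edited portion of the training script might look like. It is only a sketch under assumptions, not the sample file itself: the file name (e.g. jetbot_train.py), the OmniKitHelper import path, and the two extension names (omni.kit.livestream.core and omni.kit.livestream.native, the usual native-livestream pair for Isaac Sim of this era) are my assumptions and should be checked against the actual script from NVIDIA's tutorial.

```python
# Minimal sketch of the livestream edits described above. The import path and
# extension names are assumptions; verify them against the sample script.
import argparse

from omni.isaac.python_app import OmniKitHelper  # import path may differ between Isaac Sim versions

parser = argparse.ArgumentParser()
parser.add_argument("--headless", action="store_true")
args = parser.parse_args()
args.headless = True  # the "line 31" edit: run Kit without a local window on the server

# The real sample passes a larger config dict (resolution, renderer, etc.).
omniverse_kit = OmniKitHelper({"headless": args.headless})

# omni.* modules can only be imported once the Kit helper has started the app.
import omni.kit.app

# These lines start the livestream when the training script starts.
# Comment them out to train fully headless (no GUI at all).
omniverse_kit.set_setting("/app/window/drawMouse", True)
omniverse_kit.set_setting("/app/livestream/proto", "ws")

ext_manager = omni.kit.app.get_app().get_extension_manager()
# Assumed extension names for the native livestream; copy the exact strings
# from the tutorial script if they differ.
ext_manager.set_extension_enabled_immediate("omni.kit.livestream.core", True)
ext_manager.set_extension_enabled_immediate("omni.kit.livestream.native", True)
```

With lines like these in place, launching the training script inside the container should start the livestream automatically, and we will connect to it from a workstation with the Omniverse Kit Remote Client later in the tutorial.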
