
By Bobby Carlton

Robotics, AI, and a slew of other cutting-edge technologies will reshape our world. But how soon will that happen?

Caio Viturino works here at FS Studio as a Simulations Developer and is incredibly focused and passionate about how robotics and artificial intelligence will change everything from warehouse automation to our everyday lives. His robotics journey started when he was an undergraduate in Mechatronics Engineering between 2010 and 2015.

Along with his amazing work here at FS Studio, Viturino is also a PhD student in the Electrical Engineering Graduate Program at the Federal University of Bahia (UFBA) in Salvador, Brazil, supervised by Prof. Dr. André Gustavo Scolari Conceição, and a researcher at the Laboratory of Robotics at UFBA.

With industries increasingly looking to robots and AI to play a crucial role in how we work and socialize, we thought it would be important to learn more about what he does and dig into where he thinks the technology is heading.

During our interview, Viturino first explained how he ended up on this path with robotics, saying, "Shortly after my bachelor's degree, I started a master's degree in Mechatronics Engineering in 2017 with an emphasis on path planning for robotic manipulators. I was able to learn about new robotic simulators during my master's, like V-REP and Gazebo, and I also got started using Linux and Robot Operating System."

Caio Viturino and his Robot

In 2019, Viturino started a Ph.D. in Electrical Engineering with a focus on robotic grasping. He primarily used ROS (Robot Operating System) to work with the UR5 from Universal Robots, and Isaac Sim to simulate the robotic environment. "In my work, I seek to study and develop robotic grasping techniques that are effective with objects with complex geometries in various industrial scenarios, such as bin picking."

The Tools and Why

Viturino was first hired as a consultant here at FS Studio in July 2022 to work on a project for Universal Robots using Isaac Sim. After the conclusion of that work, he was hired to work on artificial intelligence and robotics projects related to scenario generation, quadruped robots, and robotic grasping.

He tells me that he primarily uses the following for most of his research:

PyBullet - An easy-to-use Python module for physics simulation, robotics, and deep reinforcement learning, based on the Bullet Physics SDK. With PyBullet you can load articulated bodies from URDF, SDF, and other file formats.
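As a quick illustration of that workflow, here is a minimal sketch of loading a URDF and stepping the physics, using the sample assets that ship with the pybullet_data package:

```python
import pybullet as p
import pybullet_data

# Start a headless physics server (use p.GUI for a visual window).
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

# Load a ground plane and an articulated robot from bundled URDF files.
plane_id = p.loadURDF("plane.urdf")
robot_id = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])

# Step the simulation at the default 240 Hz for one simulated second.
for _ in range(240):
    p.stepSimulation()

print(p.getBasePositionAndOrientation(robot_id))
p.disconnect()
```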

Isaac Sim - A scalable robotics simulation application and synthetic data generation tool that powers photorealistic, physically-accurate virtual environments to develop, test, and manage AI-based robots.
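Isaac Sim is likewise scripted from Python. A minimal headless session, assuming Isaac Sim's bundled Python environment and the omni.isaac.core API of recent releases, looks roughly like this:

```python
# Run inside Isaac Sim's Python environment; the SimulationApp must be
# created before importing any other omni.isaac modules.
from omni.isaac.kit import SimulationApp

simulation_app = SimulationApp({"headless": True})

from omni.isaac.core import World

# Build a minimal physics world with a ground plane and step it.
world = World()
world.scene.add_default_ground_plane()
world.reset()
for _ in range(100):
    world.step(render=False)

simulation_app.close()
```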

Isaac Gym - Provides a basic API for creating and populating a scene with robots and objects, supporting loading data from URDF and MJCF file formats.
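A bare-bones Isaac Gym scene, based on the preview-release gymapi API (the asset directory and URDF file below are placeholders), looks something like this:

```python
from isaacgym import gymapi

# Acquire the gym interface and create a PhysX-backed simulation.
gym = gymapi.acquire_gym()
sim = gym.create_sim(0, 0, gymapi.SIM_PHYSX, gymapi.SimParams())
gym.add_ground(sim, gymapi.PlaneParams())

# Load an articulated robot from URDF (paths here are placeholders).
asset = gym.load_asset(sim, "./assets", "urdf/robot.urdf", gymapi.AssetOptions())

# Create one environment and place the robot actor in it.
env = gym.create_env(sim, gymapi.Vec3(-1, 0, -1), gymapi.Vec3(1, 1, 1), 1)
gym.create_actor(env, asset, gymapi.Transform(), "robot", 0, 1)

# Step the physics.
for _ in range(100):
    gym.simulate(sim)
    gym.fetch_results(sim, True)
```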

I asked Viturino about his current work with PyBullet, Isaac Sim, and quadrupeds learning to walk. Why is this work important to him, and why is robotics important in general?

"Robots will not be a replacement for the human labor force but will aid in difficult or repetitive tasks," said Viturino. Just recently Amazon announced their new AI powered robot called Sparrow, designed to do exactly what Viturino is saying here.

He then tells me that for these robots to perform these tasks, it is necessary to develop their kinematic and dynamic models and to test route-planning algorithms so that the robot can go from point A to point B while avoiding static and dynamic obstacles, among other difficult tasks.
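To make the route-planning idea concrete, here is a generic grid-based A* sketch (not Viturino's code) that finds a collision-free path from A to B around blocked cells:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + heuristic((nr, nc)),
                                          cost + 1, (nr, nc), [*path, (nr, nc)]))
    return None  # no collision-free route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall of obstacles
```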

These algorithms will require time and significant investment to implement in real-world scenarios. Robotic simulators will lower these costs and risks by enabling all of these algorithms to be tested in simulation before being implemented on actual hardware.  

NeRFs
NeRF Drums

In a previous post on the FS Studio blog, Viturino and I talked about NeRFs. One question I had for him was how NeRFs and robotics combined will change the world of automation, and whether there is a way to speed up the creation of robotic simulations.

"Robotic simulations are being used more frequently as a means of training and testing mobile robots before deploying them in the real world. This is known as sim2real. For instance, we could create a 3D model of a warehouse and then train various robots in that environment to plan routes, recognize objects, and avoid collisions with dynamic obstacles."  

One thing to mention is that the process isn't that simple. Modeling an environment can take a lot of time and money; NeRFs can help a lot in this regard, since they let us obtain a 3D model of the surrounding area easily and quickly.

Robotics with Grasping, Trajectory Planning and Deep Learning

When asked about his passion for robotic grasping, trajectory planning, and deep learning, Viturino tells me that deep learning enables the practical and effective use of several algorithms that would otherwise only work in specific environments or situations. For instance, a classic robotic grasping algorithm needs the physical properties of objects, such as mass and dynamic and static attributes, to work. These properties are impossible to obtain when considering unknown objects.

Artificial intelligence allows robots to perform grasping tasks without worrying about physical properties that are difficult to obtain, and these algorithms are getting better at working with any object in any environment.
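To make that interface concrete: a learned grasp pipeline typically consumes a depth image and returns a grasp pose directly, with no mass or friction parameters anywhere. The sketch below is purely illustrative; its naive closest-point picker merely stands in for a trained network:

```python
import numpy as np

def predict_grasp(depth_image: np.ndarray):
    """Return a planar grasp (row, col, angle_rad, width_m) from a depth image.

    Stand-in heuristic: grasp the closest point with a fixed-width,
    axis-aligned gripper. A trained network would regress all four values.
    """
    row, col = np.unravel_index(np.argmin(depth_image), depth_image.shape)
    return row, col, 0.0, 0.05

# Fake 4x4 depth image in meters; the object's closest point is at (1, 2).
depth = np.full((4, 4), 0.8)
depth[1, 2] = 0.5
print(predict_grasp(depth))  # -> (1, 2, 0.0, 0.05)
```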

However, there is still a lot to explore before we have a complete solution to all robotic problems, or, to put it another way, a single algorithm that plans routes, performs grasps, identifies obstacles, and more, in the style of DeepMind. In addition, the computational performance and reliability of these algorithms still limit their practical use. That said, Viturino explains that the gap between industry and academia has been closing significantly over the past few years.

How Far Are We from Robotic Help and Companionship?

When we think of modern-day robots for normal everyday life, we think of things such as an iRobot Roomba vacuum that keeps our floors clean, or something like the Piaggio My Gita robot that follows you around and does things like carry your groceries or your computer. But truthfully, we would all love the day when we can have our own astromech droid like R2-D2 as an on-the-fly problem solver and companion throughout the day. I asked Viturino about this. How far are we from it?

"I think we have a gap where the pieces still don't fit together completely. Imagine that each component of this puzzle is a unique algorithm, such as an algorithm for understanding emotions, another for identifying people, controlling the robot's movements, calculating each joint command, and determining how to respond appropriately in each circumstance, among others."

According to Viturino, each of the pieces still needs to be very carefully built and developed so that they can all be assembled and fit together perfectly. "I think we won't be too far from seeing what's in sci-fi movies, given the exponential growth of AI in the last decade." Granted, we won't get something like R2-D2 anytime soon, but you could paint a My Gita robot to look like R2!

But it does take me to my next question. I personally own an Anki Vector AI robot. He's been in my family since the beginning, and we've all come to really love Vector's presence in the house. I wanted to know Viturino's thoughts on robots like Vector, Roomba, and My Gita becoming more popular as consumer products.

He explains that this greatly depends on how well this type of technology is received by the general public. The younger generation is more receptive to it. Price and necessity are also important considerations when purchasing these robots.

Viturino then says that the robotics community will need to demonstrate that these robots are necessary, much like our cellphones, and are not just a novelty item for robotics enthusiasts like us. This technology should be democratized and easily accessible to all. 

A company in Brazil by the name of Human Robotics is heavily focused on building robots for commercial use in hotels and events, as well as for domestic use, such as caring for and monitoring elderly people. However, he doesn't think the population is completely open to this technology yet.

He's right, there's still some hesitation on using robots for daily tasks, but there is some traction.

AI, SLAM, LiDAR, Facial Tracking, Body Tracking: What Else Will Be Part of the Robotic Evolution?

Viturino focuses on one part of this question, saying that as artificial intelligence advances, we will use simpler sensors. Today, it is already possible to create a depth image with a stereo RGB camera, or to synthesize new views from sparse RGB images (NeRFs). But he believes the day will come when we will need only a single camera to get all data modalities.
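The stereo-to-depth step he mentions is already routine; with OpenCV it takes only a few lines (the file names and calibration numbers below are placeholders):

```python
import cv2
import numpy as np

# Rectified stereo pair (grayscale); file names are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth follows from Z = f * B / d, with focal length f (pixels) and
# baseline B (meters) taken from the camera calibration.
f, B = 700.0, 0.06  # placeholder calibration values
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = f * B / disparity[valid]
```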

"There are other technologies, particularly in autonomous vehicles, such as passive thermal cameras. Despite it, the technology is restricted by armies and governments, and the cost is high. However, it may be a promise for the future."

As we came to the end of our conversation, one thing Viturino brought up is that simulation allows us to develop, test, and go beyond imagination without fear of damaging robots, which could cost a lot of money (or, as he jokes, "dismissal and an unpayable fine, depending on the damage, haha"). Once we've tested our ideas in simulation, we're ready to deploy the software on the hardware.

As for his work in robotics and AI, and closing the gap between what's possible now and the future we hope for, he believes NVIDIA is developing ever-more-accurate simulations through its PhysX library, which is now open source as of version 5.1. As a result, the gap between simulation and reality will close more and more, increasing the reliability of robotic applications.

"We are in an era where we must be bold and creative to overcome the limits already reached, with agility and teamwork."  

You can learn more about Caio and his work by checking out his GitHub page.

This Meta SDK will bring hand interactions to your XR experiences.

By Dilmer Valecillos

Originally posted on LearnXR.io

Today I am super excited to share an announcement about a new SDK which I believe will be a huge addition for anyone who wants to work with VR or Passthrough with Oculus. Also, be sure to watch THIS HAND INTERACTION SDK VIDEO; trust me, it will be worth your time.

The Oculus Interaction SDK is a library of hand and controller interaction components that provides very realistic interactions and makes it easy to use prefabs when building games or apps for virtual reality, with or without Passthrough features.

📌 To get started with the Interaction SDK, be sure to check out this document with requirements and download links: https://developer.oculus.com/documentation/unity/unity-isdk-interaction-sdk-overview/

The PokeExamples scene showcases the PokeInteractor on various surfaces with touch limiting

The following sample scenes are available in the Interaction SDK:

👉 Basic Grab scene which showcases the HandGrabInteractor.

👉 Complex Grab scene which showcases the simpler GrabInteractor but with the addition of Physics, Transforms, and Constraints on objects.

👉 Basic Ray scene which showcases ray interactions with a Unity canvas.

👉 Basic Poke scene which showcases UI interactions such as buttons, scrollable areas, and box-based proximity fields.

👉 Basic Pose detection scene which demonstrates pose detection for several common hand poses such as Thumbs Up, Thumbs Down, Rock, Paper, Scissors, and Stop.

Let me know if you have any questions after watching this announcement, and know that I will personally be covering every single feature available in the SDK. I am working closely with Oculus to make sure I have all the info I need for future videos.

This announcement and video series also includes source code examples, which you can access on GitHub today.

Thanks everyone, and enjoy 🙂 time to play with XR!

Dilmer

Today I had a unique Xcode scenario crop up that caused multiple files in my project to appear as missing (red in the project file list). First, the scenario… the project is held in a git repo, and development is done using a modified git-flow technique. As such, we are often merging feature branches into the develop branch and vice versa.


After merging the develop branch into a feature branch and then returning to the develop branch, numerous source files appeared as missing. I verified the files still existed on the filesystem, deleted them from the project, and added them back in, with no change in file status in Xcode; they still appeared missing. I then deleted them again, re-added them to the project (still missing), and tried to adjust the location setting of each file to be "Relative to Project". This did not work either. I made another attempt to add the files and then manually selected the location of each file on the filesystem. Although the files were there, Xcode still showed them as missing.


Seeing no other option than to re-create the project file by creating a duplicate project with a different name, I came across a fix. I created the new project and then proceeded to move all of the header files into one of the new project's subdirectories via the Terminal. As soon as the header files were moved, all of the missing .cpp files in the original project showed up (file names turned from red to black). I then moved the header files back to their original location on the filesystem, and my original project was back in business.


I do not know what the underlying cause of this corruption was, but in the end the fix was to move the files out of their original directory on the filesystem and then move them back. Hopefully this saves another developer time trying to figure out why Xcode chose to incorrectly mark some files as missing.
