
By Bobby Carlton

One of the most significant benefits of automation and robotics as part of your workforce is increased efficiency.

The warehouse is a critical part of the supply chain: it is where inventory is stored and orders are fulfilled. In recent years, many industries have been introducing technology such as VR/AR/MR, digital twins, real-time simulation, 3D AI, automation, and robotics into the warehouse. This trend has been driven by the need to improve efficiency and productivity while reducing costs and keeping human employees safe.


Robotics and automation can help improve accuracy in picking and packing orders. For example, Amazon is turning toward robotics to assist its employees by taking on more of the tedious and repetitive tasks found in its warehouses. Robots can also help reduce the time it takes to fulfill an order, and in some cases they can even reduce the amount of inventory that needs to be stored in the warehouse.


There are many different types of robots that can be used in the warehouse. The most common type is the articulated robot. These robots have a series of joints that allow them to move freely around the warehouse and are often used for tasks such as picking and packing orders.

Another type of robot often used in warehouses is the gantry robot. Gantry robots are mounted on a fixed frame and move along a set path, and are typically used for tasks such as loading and unloading trucks.

You'll also find line follower robots used in many warehouses. Simply put, these robots use a line to guide them through their daily tasks such as delivering product to bins or sending product off for shipping.
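The control idea behind a line follower is simple enough to sketch in a few lines. Below is a minimal, hypothetical example of the proportional steering logic such a robot might use; the sensor values and gain are invented for illustration and don't correspond to any particular hardware.

```python
def steering_correction(left_sensor: float, right_sensor: float,
                        gain: float = 0.5) -> float:
    """Proportional controller for a two-sensor line follower.

    Each sensor reports 0.0 (no line seen) to 1.0 (fully on the line).
    A positive result steers right, toward the side that sees more of
    the line; a negative result steers left.
    """
    error = right_sensor - left_sensor
    return gain * error

# The right sensor sees most of the line, so the robot steers right.
correction = steering_correction(left_sensor=0.1, right_sensor=0.9)
```

In a real robot this correction would be fed to the wheel motors on every loop iteration, nudging the robot back over the guide line as it drifts.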

Robots are not the only form of automation that is being used in warehouses. There are also a number of automated storage and retrieval systems (AS/RS) that are being deployed. These systems use a variety of technologies such as XR, lasers, sensors, and conveyors to automate the movement of inventory within the warehouse.



What is important to note here is that the warehouse industry is in the midst of a major transformation. Thanks to advances in technology such as AR and VR, robotics, digital twinning, real-time simulation and 3D AI, warehouses are becoming increasingly automated, with robots and other automated systems taking on an ever-increasing share of the workload. This shift is being driven by a number of factors, including the need for greater efficiency, accuracy, and safety.

One of the most significant benefits of automation is increased efficiency. Automated systems can work around the clock, without breaks or vacations, and can complete tasks much faster than human workers. In addition, automated systems are less likely to make mistakes than human workers, which can lead to significant savings in terms of time and money.

Another benefit of automation is improved safety. Automated systems can eliminate or reduce many of the hazards associated with traditional warehouse work, such as lifting heavy objects or working with dangerous chemicals. In addition, automated systems can be designed to meet or exceed all relevant safety standards.

Video by FS Studio

Finally, automation can help improve the overall accuracy of warehouse operations. By eliminating human error, automated systems can help ensure that inventory is always accurate and that orders are filled correctly, which has a positive impact on Industry 4.0 goals. This can lead to happier customers and fewer returns.

Automated systems and robotics are becoming increasingly common, as they offer a number of benefits over traditional manual labor. These benefits include increased efficiency, improved safety, and enhanced accuracy. As the cost of automation decreases and the benefits continue to increase, it's likely that we'll see even more warehouses turning to technologies such as XR and digital twinning to improve how automation and robotics fit into the warehouse environment in the years to come.

By Bobby Carlton

Robotics, AI, and a slew of other cutting-edge technologies will re-shape our world. But how soon will that happen?

Caio Viturino works here at FS Studio as a Simulations Developer and is incredibly focused on, and passionate about, how robotics and artificial intelligence will change everything from warehouse automation to our everyday lives. His robotics journey started when he was an undergraduate in Mechatronics Engineering between 2010 and 2015.

Along with his amazing work here at FS Studio, Viturino is also a PhD student in the Electrical Engineering Graduate Program at the Federal University of Bahia (UFBA) in Salvador, Brazil, supervised by Prof. Dr. André Gustavo Scolari Conceição, and a researcher at the Laboratory of Robotics at UFBA.

With industries increasingly looking to robots and AI to play a crucial role in how we work and socialize, we thought it would be important to learn more about what he does and dig into where he thinks these technologies are heading.

During our interview Viturino first explains how he ended up on this path with robotics saying, "Shortly after my bachelor's degree, I started a master's degree in Mechatronics Engineering in 2017 with an emphasis on path planning for robotic manipulators. I was able to learn about new robotic simulators during my master's, like V-REP and Gazebo, and I also got started using Linux and Robot Operating System."  

Caio Viturino and his Robot

In 2019 Viturino started a Ph.D. in Electrical Engineering with a focus on robotic grasping. He primarily uses ROS (Robot Operating System) to work with the UR5 from Universal Robots, and Isaac Sim to simulate the robotic environment. "In my work, I seek to study and develop robotic grasping techniques that are effective with objects with complex geometries in various industrial scenarios, such as bin picking."

The Tools and Why

At first, Viturino was hired as a consultant here at FS Studio in July of 2022 to work on a project for Universal Robots using Isaac Sim. After the conclusion of this work, he was hired to work on projects involving artificial intelligence and robotics that are related to scenario generation, quadruped robots, and robotic grasping. 

He tells me that he primarily uses the following for most of his research:

PyBullet - An easy-to-use Python module for physics simulation, robotics, and deep reinforcement learning based on the Bullet Physics SDK. With PyBullet you can load articulated bodies from URDF, SDF, and other file formats.

Isaac Sim - A scalable robotics simulation application and synthetic data generation tool that powers photorealistic, physically-accurate virtual environments to develop, test, and manage AI-based robots.

Isaac Gym - A basic API for creating and populating a scene with robots and objects, supporting loading data from URDF and MJCF file formats.

I asked Viturino about his current work with PyBullet, Isaac Sim, and quadrupeds learning to walk. Why is this work important to him, and why is robotics important in general?

"Robots will not be a replacement for the human labor force but will aid in difficult or repetitive tasks," said Viturino. Just recently Amazon announced their new AI powered robot called Sparrow, designed to do exactly what Viturino is saying here.

He then tells me that for these robots to perform these tasks, it is necessary to develop their kinematic and dynamic models and to test route-planning algorithms so that the robot can go from point A to point B while avoiding static and dynamic obstacles, among other difficult tasks.
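The route planning Viturino describes, getting from point A to point B around obstacles, can be illustrated with a toy planner. The sketch below runs breadth-first search over a made-up occupancy grid to find a collision-free path; real planners work in continuous space and must also respect the robot's kinematic and dynamic models.

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search for a path of (row, col) cells.

    grid: 2D list where 1 marks an obstacle and 0 free space.
    Returns the list of cells from start to goal, or None if blocked.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no collision-free route exists

# A made-up 3x3 "warehouse" with a wall across the middle row.
warehouse = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
route = plan_route(warehouse, (0, 0), (2, 0))
```

The planner is forced around the right-hand end of the wall, exactly the kind of detour a warehouse robot makes around a shelving unit.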

These algorithms will require time and significant investment to implement in real-world scenarios. Robotic simulators will lower these costs and risks by enabling all of these algorithms to be tested in simulation before being implemented on actual hardware.  

NeRF Drums

In a previous post on the FS Studio blog, Viturino and I talked about NeRFs. One question I had for him was how NeRFs and robotics combined will change the world of automation, and whether there is a way to speed up the creation of robotic simulations.

"Robotic simulations are being used more frequently as a means of training and testing mobile robots before deploying them in the real world. This is known as sim2real. For instance, we could create a 3D model of a warehouse and then train various robots in that environment to plan routes, recognize objects, and avoid collisions with dynamic obstacles."  

One thing to mention is that the process isn't that simple. Modeling an environment can take a lot of time and money; NeRFs can help a great deal here, since they let us quickly and easily obtain a 3D model of the surrounding area.

Robotics with Grasping, Trajectory Planning and Deep Learning

When asked about his passion for robotic grasping, trajectory planning, and deep learning, Viturino tells me that deep learning enables the practical and effective use of several algorithms that would otherwise only work in specific environments or situations. For instance, a classic robotic grasping algorithm needs physical properties of objects, such as mass and dynamic and static attributes, to work. These properties are impossible to obtain when considering unknown objects.

Artificial Intelligence allows robots to perform grasping tasks without worrying about physical properties that are difficult to obtain. These algorithms are getting better at working with any object and in every environment.     

However, there is a lot left to explore before we find a complete solution for all robotic problems, or to put it another way, a single algorithm that plans routes, executes grasps, and identifies obstacles, among other things, in the style of DeepMind. In addition, the computational performance and reliability of these algorithms still limit their practical use. That said, Viturino explains that the gap between industry and academia has been closing significantly over the past few years.

How Far Are We from Robotic Help and Companionship

When we think of modern-day robots that fit into our everyday lives, we think of things like an iRobot Roomba vacuum keeping our floors clean, or Piaggio's My Gita robot that follows you around carrying groceries or your computer. But truthfully, we'd all love the day when we can have our own astromech droid like R2-D2 as an on-the-fly problem solver and companion. I asked Viturino about this. How far are we from that?

"I think we have a gap where the pieces still don't fit together completely. Imagine that each component of this puzzle is a unique algorithm, such as an algorithm for understanding emotions, another for identifying people, controlling the robot's movements, calculating each joint command, and determining how to respond appropriately in each circumstance, among others."

According to Viturino, the pieces still need to be very carefully built and developed so they can be assembled and fit together perfectly. "I think we won't be too far from seeing what's in sci-fi movies, given the exponential growth of AI in the last decade." Granted, we won't get something like R2-D2 anytime soon, but you could paint a My Gita robot to look like R2!

But it does take me to my next question. I personally own an Anki Vector AI robot. He's been in my family since the beginning and we've all come to really love Vector's presence in the house. I wanted to know Viturino's thoughts on more robotics like Vector, Roomba and My Gita becoming more popular as a consumer product.

He explains that this greatly depends on how well this type of technology is accepted by the general public. The younger generation is more receptive to it. Price and necessity are also important considerations when purchasing these robots.

Viturino then says that the robotics community will need to demonstrate that these robots are necessary, much like our cellphones, and are not just a novelty item for robotics enthusiasts like us. This technology should be democratized and easily accessible to all. 

A company in Brazil by the name of Human Robotics is heavily focused on building robots for commercial use in hotels and events as well as domestic use, such as caring for and monitoring elderly people. However, he doesn't think the population is completely open to this technology yet.

He's right, there's still some hesitation on using robots for daily tasks, but there is some traction.

AI, SLAM, LiDAR, Facial Tracking, Body Tracking. What Else Will Be Part of the Robotic Evolution?

Viturino focuses on one part of this question, saying that he thinks that as artificial intelligence advances, we will use simpler sensors. Today, it is already possible to create a depth image with a stereo RGB camera, or to synthesize new views from sparse RGB images (NeRFs). But he believes the day will come when we will only need a single camera to get all data modalities.
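The stereo-camera depth Viturino mentions comes from basic triangulation: depth equals focal length times baseline divided by disparity. The sketch below illustrates that relationship; the camera parameters are made up for illustration and don't describe any particular sensor.

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth in meters of a point seen with the given pixel disparity.

    disparity_px: horizontal shift of the point between the two views.
    focal_length_px: camera focal length, expressed in pixels.
    baseline_m: distance between the two camera centers, in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A point shifted 40 px between views of a rig with an 800 px focal
# length and a 10 cm baseline lies 2 m away.
depth = depth_from_disparity(40.0, 800.0, 0.10)
```

Note the inverse relationship: nearby objects produce large disparities, while distant ones produce disparities too small to measure reliably, which is why stereo depth degrades with range.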

"There are other technologies, particularly in autonomous vehicles, such as passive thermal cameras. For now, though, that technology is restricted to militaries and governments, and the cost is high. Still, it may hold promise for the future."

As we come to the end of our conversation, one thing Viturino brings up is that simulation allows us to develop, test, and go beyond imagination without fear of damaging robots and equipment, which could cost a lot of money (or worse, depending on the damage, he jokes). After we've tested our ideas in simulation, we're ready to deploy the software on the hardware.

As for his work in robotics and AI, and closing the gap between what's possible now and the future we hope for, he points out that NVIDIA is working to develop ever-more accurate simulations through its PhysX library, now available as open source in version 5.1. As a result, the gap between simulation and reality will continue to close, increasing the reliability of robotic applications.

"We are in an era where we must be bold and creative to overcome the limits already reached, with agility and teamwork."  

You can learn more about Caio and his work by checking out his GitHub page.

By Dilmer Valecillos and Bobby Carlton

A lighter and more powerful VR headset is here, but should you get the Meta Quest Pro?

We got our hands on the Meta Quest Pro, Meta's $1,500 enterprise MR (mixed reality) headset, and over the last few days we've been testing some of its new features, such as using the device with the Immersed application for a monitor-free work experience, with and without color passthrough.

Check out our unboxing video as our Head of R&D, Dilmer Valecillos, looks at the headset from a developer's perspective and runs a few tests with the Movement SDK, including body tracking, face tracking, and eye tracking, as well as deploying demos built for the Meta Quest 2 to this new mixed reality device.

There are a lot of really amazing features built into the headset, but will the Quest Pro replace physical monitors and become something we see on everyone's workstation? It's hard to say. However, the MR headset shows a lot of potential, and it is easy to imagine some people using the Quest Pro as one of their daily work tools in the same way that we use computers, tablets, mobile devices, websites, and various apps throughout the work day.

One way Meta made the Quest Pro more enterprise friendly was by reducing the weight of the headset with pancake lenses. These lenses allow a shorter gap between smaller panels, which makes for a much slimmer and lighter headset. They also deliver improved clarity, 25% sharper in the center and 50% sharper at the edges, which solves the "god rays" issue (a specific type of lens flare that looks a bit like a sunbeam shining through the clouds right into your eye) that previous VR headsets had.

Quest Pro
Image by Meta

In the video posted on the FS Studio YouTube page, Dilmer looks at how hand tracking and facial tracking are greatly improved on the headset. At the moment, the Quest Pro handles upper-body tracking very well, but there is also code in the headset's SDK suggesting that full-body tracking is not far off. This could be huge for helping employees with work tasks and collaboration.

Keep in mind that the Meta Quest Pro is designed for enterprise use; however, you are able to play VR games on the headset. Resolution Games is currently working on a cool AR demo called Spatial Ops that uses the headset's full-color passthrough to turn your real-world environment into an awesome multiplayer shooter.

We are just scratching the surface of what the Meta Quest Pro can do, and as our team spends more time with the headset, we will be able to share more in-depth thoughts on how the Quest Pro can succeed as a work tool for industries focused on automation and robotics, whether employees are out in the field or at a desk.

By Bobby Carlton

The next generation of haptic gloves from HaptX arrives.

Since launching its first product, the HaptX DK2, in January 2021, HaptX has been pushing the envelope in haptic technology and how it can improve the way XR is used as an enterprise solution. Now it's time for the public to get their hands on the company's next generation of gloves, the HaptX G1, a ground-breaking device designed for large-scale deployments across many industries.

The new HaptX Gloves G1 offer a number of features designed to meet the needs of their users, including wireless mobility, improved ergonomics, and multiuser collaboration.

“With HaptX Gloves G1, we’re making it possible for all organizations to leverage our lifelike haptics,” said Jake Rubin, founder and CEO of HaptX, in an official press release. “Touch is the cornerstone of the next generation of human-machine interface technologies, and the opportunities are endless.”

HaptX Gloves G1 leverage advances in materials science and the latest manufacturing techniques to deliver the first haptic gloves that fit like a conventional glove while providing the precise tactile feedback needed for jobs that demand that kind of accuracy.

The flexible and soft materials used in the production of the HaptX G1 provide a level of comfort and dexterity not found in other products. To ensure a good fit, the G1 glove is available in several sizes: Medium, Large, and Extra Large.

Built into the glove are hundreds of actuators that expand and contract against specific parts of your hand to provide a realistic sense of touch when you interact with virtual objects. For example, if you were to hold a wrench in VR or AR, the actuators in the glove would press against your physical hand to convince you that you were actually holding a real wrench.

To do this, the G1 utilizes a wireless Airpack, a lightweight device that generates compressed air and controls its flow to drive the actuators. The Airpack can be worn on your body in backpack mode or placed on a table for standing or seated applications.

A single charge gives the Airpack about three hours of use, making it well suited for military, educational, and enterprise applications.

Image from HaptX

The HaptX SDK provides developers with a variety of features that make it easier to create custom applications for any industry or workflow. One of these is its advanced feedback technology, which can be used to simulate the microscale textures of various surfaces. The G1 also comes with plugins for platforms such as Unreal Engine and Unity, as well as a C++ API.

According to Joe Michaels, HaptX's Chief Revenue Officer, many organizations resort to using game controllers when developing their metaverse strategies, even though controllers are ineffective at providing touch feedback. With the G1's ability to provide realistic real-time feedback, businesses now have a better option for improving their operations.

To celebrate the G1's launch, the company is currently taking pre-orders through its website. For a limited time, customers can get a pair of G1 gloves for $5,495, and they can save money by purchasing a bundle of four sizes.

HaptX G1 Palm
Image by HaptX

In addition to pre-ordering and a discounted bundle option, the company is also introducing a subscription program that provides customers with a comprehensive service and support plan. The subscription includes the Airpack, the SDK, and a maintenance and support package.

The subscription for the HaptX Gloves G1 starts at $495 a month. Pre-ordering allows customers to make a small deposit toward the cost of the gloves; once the G1 is delivered, they can select their subscription options.

The G1 is expected to ship in the third quarter of 2023. To learn more about the G1 and its subscription model, visit the HaptX website.

By Bobby Carlton

With its network perfectly synchronized with the real world, Digital Schiene Deutschland (Digital Rail for Germany, DSD) can run optimization tests and “what if” scenarios to test and validate changes in the railway system, such as reactions to unforeseen situations.

The German railway company Deutsche Bahn is building a digital twin of its railway network that will allow it to monitor and improve the performance of its 20,500 miles of track and its stations. Through an interconnected network of sensors and cameras, plus AI through Nvidia Omniverse, the railway can analyze the collected data to identify the causes of operational issues and improve performance.

Deutsche Bahn
Image from Nvidia

A digital twin can provide a quick overview of what's going wrong, and it can also help you prevent it. With the help of AI, you can learn how to fix issues and make the whole system work better. For instance, an AI can analyze a process, uncover design flaws, and identify the root cause of a problem. It can also help you schedule regular inspections and maintenance of machinery through predictive maintenance.
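As a rough illustration of the predictive-maintenance idea, a digital twin might watch a stream of sensor readings from a component and flag it for inspection when a rolling average drifts past a limit. The vibration readings, window size, and threshold below are invented purely for illustration.

```python
def needs_inspection(readings, window=3, limit=0.8):
    """Flag a component when the mean of its last `window` readings
    (e.g. normalized vibration amplitude) exceeds `limit`."""
    if len(readings) < window:
        return False  # not enough data to judge yet
    recent = readings[-window:]
    return sum(recent) / window > limit

# Hypothetical vibration histories for two track-side components.
healthy = [0.2, 0.3, 0.25, 0.3]
wearing = [0.2, 0.4, 0.7, 0.9, 1.0]

needs_inspection(healthy)  # stays quiet
needs_inspection(wearing)  # flags the component for maintenance
```

Production systems replace the fixed threshold with learned models of normal behavior, but the structure is the same: the twin turns raw sensor streams into a maintenance decision before the part fails in service.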

“With NVIDIA technologies, we’re able to begin realizing the vision of a fully automated train network,” said Ruben Schilling, who leads the perception group at DB Netz, part of Deutsche Bahn in an official Nvidia press release. "The envisioned future railway system improves the capacity, quality and efficiency of the network."

That said, it’s important to not underestimate the real-time aspect of AI’s role with digital twinning in industry 4.0. According to David Crawley, a professor at the University of Houston's College of Technology, the university collaborated with other organizations to develop a digital twin that can be used in its digital oilfield laboratory.

He noted that an oil rig worker in the South Pacific was able to use AR headgear to show an engineer how to fix a faulty part of the equipment without shutting down the operations.

According to Crawley, the use of AI in the metaverse allows people to engage in activities that mirror what they're actually doing in the real world using AR, VR, or WebXR. For instance, a worker hundreds of miles away can use a device like a Magic Leap 2 headset to fix a pipe or identify a problem with a valve.

There's also a symbiotic relationship between AI and digital twins that exists in an industrial metaverse.

“AI is ultimately the analysis of data and the insights we draw from it,” said Lisa Seacat DeLuca, then a distinguished engineer and director of Emerging Solutions at IBM, during an interview with VentureBeat. “The digital twin provides a more effective way to monitor and share that data, which feeds into AI models. I boldly state that you can’t have AI without digital twins because they can bring users closer to their assets and enable them to draw better, more accurate and more useful insights.”

A digital twin can be built using data collected by various sensors and IoT devices. Aside from providing more data points, the digital twin can also help improve the AI's performance by allowing it to run more effective simulations.
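That sensor-to-twin flow follows a simple pattern: a twin object mirrors incoming IoT readings from a physical asset and exposes a snapshot for simulations or AI models to consume. The sketch below illustrates the idea; the asset ID, sensor names, and values are hypothetical.

```python
class DigitalTwin:
    """Mirrors sensor readings from one physical asset."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}      # latest reading per sensor
        self.history = []    # every update, kept for later analysis

    def ingest(self, sensor, value, timestamp):
        """Record one reading reported by the physical asset."""
        self.state[sensor] = value
        self.history.append((timestamp, sensor, value))

    def snapshot(self):
        """Current state, e.g. the starting point of a simulation run."""
        return dict(self.state)

# Hypothetical readings from a track-side sensor package.
twin = DigitalTwin("track-segment-42")
twin.ingest("temperature_c", 21.5, timestamp=0)
twin.ingest("vibration", 0.12, timestamp=1)
twin.ingest("temperature_c", 22.1, timestamp=2)
```

The snapshot feeds "what if" simulations, while the accumulated history is exactly the kind of dataset the AI models described above learn from.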

Deutsche Bahn Chief Technology and Innovation Officer Rolf Härdi noted that the company can collect enough data for its AI to perform more impactful simulations and provide predictions that will help Deutsche Bahn be more efficient.

David Crawley explained how a digital twin can be used to perform predictive-maintenance analyses on a train's components, noting that because of his knowledge of how these components work, he can use the digital twin to model maintenance scenarios.

When created at such a large scale, a digital twin can become a massive undertaking. You need a strategy and a roadmap for a custom-built 3D pipeline that connects the computer-aided design datasets within your ecosystem to high-definition 3D maps and various simulation tools. In this case, Deutsche Bahn used the Universal Scene Description (USD) 3D framework via Nvidia Omniverse to connect and combine data sources into a single shared virtual model.

Through digital twinning and data collected by IoT sensors, Crawley and his team were able to identify areas where operations could be improved. For instance, by analyzing a train's speed along with weather conditions, he was able to identify where Deutsche Bahn could improve its service to customers.

By Bobby Carlton

New technologies will allow people to interact with the world around them in various ways.

For some time now, people have been captivated by the notion of how new technologies will change the way we work, socialize, seek out entertainment, and approach education. This has led to new ideas about how to build a better computing system for today's digitally connected world. Web 3.0 and spatial computing are the innovations that will bring that vision to life.

Although some argue that Web 3.0 is here thanks to AR/VR technologies, others feel it's still in development but just around the corner. The fact is that the core components of Web 3.0 are here thanks to innovations such as AI, blockchain, VR/AR, IoT, and 5G.

Web 3.0 aims to drastically expand the utility of the internet, which has evolved from its text-based origins to a more interactive and socially consumed form of content creation. These technologies will allow people to experience a more intelligent and user-friendly digital world.

Web 3.0

Despite the technological advancements of the past few years, the user experience on the web has remained largely a 2D one. XR (VR/AR) and other similar technologies will allow people to experience a more accurate, interactive, and user-friendly digital world.

Spatial computing aims to digitize our 2D content and turn it into 3D worlds, transforming it into digital twins that are more accurate and user-friendly. The idea is that this will allow people to interact with the virtual world around them through VR/AR and AI.

There are a lot of different names for this approach. The most popular has been the metaverse, but a number of other terms are being used as well.

A report released in 2020 by Deloitte stated that the spatial web (the term Deloitte uses) is the next evolution in information technology and computing. It follows the development of Web 1.0 and 2.0, and it will eliminate the boundary between physical objects and digital content.

Image from Forbes

The term spatial refers to the idea that digital information will be integrated into your physical, real-world space, becoming an inseparable part of it.

In an article published on Singularity Hub, Peter Diamandis, a prominent Silicon Valley investor, stated that the world will be filled with rich, interactive, and dynamic data over the next couple of years, allowing people to interact with their surroundings in new ways. The article also noted that the spatial web will transform various aspects of our lives, including education, retail, advertising, and social interaction.

The spatial web is built on the various technological advancements that have occurred over the past few years, such as Artificial Intelligence (AI), VR, blockchain, and IoT. These technologies are expected to have a significant impact on the development of the digital world and Web 3.0.

The four major trends shaping the digital world are expected to combine into a single meta-trend, moving computing into the space between the physical and digital worlds and allowing future systems to interact with the world around them in entirely new ways.

The sensors and robotic systems used in these virtual worlds will collect and store data to support the spatial web. That data will in turn feed reports and other applications, letting individuals interact with the world around them and providing businesses with data-rich KPIs.

For instance, in the warehousing industry, traditional methods of picking and transporting orders have been used to successfully accomplish the task of navigating through millions of square feet of warehouse space. With the increasing number of websites that promise next day delivery, warehouses are constantly looking for new ways to improve their efficiency.

Through the use of robotics, automation, and various data points such as the locations of cameras and sensors, such a system can create 3D maps of warehouses. It can also suggest the ideal warehouse layout based on data collected from human workers, run "what if" scenarios, improve employee training, uncover "hidden factories", and streamline workflow. This method can increase efficiency by up to 50%.

Another positive is that this technology can help companies reduce turnover and improve employee satisfaction. It can also give employees a greater sense of accomplishment by allowing them to do their jobs more efficiently.

Although the spatial web offers only a small glimpse of what the future holds for business, it is important to note that many of the technologies involved are still in their early stages of development. According to a Baystreet article, the smart world of tomorrow relies on four lenses designed to create a seamless and harmonious interaction between man and machine.

The spatial web is a framework that aims to enable interoperability between various subsectors and technologies, helping create a network where all of these technologies work together seamlessly. This will allow the ideologies of Human 2.0, Society 5.0, Industry 4.0, and Web 3.0 to become reality.

Smart factories will allow workers to collaborate in a virtual environment where they can work together seamlessly. This can help them improve efficiency and create a better experience for their customers. With the help of technologies such as AR and VR, you can take a full-scale model of your company's product and visualize its various components in a room.

After the design has been created, it will be made available to the machines and robotic systems used to create a digital twin. Combined with AI and other advanced technologies, these systems will be able to track and automate the various parts of the product as it moves through your factory.

Image from FS Studio

From there, it becomes a cascade of efficiencies. Your products will be loaded and delivered to your customers on time thanks to automation, robotics, and a well trained workforce.

Retail outlets will use Web 3.0, XR tools, and digital information to create an improved in-store shopping experience. Through smart mapping and routing technology, retailers can make shopping more efficient, helping customers map the most direct path to the items they want, and helping stores with product placement.

We are not totally there just yet, but as technologies improve, we are seeing more adoption across many industries. Yes, there is an up-front investment to buy into Web 3.0 and spatial computing, but for those who educate themselves in digital twinning, the metaverse, true-to-life virtual spaces - or whatever you'd like to call it - the payoff stands out: employee safety, improved workflow, and ROI.