
By Bobby Carlton

XRAI Glass launches its real-time Augmented Reality closed captioning app to users globally using Nreal glasses.

With AR glasses, people who are deaf or who experience hearing loss can now read speech as real-time closed captions in their field of view. XRAI Glass has launched a suite of solutions that allow users to experience the world through AR, according to an article at Auganix.com.

The software, which is called XRAI Glass, converts audio into captions that are displayed on the user's AR glasses. It can also recognize who is speaking and translate conversations in nine different languages.
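
To make the captioning idea concrete, here is a minimal sketch of live speech-to-text using the open-source SpeechRecognition Python package and Google's free web API. This is purely an illustration of the concept, not XRAI Glass's actual engine, and the parameters are assumptions.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Capture a short phrase from the default microphone
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)   # calibrate against background noise
    print("Listening...")
    audio = recognizer.listen(source, phrase_time_limit=5)

try:
    # Send the audio to a speech-to-text service and print the "caption"
    caption = recognizer.recognize_google(audio)
    print("Caption:", caption)
except sr.UnknownValueError:
    print("Could not understand the audio")
```

A production system like XRAI Glass layers streaming recognition, speaker identification, and translation on top of a loop like this, then renders the text in the glasses rather than a console.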

To use the app, users need a set of Nreal Air AR glasses, which are tethered to a mobile device. Through its partnership with Nreal, XRAI Glass is able to offer users a device for viewing conversations. After pairing the app with their Nreal Air glasses, users see the real world enhanced with digital captions.

Image from XRAI

On top of real-time audio transcription, the XRAI Glass app also includes the following features:

Personal assistant:

With the command "Hey XRAI," users can access an AI-powered personal assistant and ask questions, such as what the weather is like in their area, much like Siri or Alexa. The answer is then automatically displayed on the glasses, where only the wearer can see it.

Conversation recall:

You'll also be able to recall a conversation from the previous day by saying, "Hey XRAI, what was I asked to pick up yesterday?"

Translation:

XRAI Glass can transcribe and subtitle conversations in nine of the world's most spoken languages: English, Mandarin, French, German, Italian, Japanese, Korean, Portuguese, and Spanish. The company plans to roll out more languages in the near future.

Nreal Air. Image from Nreal

In a previous announcement, the company revealed that its software would soon be able to detect variations in pitch, accent, and tone of voice, which play a huge part in how we communicate and will shape Web3 experiences.

Dan Scarfe, the founder and CEO of XRAI Glass, said that the company was thrilled to announce the availability of its technology worldwide. He noted that the company's goal is to provide a solution that will help people with hearing loss connect with their communities. Through its partnership with various organizations, such as DeafKidz International, the company was able to test the product and learn from its users.

Due to the capabilities of XRAI Glass, the company has been able to help more people than it initially thought possible. For instance, neurodivergent people who have difficulty processing sound and speech have also benefited from the technology. And as we become more digitally connected, technologies like IoT continue to reshape our personal and work lives.

Another way the XRAI Glass and Nreal partnership could help is in work environments that are extremely loud with background noise but where verbal communication is still important. Workers could use the glasses as a secondary layer of communication to make sure everyone is getting the correct information. The captions can also be accessed through an app on an Android device.

This is important as more and more industries are turning to technologies such as digital twinning, AI, AR, and VR to be more productive and efficient.

Through its software, XRAI Glass records conversations so that users can easily recall past interactions. The company offers several subscription plans: Essential, Premium, and Ultimate. The Essential plan, which is free, provides users with a basic screen duplicate mode and unlimited transcription.

The Premium plan, priced at £19.99 a month, includes 30 days of conversation history and unlimited transcription, along with additional features such as 3D support and translation into nine languages.

The Ultimate plan, which costs £49.99 a month, comes with everything that the Premium plan has. It also includes unlimited conversation history, cloud-enhanced transcription, and a personal AI assistant.

For more information about XRAI Glass, check out the company's website.

By Bobby Carlton

Robotics, AI, and a slew of other cutting-edge technologies will re-shape our world. But how soon will that happen?

Caio Viturino works here at FS Studio as a Simulations Developer and is incredibly focused on and passionate about how robotics and artificial intelligence will change everything from warehouse automation to our everyday lives. His robotics journey started as an undergraduate in Mechatronics Engineering between 2010 and 2015.

Along with his amazing work here at FS Studio, Viturino is also a PhD student in the Electrical Engineering Graduate Program at the Federal University of Bahia (UFBA) in Salvador, Brazil, supervised by Prof. Dr. André Gustavo Scolari Conceição, and a researcher at the Laboratory of Robotics at UFBA.

With industries looking to robots and AI to play a crucial role in how we work and socialize, we thought it would be important to learn more about what he does and dig into where he thinks the technology is heading.

During our interview Viturino first explains how he ended up on this path with robotics saying, "Shortly after my bachelor's degree, I started a master's degree in Mechatronics Engineering in 2017 with an emphasis on path planning for robotic manipulators. I was able to learn about new robotic simulators during my master's, like V-REP and Gazebo, and I also got started using Linux and Robot Operating System."  

Caio Viturino and his robot

In 2019, Viturino started a Ph.D. in Electrical Engineering with a focus on robotic grasping. He primarily uses ROS (Robot Operating System) to work with the UR5 from Universal Robots, and Isaac Sim to simulate the robotic environment. "In my work, I seek to study and develop robotic grasping techniques that are effective with objects with complex geometries in various industrial scenarios, such as bin picking."

The Tools and Why

Viturino was first hired as a consultant here at FS Studio in July of 2022 to work on a project for Universal Robots using Isaac Sim. After that work concluded, he was hired to work on artificial intelligence and robotics projects related to scenario generation, quadruped robots, and robotic grasping.

He tells me that he primarily uses the following for most of his research:

PyBullet - An easy-to-use Python module for physics simulation, robotics, and deep reinforcement learning based on the Bullet Physics SDK. With PyBullet you can load articulated bodies from URDF, SDF, and other file formats (see the short sketch after this list).

Isaac Sim - A scalable robotics simulation application and synthetic data generation tool that powers photorealistic, physically accurate virtual environments to develop, test, and manage AI-based robots.

Isaac Gym - Provides a basic API for creating and populating a scene with robots and objects, supporting loading data from URDF and MJCF file formats.
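
As a concrete example of the PyBullet workflow mentioned above, here is a minimal sketch that loads a robot from a URDF file and steps the physics. It uses PyBullet's bundled example assets and is only an illustration of the simulator's API, not code from Viturino's projects.

```python
import pybullet as p
import pybullet_data

# Start a headless physics server (use p.GUI to open the visualizer instead)
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # bundled example URDFs
p.setGravity(0, 0, -9.81)

plane = p.loadURDF("plane.urdf")
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])

# Step the simulation for one second of simulated time (240 Hz default timestep)
for _ in range(240):
    p.stepSimulation()

position, orientation = p.getBasePositionAndOrientation(robot)
print("robot base position:", position)
p.disconnect()
```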

I asked Viturino about his current work with PyBullet and Isaac Sim and about teaching quadrupeds to walk. Why is this work important to him, and why is robotics important in general?

"Robots will not be a replacement for the human labor force but will aid in difficult or repetitive tasks," said Viturino. Just recently Amazon announced their new AI powered robot called Sparrow, designed to do exactly what Viturino is saying here.

He then tells me that for these robots to perform these tasks, it is necessary to develop their kinematic and dynamic models and test route-planning algorithms so that the robot can go from point A to point B while avoiding static and dynamic obstacles, among other difficult tasks.
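
To give a feel for what "going from point A to point B while avoiding obstacles" looks like in code, here is a toy A* planner on a 2D occupancy grid. It is a deliberately simplified sketch for illustration, not one of the planners Viturino uses, and the grid is made up.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle)."""
    def h(a, b):                      # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start, goal), 0, start, [start])]
    visited = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_path = path + [(nr, nc)]
                heapq.heappush(open_set, (g + 1 + h((nr, nc), goal), g + 1, (nr, nc), new_path))
    return None                       # no collision-free route exists

# Toy map: plan a route from the top-left corner to the bottom-right corner
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```

Real planners work in continuous space and account for the robot's kinematic and dynamic constraints, which is exactly why simulators are used to validate them before touching hardware.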

These algorithms will require time and significant investment to implement in real-world scenarios. Robotic simulators will lower these costs and risks by enabling all of these algorithms to be tested in simulation before being implemented on actual hardware.  

NeRF Drums

In a previous post on the FS Studio blog, Viturino and I talked about NeRFs. One question I had for him was how NeRFs and robotics combined will change the world of automation, and whether there is a way to speed up the creation of robotic simulations.

"Robotic simulations are being used more frequently as a means of training and testing mobile robots before deploying them in the real world. This is known as sim2real. For instance, we could create a 3D model of a warehouse and then train various robots in that environment to plan routes, recognize objects, and avoid collisions with dynamic obstacles."  

One thing to mention is that the process isn't that simple. Modeling an environment can take a lot of time and money; NeRFs can help a lot in this regard since they let us obtain a 3D model of the surrounding area quickly and easily.

Robotics with Grasping, Trajectory Planning and Deep Learning

When asked about his passion for robotic grasping, trajectory planning, and deep learning, Viturino tells me that deep learning enables the practical and effective use of several algorithms that would otherwise only work in specific environments or situations. For instance, a classic robotic grasping algorithm needs the physical properties of objects, such as mass and dynamic and static attributes, to work. These properties are impossible to obtain for unknown objects.

Artificial Intelligence allows robots to perform grasping tasks without worrying about physical properties that are difficult to obtain. These algorithms are getting better at working with any object and in every environment.     
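
Below is a hedged sketch of what such a learning-based grasping model can look like: a tiny fully convolutional network in PyTorch that maps a depth image to per-pixel grasp quality and gripper angle, loosely in the spirit of pixel-wise grasping networks. The architecture and numbers are illustrative assumptions, not Viturino's model.

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Toy network: depth image in, per-pixel grasp-quality and angle maps out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
        )
        self.quality = nn.Conv2d(16, 1, 1)   # how graspable each pixel is
        self.angle = nn.Conv2d(16, 1, 1)     # gripper rotation to use at each pixel

    def forward(self, depth):
        features = self.encoder(depth)
        return torch.sigmoid(self.quality(features)), self.angle(features)

# Random tensor standing in for a 300x300 depth image of a cluttered bin
depth = torch.rand(1, 1, 300, 300)
quality, angle = GraspNet()(depth)
best = quality.flatten().argmax()                  # pick the highest-scoring pixel
row, col = divmod(best.item(), quality.shape[-1])
print(f"grasp at pixel ({row}, {col}), angle {angle[0, 0, row, col]:.2f} rad")
```

The key point is that the network only needs the depth image; it never requires the object's mass or friction properties, which is what makes this approach practical for unknown objects.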

However, there is a lot to be explored in order to find a complete solution for all robotic problems, or to put it another way, a single algorithm that plans routes, executes grasps, and identifies obstacles, among other things, in the style of DeepMind. In addition, the computational performance and reliability of these algorithms still limit their practical use. Viturino explains that the gap between industry and academia has been closing significantly over the past few years.

How Far Are We from Robotic Help and Companionship

When we think of modern-day robots for everyday life, we think of things such as an iRobot Roomba vacuum that keeps our floors clean, or something like Piaggio's My Gita robot that follows you around and carries your groceries or your computer. But truthfully, we would all love the day when we can have our own astromech droid like R2-D2 as an on-the-fly problem solver and companion throughout the day. I asked Viturino: how far are we from this?

"I think we have a gap where the pieces still don't fit together completely. Imagine that each component of this puzzle is a unique algorithm, such as an algorithm for understanding emotions, another for identifying people, controlling the robot's movements, calculating each joint command, and determining how to respond appropriately in each circumstance, among others."

According to Viturino, the pieces still need to be very carefully built and developed so that they can be assembled and fit together perfectly. "I think we won't be too far from seeing such in sci-fi movies given the exponential growth of AI in the last decade." Granted, we won't get something like R2-D2 anytime soon, but you could paint a My Gita robot to look like R2!

But it does take me to my next question. I personally own an Anki Vector AI robot. He's been in my family since the beginning, and we've all come to really love Vector's presence in the house. I wanted to know Viturino's thoughts on robots like Vector, Roomba, and My Gita becoming more popular as consumer products.

He explains that this greatly depends on how well this type of technology is accepted by the general public. The younger generation is more receptive to it. Price and necessity are also important considerations when purchasing these robots.

Viturino then says that the robotics community will need to demonstrate that these robots are necessary, much like our cellphones, and are not just a novelty item for robotics enthusiasts like us. This technology should be democratized and easily accessible to all. 

A company in Brazil called Human Robotics is heavily focused on building robots for commercial use in hotels and events as well as domestic use, such as caring for and monitoring elderly people. However, he doesn't think the population is completely open to this technology yet.

He's right; there's still some hesitation about using robots for daily tasks, but the idea is gaining traction.

AI, SLAM, LiDAR, Facial Tracking, Body Tracking: What Else Will Be Part of the Robotic Evolution?

Viturino focuses on one part of this question, saying that he thinks that as artificial intelligence advances, we will use simpler sensors. Today, it is already possible to create a depth image with a stereo RGB camera, or to synthesize new views from sparse RGB images (NeRF). But he believes the day will come when we will only need a single camera to get all data modalities.
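
To illustrate the stereo-camera point, here is a short OpenCV sketch that turns a rectified stereo image pair into a rough depth map using block matching. The file names and calibration numbers are placeholders, and it is only a minimal example of the technique, not a production pipeline.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (placeholder file names)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching compares patches along each image row to estimate disparity
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth is inversely proportional to disparity: Z = f * B / d,
# where f is the focal length in pixels and B is the camera baseline in meters.
focal_px, baseline_m = 700.0, 0.12   # placeholder calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
print("median depth (m):", np.median(depth[valid]))
```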

"There are other technologies, particularly in autonomous vehicles, such as passive thermal cameras. Despite it, the technology is restricted by armies and governments, and the cost is high. However, it may be a promise for the future."

As we come to the end of our conversation, one thing Viturino brings up is that simulation allows us to develop, test, and go beyond imagination without the fear of damaging robots and equipment, which could cost a lot of money, or even a dismissal and an unpayable fine depending on the damage, he jokes. Once we've tested our ideas in simulation, we're ready to deploy the software on the hardware.

As for his work in robotics and AI, and closing the gap between what's possible now and the future we hope for, he points to NVIDIA's work on ever more accurate simulations through its PhysX library, which is now available as open source in version 5.1. As a result, the gap between simulation and reality will keep closing, increasing the reliability of robotic applications.

"We are in an era where we must be bold and creative to overcome the limits already reached, with agility and teamwork."  

You can learn more about Caio and his work by checking out his Github page.

By Bobby Carlton

Through the M Mixed Reality initiative, BMW sets its sights on how XR technology will play a role in in-car entertainment and the rise of the passenger economy.

BMW has created a way for drivers to be behind the steering wheel of a moving vehicle while wearing a VR headset designed to enhance the driving experience. It puts the automaker on a new path as it explores how XR technology, passenger experiences, and self-driving vehicles are becoming a reality.

The German automaker recently unveiled a new way for people to experience its M2 through a VR headset while actually driving the car. The M2 project, which is part of the automaker's "M Mixed Reality" initiative, allows people to drive the car for real: foot on the gas pedal, braking, steering, turn signals, even the radio! Except instead of seeing the real world, you're driving through a futuristic city. Check out the video below!

It sounds totally sketchy to be behind the wheel of any car while wearing something over your face that cuts out your real-world environment, but this wasn't designed for normal streets; BMW built the VR experience for the company's test track. The VR software can adapt and re-create the virtual course at locations around the world.

With that said, you could absolutely see something like this for your passengers!

Thanks to computer vision and simultaneous localization and mapping (SLAM), the car has some safety triggers built in to make sure you don't get into an accident. As an added layer of safety, a BMW employee also rides in the passenger seat to watch the road and press an extra brake pedal in front of them.

One company that sees the potential of VR and the passenger economy is Holoride. It first announced its work back in 2019, showing how passengers could access VR experiences as part of in-car entertainment. Since then, the company has improved the experience using HTC's VIVE Flow headset and recently launched its in-vehicle VR entertainment system in Germany. Owners of select Audi vehicles can purchase the Holoride Pioneers' Pack, which includes everything you need to transform your car into an "always-in-motion virtual space" where you can play games, browse the web, and more.

The inspiration for the BMW project came from the company's digital city, known as M Town. Alex Kuttner, the engineer who developed the VR experience, said BMW fans told the company they would want to visit the city if it were a real place.

“M Town is a mindset,” Kuttner said in an official BMW press release. “It’s a town where everything is possible, and that was the moment I realized we aren’t only here for selling products. We’re here for selling emotions and experiences. These two things combined in mixed reality are only the start of something really great in the future.”

Almost two years ago, the company started working on the mixed-reality project, which was initially supposed to be used for the M5 model. Instead, it ended up being built for the M2. According to the company's executives, the project could also be used to help drivers in racing competitions and training courses.

Frank van Meel, the company's CEO, said that the goal of the project was to give employees a chance to explore new ideas without having to think about the business case for each new innovation.

“I think the interesting thing is now we have an answer, and the question is, what is the question to this answer?” van Meel said. “There are so many ideas. We haven’t found the final answers, but we’re working on all of these kinds of ideas.”

Although the experience isn't yet available to the general public, the company invited several prominent gamers and content creators to participate in the development of the virtual reality experience. One of them was Cailee, a popular Twitch streamer and member of the G2 esports team. She said she had previously played video games in VR, but this was the first time she had used it in a real, moving car. She believes other games could also benefit from the technology.

“It’s just the most insane experience I’ve ever, ever had,” she said. “I play Rocket League, I’ve sunk so many hours into it and everything, but I really cannot describe the experience that I had in Munich.”

BMW hosted a demo of the mixed-reality experience in Lisbon. For those who were able to try it, the results were impressive, with everyone noting how closely the virtual course matched the actual driving experience, from speeding up and slowing down to taking turns.

The first lap of the course featured a variety of obstacles that drivers had to avoid. To help gamify the experience, drivers had the task of collecting coins along the way. On the second and third laps, a timed element pushed drivers to accelerate, which gave them the real feeling of racing. The suspension of reality allowed drivers to feel more comfortable with the way they drove.

According to David Hartono, the creative tech director of Monogrid, BMW's interest in gaming is evidenced by the company's decision to turn the vehicle's internal display into a controller. Last month, BMW partnered with AirConsole to let passengers play games using the in-car display. He noted that the company's use of VR technology could help reinforce its image as an innovative and pioneering company.

Sean MacPhedran, a senior director at SCS, a digital agency based in California, praised BMW's mixed-reality experience, saying it was a step up from the traditional ways of demonstrating what it's like to drive its luxury cars. It also highlighted the company's capabilities in a more consumer-friendly manner.

“With BMW doing so much work with mixed reality and Industry 4.0, it’s hard to telegraph that to a consumer,” MacPhedran said. “A consumer doesn’t care about all the stuff you’re doing to make a car that much better. It kind of reminds me of how they show the car in the wind tunnel, but now they’re doing this to show how advanced a car is.”

Several car brands have started experimenting with VR and AR in the past couple of years. In 2017, Lucid, a luxury electric vehicle startup, opened its New York City showroom to allow people to explore its virtual models. In 2022, Porsche and Audi announced that they would be partnering with the startup Holoride to develop in-car VR systems that would be used to give passengers an incredible immersive experience.

Nissan, meanwhile, turned to AR to show potential car buyers how safe its cars are and used VR as a fun, tongue-in-cheek way to start a conversation about recruiting mechanics.

Along with the auto industry, we are also seeing aviation look at VR technology to change the way their customers experience air travel.

According to Mike Ramsey, an analyst at Gartner, car companies are constantly looking for ways to keep up with the technological advancements that are happening in the industry. However, he noted that BMW was one of the first companies to invest in both virtual and augmented reality systems. Despite the company's early involvement in the technology, he believes that the company's use of VR is more about brand building and setting their targets on the rise of XR and passenger experiences.

“It’s one of those things that every single car company is investing in but nobody has figured out what the business value is,” he said. “Augmented reality, virtual reality, all of these technologies. For a company that has a performance-oriented orientation, they’re going to look at that as a way to expand their brand beyond the physical to wherever you travel, so to speak.”

BMW Mixed Reality Experience. Image by BMW

According to Heiko Wenczel, director of the Unreal Engine business at Epic Games, the BMW experience was built using real-time sensors and interactions with the car's surroundings. He said that being able to test and experience something in virtual reality is very beneficial for developing real-world products.

“You can translate that into any part of the manufacturing and automotive world,” Wenczel said. “Like when you design you get real-time feedback automatically, like what that is, and the human scale of designing cars and like understanding what mobility will be in the future needs that kind of interaction in real-time.”

Instead of rushing to develop something, companies often have to find a reason for their actions and how they can make money from it. According to van Meel, it's important to start with a low-budget project to avoid investing too much money. Although BMW wouldn't provide the exact amount of money that it spent on the project, van Meel noted that the company's budget was relatively lean.

“If you take a step back and you say, well, it’s not finished yet, but I can see a lot of creativity and a lot of potential that is still a little bit unclear,” said van Meel. “You just should let it happen if it’s not insanely expensive, of course, because then you need to make decisions right away.”

With more automakers looking at XR technology as part of the in-ride experience and focusing on the passenger economy, and companies like Einride advancing driverless technology, the automotive industry is entering a brand-new phase.

By Bobby Carlton

EXTERMINATE! EXTERMINATE!! No no no! This isn't that kind of robot! Amazon's Sparrow robot is here to help and uses machine learning algorithms and a custom gripper to take on those tedious mind-numbing work tasks.

During the Delivering the Future Conference in Boston, Amazon showed off Sparrow, a new robot that will one day play a crucial role in helping Amazon workers by handling some of the more tedious and mentally draining tasks found in a warehouse environment.

According to Amazon, Sparrow uses artificial intelligence and computer vision to move products before they're put into a crate. In a video shown at the event, the robotic arm was shown picking up a board game, a bottle of vitamins, a set of sheets, and other typical items you'd find in a company warehouse, then placing those items in crates.

Sparrow
Amazon's latest robot called "Sparrow" Image by Amazon

Sparrow's arm has been designed specifically to pick up boxes that are generally uniform in shape. However, according to Jason Messinger, the company's robotic manipulation manager, Sparrow also has the ability to handle items of varying sizes and curvature.

To grab items, the robot uses suction cups strategically placed on the arm, letting it firmly pick up products much the way an octopus uses the suction cups on its tentacles to grab a fish or an object.

Sparrow's suction cups. Image by Amazon

“This is not just picking the same things up and moving it with high precision, which we’ve seen in previous robots,” said Messinger.

In 2012, Amazon acquired the robotics company Kiva Systems for $775 million, and since then it has been adding more robotics to its warehouse infrastructure. Over time, Kiva evolved into Amazon Robotics, the company's in-house incubator of robotic fulfillment systems.

With global initiatives like the Artificial Intelligence Act pushing warehouses to create a safer and more efficient work process through AI, robotics, automation and XR technology, it only makes sense that Amazon, the second largest employer in the U.S. behind Walmart, embraces and adopts more robotics technology for their workflow.

Along with Sparrow, Amazon showed off a fleet of other new robot models, as well as a variety of innovations the company believes will improve the efficiency and effectiveness of its operations.

Using robots like Sparrow isn't about replacing the human workforce. As a matter of fact, Amazon is still very much invested in its human workers. In a recent post on the Amazon blog, the company talked about its employees, saying:

The design and deployment of robotics and technology across our operations have created over 700 new categories of jobs that now exist within the company—all because of the technology we’ve introduced into our operations. These new types of roles, which employ tens of thousands of people across Amazon, help tangibly demonstrate the positive impact technology and robotics can have for our employees and for our workplace. Supporting our employees and helping them transition and advance their career into roles working with our technology is an important part of how we will continue to innovate.

Amazon blog

The company even offers the Amazon Mechatronic and Robotics Apprenticeship, a 12-week classroom apprenticeship program paid for by Amazon and followed by 2,000 hours of on-the-job training and industry-recognized certifications, helping employees learn new skills and pursue in-demand technical maintenance roles in robotics.

The company's vision is to use robotics like Sparrow to reduce its reliance on front-line workers by implementing more automation in its fulfillment centers. This will allow Amazon to improve the efficiency of its operations and lessen its dependence on a strained labor pool. According to a recent Recode report, the company is worried it may run out of workers to hire by 2024.

In June, Amazon unveiled its first fully autonomous robot, which can work alongside warehouse workers. It also introduced other systems that can move packages. The company acquired Cloostermans, which develops warehouse machinery and robotics.

According to Amazon, about 75% of the items that the company's customers receive through its delivery process are handled by robots like Sparrow. So next time you're up late at night scrolling through Amazon and you decide to finally buy those VR accessories, there's a good chance a robot picked the item for you.

By Bobby Carlton

The next generation haptic gloves from HaptX arrives.

Since it launched its first product, the HaptX DK2, in January 2021, HaptX has been pushing the envelope in terms of haptic technology and how it can improve the way XR is used as an enterprise solution. Now it's time for the public to get their hands on the company's next generation of gloves, the HaptX G1, a ground-breaking device designed for large-scale deployments across many industries.

The new HaptX Gloves G1 offer a number of features designed to meet the needs of users, including wireless mobility, improved ergonomics, and multiuser collaboration.

“With HaptX Gloves G1, we’re making it possible for all organizations to leverage our lifelike haptics,” said Jake Rubin, founder and CEO of HaptX, in an official press release. “Touch is the cornerstone of the next generation of human-machine interface technologies, and the opportunities are endless.”

HaptX Gloves G1 leverage advances in materials science and the latest manufacturing techniques to deliver the first haptic gloves that fit like a conventional glove while delivering the precise tactile feedback needed for jobs that demand that kind of accuracy.

The flexible, soft materials used in the production of the G1 provide a level of comfort and dexterity not found in other products. To ensure a good fit, the G1 glove is available in multiple sizes: Medium, Large, and Extra Large.

Built into the glove are hundreds of actuators that expand and contract against specific parts of your hand to provide a realistic sense of touch when you interact with virtual objects. For example, if you were to hold a wrench in VR or AR, the actuators would press against the parts of your hand that would contact the tool, convincing you that you were actually holding a real wrench.

To do this, the G1 uses a wireless Airpack, a lightweight device that generates compressed air and controls its flow to drive the actuators and provide physical feedback. The Airpack can be worn on your body in backpack mode or placed on a table for standing or seated applications.

A single charge of the Airpack provides around three hours of use, making it well suited to military, educational, and enterprise applications.

Image from HaptX

The HaptX SDK provides developers with a variety of features that make it easier to create custom applications for any industry or workflow. One of these is its advanced feedback technology, which can be used to simulate the microscale textures of various surfaces. The G1 also comes with plugins for platforms such as Unreal Engine and Unity, as well as a C++ API.

According to Joe Michaels, HaptX's Chief Revenue Officer, many organizations resort to using game controllers when developing their metaverse strategies, even though controllers are ineffective at providing convincing touch feedback. With the G1's ability to provide realistic touch feedback, businesses can now rely on its capabilities to improve their operations.

To celebrate the G1's launch, the company is currently taking pre-orders for the G1 through its website. For a limited time, customers can get a pair of the G1 for just $5,495. They can also save money by purchasing a bundle of four sizes.

HaptX G1 Palm
Image by HaptX

In addition to pre-ordering and a discounted bundle option, the company is also introducing a subscription program that provides customers with a comprehensive service and support plan. The subscription includes the Airpack, the SDK, and a full maintenance and support package.

The subscription for the HaptX Gloves G1 starts at $495 a month. Pre-ordering lets customers put down a small deposit toward the cost of the gloves; once the G1 is delivered, they can select their subscription options.

The G1 is expected to ship in the third quarter of 2023. To learn more about the G1 and its subscription model, visit the HaptX website.

by Bobby Carlton and Dilmer Valecillos

Unity, MRTK, Needle Tools and 8th Wall are just some of the tools you'll need to develop!

The Meta Quest Pro is available now, and we have already seen some very cool things being teased by developers leading up to its launch. Of course, in order to develop, you need an amazing set of tools. Our Head of R&D, Dilmer Valecillos, took a moment to take a deep dive into some of the top development tools you can use with your Quest Pro headset to develop and launch your own XR experiences.

Image from Meta

In the video, Dilmer gives us a bit of a comparison between the Quest Pro and the recently released Magic Leap 2 MR headset. He also gives us some perspective on color passthrough and what it will mean, not only for the Meta Quest Pro but for XR in general.

Some of the tools mentioned are Unity, Unreal, MRTK 3, Needle Tools, 8th Wall, and Mozilla, along with a brief overview of how to use them for the Quest and deploy your builds.

All of these tools, along with others, will be essential for developing incredible XR experiences on the Quest Pro headset that you can bring into your workforce as a training platform or use for social events and entertainment.

You can expect an even deeper dive into the Meta Quest Pro in upcoming posts here on our blog and on our YouTube page.
