
By Bobby Carlton

On top of being a badass-looking set of wheels, Audi's new concept vehicle has four built-in AR headsets and transforms into a truck.

German carmaker Audi has unveiled a concept vehicle that uses AR glasses to show 3D content in the real world. It also eliminates the need for traditional control panels by allowing users to interact with digital features through the glasses.

Along with the AR feature, the vehicle, called the Activesphere, has a range of over 600 kilometers and no local emissions. It also features an 800-volt charging system and can transform into a pickup truck on demand.

Photo by Audi

Yes, we've made it to the age of transforming cars! Sort of.

The company claims that the AR features will be able to customize the content for both the passengers and the drivers. For instance, drivers will be able to access navigation information while the passengers are surfing the web.

Oliver Hoffmann, a member of the company's technical development board, said that the concept vehicles show the company's vision for the future of premium mobility and in-car virtual tools.

The concept vehicles are equipped with four AR headsets, which are designed to provide the driver and the passenger with a variety of information and services. According to the company, the system can identify the user's focus and provide more detailed data.

“The sphere concept vehicles show our vision for the premium mobility of the future. We are experiencing a paradigm shift, especially in the interior of our future Audi models. The interior becomes a place where the passengers feel at home and can connect to the world outside at the same time," said Oliver Hoffmann, Member of the Board of Management for Technical Development at Audi, adding "The most important technical innovation in the Audi Activesphere is our adaptation of augmented reality for mobility. Audi dimensions creates the perfect synthesis between the surroundings and digital reality.”

The controls for various functions are located directly on the relevant parts of the vehicle; for instance, the audio and AC controls are positioned over the speakers. Through AR technology, the vehicle can also display 3D topography and other traffic information while in off-road mode.

Along with in-car assistance and entertainment, an Activesphere passenger can take the headset out of the vehicle and use it to navigate a bike trail or find the ideal descent while skiing down a mountain. The company noted that information about the car, its battery range, and nearby charging stations can be accessed both inside and outside the vehicle. The system also provides weather forecasts and other advance warnings.

Photo by Audi

The interior offers several notable features, such as an autonomous driving mode that deactivates the steering wheel and dashboard, and a next-gen dashboard that can serve as an extra-large soundbar. Holders containing the four AR headsets are located above the center console.

Of course, Audi is just one of many car manufacturers looking at how to bring XR technology into the driver and passenger experience. BMW actually puts drivers into VR as they test drive its vehicles, and Holoride announced a new system that lets passengers use VR for in-car entertainment through an HTC Vive Flow headset.

For more information and photos of Audi's Activesphere concept car, click here.

By Bobby Carlton

If rumors are correct, you'll be able to say "Hey Siri" to create an AR experience that can be used on the spot or published to Apple's App Store and accessed on the Reality Pro headset and other devices.

Even though Apple hasn't announced an actual mixed reality headset, the rumor mill is swirling that the company will be releasing tech-heavy hardware capable of both VR and AR experiences. On top of the hardware, new rumors about software are emerging.

Although it has been widely reported that the upcoming Apple Reality Pro will have a dedicated app store, it is also rumored that the company wants to let people who don't know how to code create apps for the headset simply by saying "Hey Siri"; those apps could then be published on the App Store.


The report claims that users will be able to create AR apps using Siri, and the assistant will build something based on real-world objects. This would be very useful, as such apps could accurately represent a user's surroundings and let people create solutions on the fly.

For instance, users could create an app that lets them view and interact with virtual animals that move around a room or around various objects in 3D, or spin up a quick AR solution to find a better way to lay out a warehouse.

Bloomberg has previously reported that Apple is working on its own content for the Apple Reality Pro. According to a report from The Information, the company is planning on releasing content that's focused on health and wellness. This could include apps that help users improve their fitness and meditation.

People briefed on Apple's plans for the headset say the company is also planning apps to help users improve their health and wellness. One of the early demos for the device featured a Zen garden.

During one of the early demonstrations for executives, Apple allowed users to explore the pages of Dr. Seuss' book "Oh, the Places You'll Go!" by blending its fantastical setting with the real world.

The Apple Reality Pro is expected to be unveiled this spring and will be available to consumers later this year. Apple is also reportedly planning on releasing support for creating apps through Siri at the same time.

Critics have scoffed at the idea of using Siri to create virtual reality and AR apps for the upcoming Apple Reality Pro, and the skepticism is understandable. However, I don't believe Siri is meant to do all the work. Instead, it appears the assistant would serve as an initial interface to help users start the process.

This would build on Apple's existing VR and AR work on the iPhone, where its Object Capture API already lets users create 3D models of real objects. It's easy to see how this could be part of the company's plans for the headset.

By Bobby Carlton

Even with the news of Microsoft shutting down its mixed reality projects, employees at Toyota see positive results using the Microsoft HoloLens 2 for training and collaboration.

Even though written instructions may be precise and thorough, sometimes a professional can't master a new skill just by reading them. This is where XR technology can help, and it's why Toyota is using it with its workforce.

When Toyota started experimenting with Microsoft's HoloLens, it discovered that the technology could be incredibly useful. In a case study released by both Toyota and Microsoft, David Kleiner, who heads Toyota's research lab in North America, describes a particular use case that shows how the HoloLens can be used to improve the efficiency of his team.

In a logistics center in New Jersey, a Toyota employee was having a hard time finding the right way to install a door edge guard on a Toyota vehicle. To solve the problem, the employee and his colleague connected with a fellow Toyota colleague in California using the HoloLens 2 AR headset. They were able to see through the other person's eyes and learn how to install the door guard.

The instructions were captured using the HoloLens 2 and then added to Microsoft's Dynamics 365 Guides system, which makes it easier for other workers to access the same help.

The case study shows how the technology helped teams improve their efficiency and reach their continuous improvement goals. The study also explained how workers at various logistics centers in the US used the HoloLens 2 for training and collaboration.

Toyota's priority is making sure its teams are safe and have the tools needed to be successful, in a way that is efficient. According to Kleiner, the company's ability to solve problems and train people quickly is critical to bringing new products to market faster. To overcome the limitations of time and location, Toyota is currently using mixed reality and the metaverse to help its employees share knowledge and improve their performance.

Toyota's decision to use the Microsoft HoloLens 2 was also due to Microsoft's approach to the next wave of the “industrial metaverse”. Unlike other companies, Microsoft doesn't just provide hardware to its customers; it offers a complete package that includes everything a company needs to get started with the technology.

Through its various offerings, including the Microsoft Azure and Dynamics 365 platforms, companies can create digital twins of their factories or supply chains. They can also use the cloud to simulate their operations and reduce their environmental impact. With the help of Microsoft's technology, Toyota can draw on the company's extensive ecosystem to help frontline workers improve their performance.

Toyota's goal is to make the HoloLens a "screen" for its frontline workers, enabling them to access all of their company's tools without having to carry a laptop. With the help of the cloud, businesses like Toyota can now provide their employees with a variety of tools and experiences designed to improve their productivity.

"By using mixed reality applications on HoloLens 2, we have a 3D expression of the technical information, which means we free up both our hands. We’re far more efficient and look cool while we’re working," said Hiroshi Sakai, Specialist and Lead of Mixed Reality Service Information at Toyota Motor Corporation

One of the Microsoft tools Toyota uses to improve frontline efficiency is Remote Assist, the remote-assistance app used in the door edge guard case described above.

This tool can be used by frontline workers to bring coworkers into their physical space, enabling them to receive calls and annotations from anywhere. They can also see what the other team members are seeing in real-time through the 3D space. Toyota has been working with Microsoft on the integration of the Remote Assist app and the Microsoft HoloLens since the latter was released in 2016.

Through the partnership, Toyota was able to create an enhanced version of the "Guides" experience for the HoloLens. It allows workers to start a session and get immediate access to a trainer, as well as additional support if they need it.

Before, technicians attached QR codes to the hood of vehicles to guide users through holographic instructions using the HoloLens and Guides. Unfortunately, these holograms often moved when the user moved around a car. To address this issue, Toyota and Microsoft collaborated to develop an "object understanding" solution, which allowed technicians to lock a hologram on a vehicle. This ensures that staff members have a consistent experience.
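Conceptually, that kind of object anchoring comes down to composing rigid transforms: the hologram's pose is authored once relative to the vehicle, and its world pose is re-derived from the car's detected pose every frame, so it stays locked to the car rather than drifting in world space. Here is a minimal sketch of the math (the poses and offsets are invented for illustration; the actual solution runs inside the HoloLens object-tracking stack):

```python
import numpy as np

def pose(t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform with identity rotation and translation t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Hologram offset relative to the car, fixed at authoring time
# (e.g. floating 0.5 m above the hood).
car_T_hologram = pose(t=(0.0, 0.5, 1.2))

# Pose of the car in world space, re-estimated each frame by object tracking.
world_T_car = pose(t=(3.0, 0.0, 4.0))

# The hologram's world pose follows the detected car automatically.
world_T_hologram = world_T_car @ car_T_hologram
print(world_T_hologram[:3, 3])  # [3.  0.5 5.2]
```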

Toyota has continued to find value in Guides as a training tool. According to Kleiner, the company's trainers can now use the tool to let students work independently, which means multiple students can learn at the same time. Training times for individual users have been reduced by up to 50%.

Toyota's innovations have helped push the HoloLens 2 platform further into the enterprise. Kleiner noted that when the technology was first introduced to the company's IT department, it was very easy to implement because it was just another Windows machine.

According to Kleiner, Toyota's team expects Guides to become a key tool for the company's workforce in the years to come. He also noted that the company will continue providing employees with the tools and resources they need to improve their productivity.

Toyota has now been able to deploy a device that is easy to maintain and can be used by every employee. This allows the company's workers to participate in various business conversations.

Image from Toyota

Since this case study was published, it was announced that Microsoft is drastically reducing its AR and VR teams and ending its mixed reality projects.

One of the teams let go was the one that created the Mixed Reality Toolkit, known as the MRTK. This cross-platform system was designed to create spatial anchors in virtual space and worked with various headsets, such as those made by Meta and Microsoft's own HoloLens 2.

It's unclear how this will impact Toyota's work with the HoloLens 2. But given the success Toyota has had using XR as an enterprise solution, the company could shift to headsets such as the Meta Quest Pro, the HTC Vive XR Elite, or the Pico 4, all of which provide full-color passthrough for MR experiences.

There is also the rumored Apple mixed reality headset, which would reportedly pack the technology Toyota needs to continue its XR work.

One thing to keep in mind is that Microsoft isn't totally out of the game when it comes to XR tools. The company will be launching their Microsoft Mesh application sometime in 2023.

By Bobby Carlton

NVIDIA's cuOpt is a game changer for executives looking to increase their investments in automation and digital technologies to improve their organizations' efficiency.

During CES 2023, Nvidia unveiled some incredible new features for its Isaac Sim software that will allow researchers and developers to better train and improve AI robots for tasks in areas such as manufacturing, agriculture, retail, and more.

According to Nvidia, developing AI-based robots requires placing them in realistic environments. With the latest version of Isaac Sim, which is now available, developers can test their models across different operating conditions.

The company's Isaac platform is composed of various tools, such as the ROS module that runs on the robots and the cuOpt software for route optimization. It also includes Sim-ready assets, a toolkit for training models, and the TAO optimization system.

“The Isaac robotics platform is designed to accelerate the development and deployment of all manner of robots, and we have a number of software tools and SDKs that address different parts of this solution,” said Gerard Andrews, product manager for Nvidia’s robotics platform, during a CES briefing.

NVIDIA’s tools are built on the foundation of its AI suite and Omniverse, a platform for creating and operating digital twin applications.


New features

The new features include tools and assets for logistics and warehouse operations, such as a conveyor belt utility and a behavior simulation tool for testing safety systems, along with research tools such as Isaac Gym and Isaac Cortex.

The company's goal is to provide researchers and developers with the necessary tools and resources to improve and develop AI models for various tasks. According to Andrews, the use of simulation will allow them to create a virtual proof of their creations.

Despite the company's progress in simulation, Andrews noted that work remains to be done. That includes improving the capabilities of existing tools, such as Isaac Sim, and creating new ones designed for specific tasks.

“Closing the sim2real gap means the more that the robot performs in simulation like it’s expected to perform in the real world then you are going to get more use cases, more utility, and more value, so we spent a lot of time focusing on how to make our simulations more realistic for that robot user or robot developer,” said Andrews.

He noted that the company also focused on making its tools more flexible and modular. These factors allow the company to provide researchers and developers with the necessary tools and resources to improve and develop AI models for various tasks.

In a recent article on the NVIDIA blog, Erin Rapacki, Senior PMM at NVIDIA Robotics, wrote about how companies can optimize robot route planning using NVIDIA cuOpt for Isaac Sim.

In the article, Rapacki looks at how NVIDIA's cuOpt API enables operations researchers to create real-time fleet routing. It can be used to solve various routing problems, such as job scheduling, robotic route planning, and dynamic rerouting.

The extension for the Isaac Sim simulation environment from NVIDIA includes the cuOpt engine. This component is integrated with the company's Omniverse application.

"Mailroom workers pick up mail and parcels from different stations and deliver them to various recipients. They know that some envelopes are time-sensitive so they use their knowledge to plan routes with the shortest possible delivery time.

This mail delivery puzzle can be mathematically addressed by using techniques from operations research, a discipline that deals with applying analytical models to improve decision-making and system efficiency. The mathematical science behind operations research is also highly applicable to the process modeling and management of robotics, industrial automation, and material handling systems."

Logistics professionals routinely encounter real-time optimization problems such as the traveling salesman problem (TSP), the vehicle routing problem (VRP), and the pickup-and-delivery problem (PDP).

The TSP is the most academic of the three, with the VRP and PDP as more general variants. It asks: “given a list of destinations and distances between each pair of destinations, what is the shortest possible route that visits each destination exactly once and returns to the original location?”
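For intuition, here is the brute-force version of the TSP in a few lines of Python (the distance matrix is made up for illustration; exhaustive search grows factorially with the number of stops, which is why production solvers like cuOpt rely on GPU-accelerated heuristics instead):

```python
from itertools import permutations

# Hypothetical symmetric distance matrix: dist[i][j] = cost to travel i -> j.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def shortest_tour(dist, start=0):
    n = len(dist)
    best_route, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):    # fix the start, permute the rest
        route = (start, *perm, start)         # visit each stop once, return home
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

print(shortest_tour(dist))  # ((0, 1, 3, 2, 0), 18)
```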

Applying the traveling salesman problem in logistics can help reduce the time it takes to move materials from one place to another. For instance, it can be used to improve the efficiency of a manufacturing facility's transportation network.

In addition, robotics companies can use cuOpt both when planning robot deployments and during continuous operation. For instance, during the planning phase of a project, the facility's process layout can help predict throughput requirements. According to Rapacki, this helps ensure a successful project ROI.

The extension for Isaac Sim from NVIDIA allows continuous operation of the robot fleet while it's inside the facility. It can be used to route the vehicles according to various system variables, such as the traffic conditions, obstacles, and peak demand for throughput.

Previously, companies used a lower-fidelity approach called discrete event simulation to design their routing and material handling processes. With cuOpt, they can now use a real-time solution for planning and operating their robots, solving routing problems such as vehicle dispatch and job scheduling.

McKinsey stated that executives are increasing their investments in automation and digital technologies to improve their organizations' efficiency. “More than 60 percent of our respondents reported that they have either implemented or are scaling up digital and automation solutions.”

For instance, if a company builds mobile robots or robotic forklifts, it can model how they move material with varying timeliness compared to people or conveyor belts. To fully understand the systemic differences, it's important to analyze the entire movement of an object from its origin to its destination.

To transform existing processes into robotic operations, a company can use the cuOpt extension for Isaac Sim to analyze and improve each step of robot design and deployment, which Rapacki outlines in three areas below.

Robot readiness:

Redesign of brownfield facilities:

Real-time analytics and rerouting:

To help us understand how this works, Rapacki gives two examples: one in manufacturing and one in warehousing.

Manufacturing

A manufacturing process involves the timely delivery of parts to the downstream steps of a facility. If the parts arrive late, the factory might not be able to produce as many products that day.

Getting the materials to their destination quickly is a critical component of a manufacturing process, and inefficient route planning can lead to delays.

Isaac Sim supports conveyor belt and people simulation. (Source: Nvidia Corp.)

Warehousing

In warehouses, traffic and floor obstacles can delay the movement of mobile robots. They need dynamic rerouting to react to variable conditions, such as when a route is blocked. If the robots get stuck or slow down, they become a constraint or bottleneck that affects the entire operation.
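A toy sketch of that rerouting logic, assuming the warehouse aisles are modeled as a weighted graph (node names and travel times are invented; cuOpt performs this kind of re-solve at fleet scale on the GPU):

```python
import networkx as nx

# Hypothetical aisle graph; edge weights are travel times in seconds.
G = nx.Graph()
G.add_weighted_edges_from([
    ("dock", "A", 5), ("A", "B", 4), ("B", "pick", 3),
    ("dock", "C", 6), ("C", "pick", 8),
])

print(nx.shortest_path(G, "dock", "pick", weight="weight"))
# ['dock', 'A', 'B', 'pick']

# A pallet now blocks aisle A-B: drop the edge and reroute on the fly.
G.remove_edge("A", "B")
print(nx.shortest_path(G, "dock", "pick", weight="weight"))
# ['dock', 'C', 'pick']
```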

The continuous movement of a material is a critical component of a company's operations, and it's important that the robots are always working in the right context. Having the necessary data streams can help floor managers improve the efficiency of their operations.

With the cuOpt extension, a company can easily implement a variety of optimization techniques and improve the efficiency of its operations. It's built on a patent-pending engine that can evaluate and analyze multiple solutions.

Tapping into the performance of NVIDIA's hardware is a key component of the cuOpt extension. With the ability to evaluate thousands of configurations and environments in a short time, a company can quickly improve the efficiency of its processes.

The ability to customize system parameters such as speed of delivery, budget, and robustness can help a company identify the optimal layout for its operations. For instance, in the warehouse and material handling industry, there are specific needs for efficiency and optimization.

For example:

Making the right operational decisions is one of the most critical factors in optimizing operations. With the ability to make dynamic decisions, a company can improve its processes and maximize its output, and the cuOpt extension lets users and robotics companies take action immediately.
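As a rough illustration of what such a configuration sweep might look like, here is a toy parameter search (the throughput model, costs, and parameter ranges are all invented placeholders; in practice each candidate configuration would be an Isaac Sim run scored with cuOpt):

```python
from itertools import product

ROBOT_COST = 50_000   # hypothetical cost per robot (USD)
BUDGET = 400_000      # hypothetical automation budget (USD)

def simulated_throughput(num_robots, speed_mps):
    """Stand-in for an Isaac Sim + cuOpt evaluation; returns picks per hour."""
    congestion = 1.0 / (1.0 + 0.05 * num_robots)  # more robots, more interference
    return num_robots * speed_mps * 60 * congestion

candidates = (
    (simulated_throughput(n, v), n, v)
    for n, v in product(range(2, 11), (1.0, 1.5, 2.0))
    if n * ROBOT_COST <= BUDGET               # respect the budget constraint
)
throughput, n, v = max(candidates)
print(f"best config: {n} robots at {v} m/s -> {throughput:.0f} picks/hour")
```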

This will have a significant impact on the work we do here at FS Studio. For example, here is a list of tools we've used on current and past projects. Future digital twin projects will absolutely take advantage of the cuOpt extension.

NVIDIA's goal is to make its tools more modular and flexible, and focus on making its simulations more realistic for both developers and researchers. As the number of robots deployed on the market continues to increase, the company's efforts will continue to be focused on making its tools more capable of handling the challenges of these new robots.

One of the main factors that contributed to the development of the company's simulation tools is the need to include people in their simulations as workers increasingly interact with robots. This capability allows people to perform certain tasks, such as pushing carts or stacking packages.

“We’re excited about people simulation – the ability to drop characters into the environment and issue commands to those characters and let them take part in a complex event-driven simulation where you can test the software on the robots,” said Andrews.

In the company's initial release, the tools have a variety of predefined behaviors that allow people to perform certain tasks, such as going to a certain location and avoiding obstacles.

Isaac Sim improves RTX LiDAR and sensor support. (Source: Nvidia Corp.)

One of the most important considerations in developing the simulation tools was rendering sensor data more accurately. Using NVIDIA RTX technology, the company was able to give Isaac Sim a physically accurate representation of the data collected by sensors.

“We improved our sensor performance, and specifically for LiDAR, we have ray tracing, which provides accurate performance where the sensor data generated in the simulator starts to mimic and mirror what you’ll get from the real-world sensor,” said Andrews.

According to NVIDIA, ray tracing can provide a more accurate representation of the sensor data in various lighting conditions. It can also support rotating and solid-state configurations. Several new LiDAR models, from vendors such as Slamtec, Ouster, and Hesai, have been added.
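For intuition, here is the core geometric operation behind a ray-traced range return, reduced to a single beam against a single object (the scene is invented; RTX hardware does this for millions of rays per second with full materials and lighting):

```python
import numpy as np

def ray_sphere_range(origin, direction, center, radius):
    """Return the distance along the ray to a sphere, or None on a miss."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - c
    if disc < 0:
        return None                  # beam misses the object entirely
    t = -b - np.sqrt(disc)           # nearest of the two intersections
    return t if t > 0 else None

# LiDAR at the origin firing along +x at a 1 m sphere 10 m away.
rng = ray_sphere_range(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                       np.array([10.0, 0.0, 0.0]), 1.0)
print(rng)  # 9.0 -- the simulated range return
```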

The company's latest release of its simulation tools includes new 3D assets that can be used to build physically accurate environments. These assets can help speed up the process of creating complex simulations.

The latest version of Isaac Sim also comes with new features for researchers working on complex robot programming and reinforcement learning. These include the Isaac Gym and the Isaac Cortex. A new tool called Isaac ORBIT allows researchers to create functional simulation environments for motion planning and robot learning.

Developers of robot operating systems can now use Isaac Sim's upgrades for Windows and ROS 2. According to NVIDIA, this will allow them to create complex simulations of the software.

Source: Nvidia Corp.

Cloud access

NVIDIA's focus on the cloud has grown as it allows users to access the latest version of its software and its applications more easily. Andrews noted that this allows the company to benefit from the scalability and accessibility of the cloud.

The availability of Isaac Sim in the cloud allows researchers working on robotics projects to collaborate more easily, and it can help them train and test virtual robots faster. Developers can now use the Isaac Replicator software to create large datasets for simulating real-world environments, then use the company's platform to implement route planning and fleet task management.

The company's product, known as Replicator, is built on the Omniverse technology platform and can be used to create synthetic data. According to Andrews, it can help researchers train AI models by providing a way to supplement their existing datasets.

“We believe simulation is the critical technology to advance robotics and it will be the proving ground for robots,” said Andrews. “We have numerous customers that are working with us that have shared how they have been able to use Isaac Sim so far.”

According to NVIDIA, over a thousand companies and over a million developers have used various parts of the Isaac ecosystem to develop and test virtual robots. Some of these include companies that have used Isaac Sim to develop physical robots.

Use case examples range from Telexistence’s beverage restocking robots and Sarcos Robotics’ robots that pick and place solar panels in renewable energy installations to Fraunhofer’s development of advanced AMRs and Flexiv’s use of Isaac Replicator for synthetic data generation to train AI models.

To begin using the NVIDIA cuOpt for Isaac Sim extension, use the following resources:

Even though there was hardly any mention of the word "metaverse", CES 2023 still had a big showcase of the latest AR/VR tools, XR solutions for enterprise, and how industries planned on utilizing the technology moving forward. All without using the "M" word!

CES 2023 is officially over, and this year we saw a big focus on how consumers, enterprises, and industries will access AR/VR experiences. What was interesting was the near-total absence of the word "metaverse" from presenters, hardware and software booths, and even attendees.

One of the biggest announcements came from Nvidia and its Omniverse platform, and what it would mean for the future of computing, robotics, automation, and of course, gaming.

Apple wasn't part of CES this year, but that didn't mean people weren't talking about the swirling rumors of a possible Apple mixed reality headset in 2023. At this point it's still a rumor, but that didn't stop the conversation.

Of course, when it comes to XR headsets, HTC created the most buzz with its Vive XR Elite mixed reality headset. The device drew a lot of positive response, and many were surprised by a price tag just over $1,000.

Not only did we see announcements from companies, but we also saw some interesting announcements from industries about how they plan to adopt XR technology moving forward. Tom Emrich, Director of Product at Niantic, put together a few of those announcements on his LinkedIn profile.

Automotive Industry

Beauty Industry

Software Industry


To give us more perspective on CES 2023, here are some additional thoughts on what was announced from Tom Ffiske, editor of the Immersive Wire.

CES 2023 Summary

CES Analysis

HTC unveiled the Vive XR Elite for $1,299, with 110-degree FoV, 4K resolution, and a 90Hz refresh rate.

Sol Reader is an e-ink VR headset that weighs under 100g, has 30 hours of battery life, and will cost $350.

Other notable CES announcements according to Tom Ffiske:

By Bobby Carlton

Apple’s possible XR headset will let you change realities through a digital crown and will have realistic avatar XR video calls.

Apple mixed reality headset rumors are swirling, with many experts convinced that we will finally see a mixed-reality device from the tech giant sometime in 2023. Of course it makes sense that there is a lot of talk, considering CES, the biggest conference focused on new technology, is underway in Las Vegas and every tech expert is there talking about the latest rumors they’ve heard from their secret sources.

Although Apple has not officially announced any product, it is widely expected that the company will finally unveil a mixed-reality headset with a variety of new features. In typical Apple fashion, they wouldn’t do it at CES; if they had a product launch, Apple would hold its own event for it. As of right now, Apple’s next event is WWDC, which is scheduled for June 7, 2023.

Apple CEO Tim Cook (Image from Apple)

Despite the overabundance of new rumors all over social media, it’s still hard to really pinpoint what Tim Cook and Apple will actually release. However a recent report published in The Information gives us a much clearer picture of the rumored headset thanks to an inside source knowledgeable about Apple's upcoming XR headset.

It was a lot of information, and thankfully the folks over at MacRumors went through the detailed article and pulled out all of the good details for us.

Reported Apple MR Specs

Design Rumors

As for price, it’s being reported that Apple could be selling its MR headset at a starting price of around $3,000 depending on its model. 

According to the internal source, the device will draw power from its waist-mounted battery. You’ll also be able to switch out batteries with ease and without powering down the device thanks to the headset having a small backup battery built into it. This feature was reportedly conceptualized by Jony Ive, Apple's former chief designer.


The device is reportedly built with two processors: a main system-on-a-chip (SoC) and a dedicated chip for video processing, something very similar to Qualcomm's AR2 Gen 1 chip.

It is also rumored that the device will have an image signal processor (ISP) that de-warps the distorted images from the external cameras before they reach the headset's displays, which is very important for mixed-reality passthrough. The chip is reportedly built using custom-made memory from SK Hynix.
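For a sense of what that de-warping step does, here is the software equivalent using OpenCV (the camera matrix and distortion coefficients below are invented placeholders; real values come from calibration, and Apple's rumored ISP would do this in dedicated silicon at far lower latency):

```python
import cv2
import numpy as np

img = cv2.imread("passthrough_frame.png")   # hypothetical raw camera frame
h, w = img.shape[:2]

K = np.array([[900.0, 0.0, w / 2],          # focal lengths and principal point
              [0.0, 900.0, h / 2],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.12, 0.0, 0.0])    # k1, k2, p1, p2 distortion terms

# Remap the distorted frame into a rectilinear image suitable for display.
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("passthrough_undistorted.png", undistorted)
```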

The device's headband is reportedly made from a material similar to the Apple Watch's sports bands, but its integrated speakers can be a privacy issue. Developers can use a different type of headband that's designed to work with a Mac.

What is really interesting is that it’s being reported that Apple’s MR headset will have a digital crown, similar to the Apple Watch, and is supposed to allow users to quickly switch between the physical and virtual worlds with a turn of the dial.

It's believed that the device's field of view is 120 degrees, which is wider than that of the Meta Quest Pro and other AR headsets. Through an integrated motor, the headset's lens distance can be adjusted automatically.

According to the report by The Information, the upcoming headset will have a display that shows the user's facial expressions. This feature is said to help reduce the isolation that users feel when they put on the device. The report also claims that the screen will consume little power and provide a low refresh rate, which is similar to what the Apple Watch offers.

It's also reported that Apple will use a 4K micro-OLED display at the center of each eye's view, accompanied by an LG display for the periphery. Eye tracking will reportedly be integrated, which will help reduce the device's power consumption.

In a first for any type of MR headset, the device will feature multiple cameras that can detect the user's legs. It will also reportedly have two LiDAR scanners.

Beyond education, the report claims the headset will feature XR video conferencing, allowing users to interact with each other through realistic avatars. An AI process will reportedly estimate the jaw and eyebrow movements of the user's avatar.

Apple also wants to give you the ability to transition between the Mac screen and the XR display. For example, if you pull the 2D Maps app from the Mac screen, it can show a 3D model of a city in XR mode.

Again, Apple hasn’t made any official announcements. However, this information does come from a very reliable source, and if this is all true Apple could be setting the bar very high for mixed-reality headsets.

HTC Vive XR Elite (Image by HTC)

But again, these are all rumors, and devices such as the Magic Leap, the Meta Quest Pro, and the HTC Vive XR Elite (launching during CES) are available to consumers now. Apple is okay with playing the long game when it comes to certain technology, but it can’t wait too long. XR is an incredibly fast race, and you can’t win the race if you’re not in it.

So maybe Apple will finally jump into the AR/VR headset race in 2023.
