by Bobby Carlton and Dilmer Valecillos
The Meta Quest Pro is available now, and we have already seen some very cool things teased by developers leading up to its launch. Of course, in order to develop, you need an amazing set of tools. Our Head of R&D, Dilmer Valecillos, took a deep dive into some of the top development tools you can use with your Quest Pro headset to develop and launch your own XR experiences.
In the video, Dilmer gives us a brief comparison between the Quest Pro and the recently released Magic Leap 2 MR headset. He also offers some perspective on color passthrough and what it will mean not only for the Meta Quest Pro, but for XR in general.
Some of the tools mentioned here are Unity, Unreal, MRTK 3, Needle Tools, 8th Wall, and Mozilla, along with a brief overview of how to use them for the Quest and deploy your builds.
All of these tools, along with others, will be essential for developing incredible XR experiences on the Quest Pro headset that you can bring into your workforce as a training platform or use for social events and entertainment.
You can expect an even deeper dive into the Meta Quest Pro in upcoming posts here on our blog and on our YouTube page.
Jensen Huang talks about the future of AI, robotics, and how NVIDIA will lead the charge.
By Bobby Carlton
A lot was announced and I did my best to keep up! So let's just jump right in!
During his NVIDIA GTC keynote, NVIDIA CEO Jensen Huang unveiled new cloud services that will allow users to run AI workflows. He also introduced the company's new generation of GeForce RTX GPUs.
During his presentation, Huang noted that the rapid advancements in computing are being fueled by AI, and that accelerated computing, in turn, is becoming the fuel for this innovation.
He also talked about the company's new initiatives to help companies develop new technologies and create new experiences for their customers. These include the development of AI-based solutions and the establishment of virtual laboratories where the world's leading companies can test their products.
Through accelerated computing, Huang noted, AI will be able to unlock the potential of the world's industries.
The New NVIDIA Ada Lovelace Architecture Will Be a Gamer's and Creator's Dream
Enterprises will be able to benefit from new tools based on the Grace CPU and the Grace Hopper Superchip. Those developing the 3D internet will get new OVX servers powered by Ada Lovelace L40 data center GPUs. Researchers and scientists will gain new capabilities through the NVIDIA NeMo LLM Service and Thor, a new automotive superchip delivering over 2,000 teraflops of performance.
Huang noted that the company's innovations are being put to work by a wide range of partners and customers. To speed up the adoption of AI, he announced that Deloitte, the world's leading professional services firm, is working with NVIDIA to deliver new services based on NVIDIA Omniverse and AI.
He also talked about the company's customer stories, such as the work of Charter, General Motors, and The Broad Institute. These organizations are using AI to improve their operations and deliver new services.
The NVIDIA GTC event, which started this week, has become one of the most prominent AI conferences in the world. Over 200,000 people have registered to attend the event, which features over 200 speakers from various companies.
A ‘Quantum Leap’: GeForce RTX 40 Series GPUs
NVIDIA's first major event of the week was the unveiling of the new generation of GPUs, which are based on the Ada architecture. According to Huang, the new generation of GPUs will allow creators to create fully simulated worlds.
During his presentation, Huang showed the audience a demo of the company's upcoming simulation, called "Racer RTX." It is a fully interactive environment rendered entirely with ray tracing.
The company also unveiled various innovations that are based on the Ada architecture, such as a Streaming Multiprocessor and a new RT Core. These features are designed to allow developers to create new applications.
Also introduced was the latest version of its DLSS technology, DLSS 3, which uses AI to generate new frames by analyzing the previous ones. This feature can boost game performance by up to 4x. Over 30 games and applications have already announced support for DLSS 3. According to Huang, the technology is one of the most significant innovations in the gaming industry.
Huang noted that the company's new generation of GPUs, which are based on the Ada architecture, can deliver up to 4x more processing throughput than its predecessor, the 3090 Ti. The new GeForce RTX 4090 will be available in October. Additionally, the new GeForce RTX 4080 is launching in November with two configurations.
Huang noted that NVIDIA Lightspeed Studios used Omniverse technology to create a new version of Portal, one of the most popular games in history. With the help of the company's AI-assisted toolset, users can easily up-res their favorite games and give them a physically accurate depiction. According to Huang, large language models and recommender systems are the most important AI models in use today.
He noted that recommenders are the engines that power the digital economy, responsible for personalizing much of what we see and buy online.
The Transformer deep learning model, introduced in 2017, has led to the development of large language models that are capable of learning human language without supervision.
“A single pre-trained model can perform multiple tasks, like question answering, document summarization, text generation, translation and even software programming,” said Huang.
The company's H100 Tensor Core GPU, built to accelerate Transformer models, is in full production. The systems, which are shipping soon, are powered by the company's next-generation Transformer Engine.
“Hopper is in full production and coming soon to power the world’s AI factories,” Huang said.
Several of the company's partners, such as Atos, Cisco, Fujitsu, GIGABYTE, Lenovo, and Supermicro, are currently working on implementing the H100 technology in their systems. Some of the major cloud providers, such as Amazon Web Services, Google Cloud, and Oracle, are also expected to start supporting the H100 platform next year.
According to Huang, the company's Grace Hopper, which combines an Arm-based Grace CPU with Hopper GPUs, will deliver a 7x increase in fast-memory capacity and a massive leap for recommender systems.
Weaving Together the Metaverse: L40 Data Center GPUs in Full Production
Huang noted that the future of the internet will be further enhanced by 3D. The company's Omniverse platform is used to develop and run these metaverse applications.
He also explained how powerful new computers will be needed to connect and simulate the worlds that are currently being created. The company's OVX servers are designed to support the scaling of metaverse applications.
The company's second-generation OVX servers will be powered by the Ada Lovelace L40 data center GPUs.
Thor for Autonomous Vehicles, Robotics, Medical Instruments and More
Today's cars are equipped with various computers for cameras, sensors, and infotainment. In the future, these functions will be delivered by software that can improve over time. To power these systems, Huang introduced the company's new product, Drive Thor, which combines the company's Grace, Hopper, and Ada architectures.
The new Thor superchip, capable of delivering up to 2,000 teraflops of performance, will replace the company's previous product, Drive Orin. It will also be used in applications such as medical instruments and industrial automation.
3.5 Million Developers, 3,000 Accelerated Applications
According to Huang, over 3.5 million developers have created over 3,000 accelerated applications using the company's software development kits and AI models. The company's ecosystem is also designed to help companies bring their innovations to the world's industries.
Over the past year, the company has released over a hundred software development kits (SDKs) and introduced 25 new ones. These new tools allow developers to create new applications that can improve the performance and capabilities of their existing systems.
New Services for AI, Virtual Worlds
Huang also talked about how the company's large language models are the most important AI models currently being developed. They can learn to understand various languages and meanings without requiring supervision.
The company introduced the NeMo LLM Service, a cloud service that allows researchers to train their AI models on specific tasks. To help scientists accelerate their work, the company also introduced BioNeMo LLM, a service that allows them to create AI models that can understand proteins, DNA, and RNA sequences.
Huang announced that the company is working with The Broad Institute to create libraries designed to help scientists use the company's AI models. These libraries, such as BioNeMo and Parabricks, can be accessed through the Terra Cloud Platform.
The partnership between the two organizations will allow scientists to access the libraries through the Terra Cloud Platform, which is the world's largest repository of human genomic information.
During the event, Huang also introduced the NVIDIA Omniverse Cloud, a service that allows developers to connect their applications to the company's AI models.
The company also introduced several new containers designed to help developers build and use AI models. These include Omniverse Replicator and Omniverse Farm for scaling render farms.
Omniverse is seeing wide adoption, and Huang shared several customer stories and demos:
Huang noted that the company's second-generation robotics processor, Orin, has been a homerun for robotic computers, and that the company is developing new platforms that will allow engineers to create AI models. To expand Orin's reach, he introduced the new Jetson Orin Nano, a tiny robotics computer that is up to 80x faster than its predecessor.
The Orin Nano runs the company's Isaac robotics platform and features the GPU-accelerated NVIDIA ROS 2 framework. It is complemented by Isaac Sim, a cloud-based robotics simulation platform.
For developers who are using Amazon Web Services' (AWS) robotic software platform, AWS RoboMaker, Huang noted that the company's containers for the Isaac platform are now available in the marketplace.
New Tools for Video, Image Services
According to Huang, the increasing number of video streams on the internet will be augmented by computer graphics and special effects in the future. “Avatars will do computer vision, speech AI, language understanding and computer graphics in real time and at cloud scale."
To enable new innovations in communications, real-time graphics, and AI, Huang noted that the company is developing various acceleration libraries. One of these is CV-CUDA, a GPU-accelerated library for cloud-scale computer vision. The company is also developing a sample application called Tokkio that can be used to provide customer service avatars.
Deloitte to Bring AI, Omniverse Services to Enterprises
In order to accelerate the adoption of AI and other advanced technologies in the world's enterprises, Deloitte is working with NVIDIA to bring new services built on its Omniverse and AI platforms to the market.
According to Huang, Deloitte's professionals will help organizations use the company's application frameworks to build new multi-cloud applications that can be used for various areas such as cybersecurity, retail automation, and customer service.
NVIDIA Is Just Getting Started
During his keynote speech, Huang talked about the company's various innovations and products that were introduced during the course of the event. He then went on to describe the many parts of the company's vision.
“Today, we announced new chips, new advances to our platforms, and, for the very first time, new cloud services,” Huang said as he wrapped up. “These platforms propel new breakthroughs in AI, new applications of AI, and the next wave of AI for science and industry.”
The next time you’re shopping, looking at items neatly stacked on shelves in any aisle of your favorite store (some high, some low, some with promotional signage), take note that where store employees put those products isn’t random.
There is actually a lot of research and science behind where they place your favorite snacks or any other item you pick up from the shelf. That data is turned into a planogram, which is how stores and brands market their products.
Traditional market research has had consumers take surveys about product placement and visuals while shopping in a real store. Since shopping in a VR environment closely reflects how people shop in the real world, researchers have recently been using VR to reconstruct store shelves, which can yield amazing results.
However, there is a much deeper layer of data-rich information that merchandising strategists have struggled to collect, but thanks to VR with eye-tracking technology, that data is now obtainable.
A great example of this: when it came to strategizing the best placement of Kellogg’s new Pop-Tart Bites on store shelves, the Battle Creek, MI-based company turned to VR for a market research merchandising solution. Using eye-tracking technology, Kellogg’s marketing specialists could literally look through the eyes of the shopper and observe their gazing habits as they chose items and placed them into their carts.
The program collected data such as: How many seconds did a consumer look at an item? Did the consumer look at merchandising signage? What direction did their eyes travel when scanning a shelf? Did they look at the competitor’s products?
The setup used a Qualcomm headset powered by the Snapdragon 845 Mobile VR Platform; shoppers wore the VR headset as they browsed a virtual store. Eye-tracking software from InContext Solutions, combined with eye-tracking data analytics from Cognitive3D, allowed researchers to track eye movement throughout the virtual shopping experience.
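While the actual InContext/Cognitive3D pipeline isn't public, the core aggregation step behind a "how many seconds did they look at it?" metric can be sketched in a few lines. The sample format and names below are hypothetical; a real system would first classify each gaze ray against the shelf geometry to decide which product (if any) it hit.

```python
from collections import defaultdict

def dwell_times(gaze_samples, sample_rate_hz=60):
    """Sum per-item gaze duration (in seconds) from a stream of
    (timestamp, item_id) gaze samples captured at a fixed rate.
    item_id is None when the gaze ray hits no product."""
    hits = defaultdict(int)
    for _, item in gaze_samples:
        if item is not None:
            hits[item] += 1
    # Each sample accounts for 1/sample_rate_hz seconds of gaze time.
    return {item: n / sample_rate_hz for item, n in hits.items()}
```

From here, questions like "did the shopper look at the competitor's product?" reduce to simple lookups on the resulting dictionary.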
Through the technology of VR and eye-tracking, Kellogg’s was able to get into the head and behind the eyes of shoppers to collect data that they would have missed through traditional market research.
This approach made it much easier to alter variables such as placement, assortment, or signage to test configurations and see how much those changes could impact consumer habits.
Kellogg explored multiple scenarios of where the best placement of their Pop-Tart Bites would be on shelves:
The results of Kellogg’s efforts equated to an 18% increase in total brand sales!
Traditionally, eye-level placement was considered prime real estate for products on a shelf. Yet Kellogg’s was able to prove otherwise through VR and eye-tracking technology, and the increase in sales is the proof.
“XR provides transformative value to the enterprise,” said Patrick Costello, senior director of business development at Qualcomm Technologies, Inc., “This proof of concept with Accenture and Kellogg Company demonstrates the benefits of full immersion and eye-tracking and we expect several customers to follow with similar deployments.”
For Kellogg’s and other companies, the impact of VR merchandising solutions has the potential to transform product placement by examining consumer buying behavior in a faster, more affordable way at a larger scale, with more holistic conclusions.
“By combining the power of VR with eye-tracking and analytics capabilities, it allows significant new insights to be captured while consumers shop by monitoring where and how they evaluate all products across an entire shelf or aisle,” said Camera, adding, “Ultimately, this enables product placement decisions to be made that can positively impact total brand sales, versus only single product sales.”
When it comes to using XR technology and the process of digital twinning of any type of training exercise, whether it’s a hard skill or a soft skill, organizations are seeing huge returns and big benefits.
This includes things such as the exploration of “what if” scenarios, being able to practice often and fail without real-world consequences, having an emotional and physical engagement, and that feeling of risk to help drive the experience.
Approaching training using XR solutions becomes a data-rich environment.
One important part of that data is the number of key performance indicators (KPIs) you can uncover through this approach.
KPIs help organizations (large or small) ensure that they are hitting their training and engagement goals with their workforce and even their workflow. This measurement is critical for strategic and operational improvement, delivering an incredibly detailed picture of where your organization is succeeding in training and engagement, as well as where you might be falling short.
The amazing thing about XR is that it can be used to amplify those metrics, which is a huge advantage for any organization.
Through research we have found that XR has proven itself to be an incredible training resource for a number of industries such as hospitality, medical, automotive and first responders just to name a few. Data shows that we can actually improve employee training and get better engagement results through this approach.
It should also be mentioned that XR can have a financial impact on an organization by cutting costs in the time it takes to train employees.
Some XR companies have found:
By leveraging virtual reality to measure KPIs, we can turn those metrics into a visual, physical, and emotional experience that blows traditional KPI data collection out of the water and provides something much deeper than you’d expect.
After all, the more information you can get from your KPIs, the bigger the advantages in improving everything from reducing your Time-to-Fill metrics to leadership training and overall employee engagement.
To give you an example, let’s look at how XR can take the Kirkpatrick Four-Level Training Evaluation Model and boost its data for a more robust picture of your training endeavors.
Reaction: When you train employees using XR, they are able to explore the training from multiple perspectives allowing for the exploration of multiple reactions. Employees find themselves immersed in a training environment where they are reacting through visual and audio cues, and are participating both physically and emotionally.
XR training allows you to put employees into a potentially dangerous or stressful situation without actually putting them in danger, and the employee will react to the training as if they were actually in that scenario. You are getting a true real-world reaction that includes physical and emotional feedback from the training that normally isn’t possible through traditional classroom training. This gives you a richer and more precise measurement that can be used to shape future training.
Learning: Looking at other XR case studies, individuals who learned through XR saw a 40% decrease in time spent training while retaining more information. The training might be preparing employees for Black Friday, training sales reps to be better at their jobs, or helping professional hockey players stay mentally sharp while recovering from an injury. Learning new tasks through XR can create actual muscle memory, both mentally and physically, which enhances the KPI.
KPIs are no longer limited to measuring information retained from lectures or textbooks; through XR, you can measure both information acquired in a classroom environment and the physical and emotional knowledge gained from your training.
Behavior: Evaluating employee behavior based on training they received gets a big boost from XR. Employees are able to use XR to not only see things from their own perspective – but through XR – they are able to observe their own behavior from multiple perspectives or as a different person, sex, or nationality. It allows the employee to really get a big picture of how they are reacting and behaving during a certain situation.
Learners are also able to explore options and even behave in ways they normally wouldn’t, with zero constraints or consequences. Using XR to explore KPIs, you can surface behavioral issues such as sexual discrimination or racism within your work environment and come up with a better training strategy.
Results: Adding XR gives you results that are data-rich and paints an incredibly detailed picture of how your employees react to your training. It goes beyond what PowerPoint or textbooks can offer, giving an overhead view of key performance indicators boosted with information that helps you move toward successful training, or highlights an area where you need to change your approach.
Surveys and interviews alone only give you part of your key performance indicators, but KPIs with a layer of XR can provide an incredibly deep view of training effectiveness in ways you could never have imagined.
The results are remarkably detailed and rich in data, and they will give your training goals a boost you couldn’t get through traditional KPIs. That data can reshape the future of your training goals and improve how you train employees.
For example, one company turned to XR to train new employees on how to spray chemical coatings through an XR painting application, using an all-in-one system called SimSpray that utilizes Oculus hardware paired with a custom sensor and a custom “Spray Gun” controller.
This training allowed students to learn-by-doing in a virtual space with no emissions or need for protective equipment. The training could be reviewed via video playback, and skills could be perfected by trying over and over again.
The VR experience provided important KPI metrics to allow the company to see how well new hires were learning their new jobs as well as explore where they needed additional training. It also allowed the organization to see how current employees were learning new processes and procedures.
The results changed how the organization looked at its KPIs, and it is now exploring how XR can reshape all of its training initiatives.
This is one of many cases of how XR can amplify an organization’s KPIs. The potential is there, and the data you get in return is huge. It is just one of the many benefits of XR and digital twinning.
With each evolution of technology, organizations and industries are finding new and creative ways to improve workflow and capture important data that they can use within their company landscape.
During the opening ceremonies at AWE (Augmented World Expo) 2022, Unity CEO John Riccitiello predicted that by 2030 all websites will transform from being a location for information and e-commerce, to being a lobby (or hub) for a branded metaverse experience.
During his keynote, Riccitiello said that this is not only how the metaverse economy will grow, but will become the next evolution of any business website in any industry, and that this will be a lobby for your own digital twin, with your own IP and mission statement.
Riccitiello also stressed five important things about the future of websites.
The financial institution Citi agrees with Riccitiello. According to a recent report published by the global banking company, by 2030 the metaverse will have grown to 5 billion consumers and will represent an $8 trillion to $13 trillion business opportunity.
President Volodymyr Zelensky recently turned to using AR as a way to reach out to people through a pre-recorded volumetric video. In the 3D AR video, Zelensky said "Ukraine has a chance for global digital revolution," adding, "a chance for every visionary to show their value, skills, technologies and ambitions."
The goal of the video was to not only engage with the people of Ukraine, but to also reach out to the global technology community to get them involved in helping his country rebuild their digital infrastructure.
To create the video, Evercoast CEO Ben Nunez visited Kyiv to record the official presidential address. To make the AR video easier to access, Zelensky’s message works without the need for an app; instead, you simply scan a QR code to open it.
This is a big part of a campaign called Rebuilding Ukraine.
Read more about Zelensky using AR here.
New data out of Johns Hopkins University School of Medicine shows that VR training for surgery is even more effective than previously thought.
The study, titled "Evaluation of a Slipped Capital Femoral Epiphysis VR Surgical Simulation for the Orthopedic Trainee" and published in the Journal of the American Academy of Orthopaedic Surgeons Global Research & Reviews, had 21 orthopedic trainees use Osso VR medical training to perform a slipped capital femoral epiphysis screw fixation procedure.
The results showed that trainees rated VR training higher than conventional reading and video methods.
Currently Osso VR's training platform is being used in a number of hospitals and healthcare centers. Just recently Osso secured $66 million in funding to expand their platform.
During Apple's recent WWDC event, the tech giant announced a new M2 chip, a new MacBook Air, iOS 16, and other goodies. What we didn't hear a peep about was the rumored Apple VR/AR headset.
However, during the event, Apple gave people a quick look at a new piece of software called RoomPlan. The ARKit-powered tool uses the camera and LiDAR scanner on an iPhone or iPad to easily scan a room and create a 3D floor plan.
Once the file has been created, you can easily export it into various USDZ-compatible tools such as Cinema 4D, Shapr3D, and AutoCAD, where you can then manipulate the file and even build out VR worlds, games, or a place to call your own in the metaverse.
Anyone with an Apple Developer account can download the SDK for RoomPlan here.
The very popular social VR platform RecRoom has hit a milestone: 75 million lifetime users since 2016. The company shared the news on its sixth birthday.
If you've never spent time in RecRoom, you're missing out! It's an incredible metaverse experience filled with multiple worlds and a number of fun games that include laser tag, basketball, treasure hunts and even haunted houses!
At the moment, RecRoom has 29 million active users from all around the world, who can jump in through a Quest VR headset, iOS, Android, PlayStation, Xbox, and Steam devices. One standout is mobile access: RecRoom is reporting a 640% year-over-year increase in mobile users, reflecting enthusiasm for mobile metaverse experiences and giving users an easy way to jump into a game or hang out with friends and share virtual goods.
It doesn't matter if you're just getting started with Augmented Reality or if you're an expert in the subject.
These books are a must-read for any augmented reality enthusiast.
So without further ado, let's get started with #1.
This book is most useful for those who are already working in the AR space.
It provides clear explanations and narratives about how AR will change our daily lives.
One of the key insights of this book is that the best technology is invisible; it enhances life in a way you don't notice.
When an interaction with technology is good, the user doesn't operate the technology; it integrates into their routine in a way that eliminates all thinking, just like driving a car or scrolling through social media.
This is where AR is heading; it will become one with our routine to the point where you don't even notice you're using it.
This book is for non-technical readers who are interested in Augmented Reality.
It starts with the history of the technology, then tells us about the visionaries, companies, industry experts, and products that are piloting it.
The authors also speak about the "prime directive of humans," which, explained in the simplest terms, is our goal to create and utilize tools that will increase our chances of survival and improve the quality of our lives.
Augmented Reality, Virtual Reality, and spatial computing are the latest tools at our disposal.
With the changes in the field of VR and AR coming at a break-neck pace, it's becoming increasingly difficult to keep up with all of the advancements.
This book covers a broad spectrum of topics related to augmented reality.
From the science of human perception to the evolution of technology and augmented reality applications in complex fields such as medicine.
Anyone interested in the field of AR will feel like an expert after reading this book.
Life has significantly transformed over the past 40+ years.
This applies to your life: how you communicate with others, how you travel to work, how you pay your bills, and how you work (likely, your job didn't exist 50 years ago).
The authors give their opinions about the coming impact immersive technologies will have on our lives.
They compare it to the smartphone revolution that changed society a decade ago.
This is a must-read for any AR enthusiast. The book will leave you wondering what's possible and impatiently waiting for the future.
This is a book about the future of AR, VR, and MR.
The author speaks at great length about Oculus Rift, Microsoft HoloLens, and other similar technologies.
This book will help you get the lay of the land; you will learn the terminology, the techniques, and various viewpoints on where things are headed.
Some of the excellent key takeaways that the author teaches us early in the book are:
1. The killer app is other people.
2. Tech succeeds when it makes what we're doing better, faster, and cheaper.
3. We always overestimate the short term and underestimate the long term.
This one is different from the other books on the list.
Even though it is a book about VR & AR, its main focus is on teaching you how to create marketing plans for these applications.
This book will help you grasp VR/AR as an experience: as storytelling or, even better, the emerging term "storyliving."
If you are holding back because you are concerned that VR/AR may not apply to your brand since you are not in the gaming industry, read this book. VR/AR is indiscriminate: it possesses the power to Entertain, Explain, and Train, and that applies to any industry.
Want to see what an augmented reality book would look like?
Well, look no further than Tide Pools.
This book isn't just a book to read; it's a book to experience.
You can hear the sounds of the waves as you see the tide pool creatures actually moving on the pages.
It's a truly unique experience.