
Jensen Huang talks about the future of AI, robotics, and how NVIDIA will lead the charge.

By Bobby Carlton

A lot was announced and I did my best to keep up! So let's just jump right in!

NVIDIA CEO Jensen Huang unveiled new cloud services for running AI workflows during his NVIDIA GTC keynote. He also introduced the company's new generation of GeForce RTX GPUs.

During his presentation, Huang noted that AI is driving rapid advances in computing, and that accelerated computing is becoming the engine of that innovation.

He also talked about the company's new initiatives to help companies develop new technologies and create new experiences for their customers. These include the development of AI-based solutions and the establishment of virtual laboratories where the world's leading companies can test their products.

The company's vision is to help businesses develop new technologies and create new applications that will benefit their customers. Through accelerated computing, Huang said, AI will be able to unlock the potential of the world's industries.


The New NVIDIA Ada Lovelace Architecture Will Be a Dream for Gamers and Creators

Enterprises will be able to benefit from new tools based on the Grace CPU and the Grace Hopper Superchip. Those developing the 3D internet will also get new OVX servers powered by the Ada Lovelace L40 data center GPU. Researchers and scientists gain new capabilities through the NVIDIA NeMo LLM Service and Thor, a new superchip delivering over 2,000 teraflops of performance.

Huang noted that the company's innovations are being put to work by a wide range of partners and customers. To speed up the adoption of AI, he announced that Deloitte, one of the world's leading professional services firms, is working with the company to deliver new services based on NVIDIA Omniverse and AI.

He also talked about the company's customer stories, such as the work of Charter, General Motors, and The Broad Institute. These organizations are using AI to improve their operations and deliver new services.

The NVIDIA GTC event, which started this week, has become one of the most prominent AI conferences in the world. Over 200,000 people have registered to attend the event, which features over 200 speakers from various companies.

A ‘Quantum Leap’: GeForce RTX 40 Series GPUs


NVIDIA's first major event of the week was the unveiling of the new generation of GPUs, which are based on the Ada architecture. According to Huang, the new GPUs will let creators build fully simulated worlds.

During his presentation, Huang showed the audience a demo called "Racer RTX," a fully interactive simulation rendered entirely with ray tracing.

The company also unveiled various innovations built into the Ada architecture, such as a new Streaming Multiprocessor and a new RT Core, designed to give developers the throughput needed for ray tracing and other demanding applications.

Also introduced was DLSS 3, the latest version of the company's DLSS technology, which uses AI to generate entirely new frames by analyzing previous ones. This feature can boost game performance by up to 4x, and over 35 games and applications have already announced support for DLSS 3. According to Huang, the technology is one of the most significant innovations in the gaming industry.

Huang noted that the company's new generation of GPUs, based on the Ada architecture, can deliver up to 4x the processing throughput of the previous generation's flagship, the GeForce RTX 3090 Ti. The new GeForce RTX 4090 will be available in October, and the GeForce RTX 4080 launches in November in two configurations:

  1. The 16GB version of the GeForce RTX 4080 is priced at $1,199 and features 9,728 CUDA cores and 16 GB of high-speed GDDR6X memory. In games, it is up to twice as fast as the 3090 Ti.
  2. The 12GB version of the GeForce RTX 4080 is priced at $899 and features 7,680 CUDA cores and 12 GB of high-speed GDDR6X memory. With DLSS 3, it is faster than the 3090 Ti, the fastest GPU of the previous generation.

Huang noted that NVIDIA Lightspeed Studios used the company's Omniverse technology to create a new version of Portal, one of the most popular games in history. With the help of the company's AI-assisted toolset, users can easily up-res their favorite games and give them a physically accurate depiction.

According to Huang, large language models and recommender systems are the most important AI models in use today.

He noted that recommenders are the engines that power the digital economy, responsible for personalizing much of the content and services it delivers.

The Transformer deep learning model, introduced in 2017, has led to the development of large language models that are capable of learning human language without supervision.

Image from NVIDIA

“A single pre-trained model can perform multiple tasks, like question answering, document summarization, text generation, translation and even software programming,” said Huang.
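As a loose illustration of this multi-task idea, the sketch below stands in a stub function for a real pre-trained model; the prompt prefixes and canned replies are invented for demonstration and do not come from NVIDIA:

```python
# Toy illustration of the "one pre-trained model, many tasks" pattern:
# the task is selected purely by the prompt, not by swapping models.
# `fake_llm` is a placeholder for a real LLM endpoint.

def fake_llm(prompt: str) -> str:
    """Stand-in for a pre-trained model; a real LLM would generate text."""
    if prompt.startswith("Summarize:"):
        return "A short summary."
    if prompt.startswith("Translate to French:"):
        return "Une phrase en français."
    if prompt.startswith("Answer:"):
        return "42."
    return "Generated text."

# The same "model" serves three different tasks via prompting alone.
tasks = {
    "summarization": fake_llm("Summarize: a long document ..."),
    "translation":   fake_llm("Translate to French: a sentence"),
    "qa":            fake_llm("Answer: what is six times seven?"),
}
```

The point of the pattern is that no task-specific retraining is needed; only the prompt changes between tasks.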

The company's H100 Tensor Core GPU, built to accelerate Transformer models with its next-generation Transformer Engine, is in full production, with systems shipping soon.

“Hopper is in full production and coming soon to power the world’s AI factories,” Huang said.

Several of the company's partners, such as Atos, Cisco, Fujitsu, GIGABYTE, Lenovo, and Supermicro, are currently working on implementing the H100 technology in their systems. Some of the major cloud providers, such as Amazon Web Services, Google Cloud, and Oracle, are also expected to start supporting the H100 platform next year.

According to Huang, the company's Grace Hopper, which combines its Arm-based Grace CPU with Hopper GPUs, will deliver a 7x increase in fast-memory capacity and a massive leap for recommender systems.

Weaving Together the Metaverse: L40 Data Center GPUs in Full Production

During his keynote at the company's annual event, Huang noted that the future of the internet will be further enhanced with the use of 3D. The company's Omniverse platform is used to develop and run metaverse applications.

He also explained how powerful new computers will be needed to connect and simulate the worlds that are currently being created. The company's OVX servers are designed to support the scaling of metaverse applications.

The company's 2nd-generation OVX servers will be powered by the Ada Lovelace L40 data center GPUs.

Thor for Autonomous Vehicles, Robotics, Medical Instruments and More

Today's cars are equipped with many separate computers running the cameras, sensors, and infotainment systems. In the future, these functions will be delivered by software that can improve over time. To power such systems, Huang introduced the company's new product, Drive Thor, which combines technologies from the Grace CPU and the Hopper and Ada GPU architectures.

The company's new Thor superchip, capable of delivering up to 2,000 teraflops of performance, will succeed its current automotive platform, the Drive Orin. It will also target applications such as medical instruments and industrial automation.

3.5 Million Developers, 3,000 Accelerated Applications

According to Huang, over 3.5 million developers have created over 3,000 accelerated applications using the company's software development kits and AI models. The company's ecosystem is also designed to help companies bring their innovations to the world's industries.

Over the past year, the company has released over a hundred software development kits (SDKs) and introduced 25 new ones. These new tools allow developers to create new applications that can improve the performance and capabilities of their existing systems.

New Services for AI, Virtual Worlds

Image from FS Studio

Huang also talked about how the company's large language models are the most important AI models currently being developed. They can learn to understand various languages and meanings without requiring supervision.

The company introduced the NeMo LLM Service, a cloud service that allows researchers to tune large language models for specific tasks. To help scientists accelerate their work, it also introduced BioNeMo, a related service for building AI models that can understand proteins and DNA and RNA sequences.

Huang announced that the company is working with The Broad Institute to create libraries designed to help scientists use the company's AI models. These libraries, such as BioNeMo and Parabricks, will be accessible through the institute's Terra Cloud Platform, giving scientists access to the tools alongside one of the world's largest repositories of human genomic information.

During the event, Huang also introduced NVIDIA Omniverse Cloud, a suite of services that lets developers build, publish, and run metaverse applications from the cloud.

The company also introduced several new containers designed to help developers build and use AI models, including Omniverse Replicator for generating synthetic training data and Omniverse Farm for scaling render farms.

Omniverse is seeing wide adoption, and Huang shared several customer stories and demos:

  1. Lowe's is using Omniverse to create and operate digital twins of its stores.
  2. Charter, a $50 billion telecommunications company, is using the company's AI models to create digital twins of its networks.
  3. General Motors is also working with its partners to create a digital twin of its design studio in Omniverse. This will allow engineers, designers, and marketers to collaborate on projects.

Image from Lowe's

The company also introduced a new Orin Nano for robotics that can be used to build and deploy AI models.

Huang noted that the company's second-generation robotics processor, known as Orin, has been a home run for robotic computers. He also noted that the company is working on new platforms that will allow engineers to create artificial intelligence models.

To expand the reach of Orin, Huang introduced the new Jetson Orin Nano, a tiny robotics computer that is 80x faster than its predecessor.

The Jetson Orin Nano runs the company's Isaac robotics platform and features Isaac ROS, NVIDIA's GPU-accelerated ROS 2 framework. It is complemented by Isaac Sim, a cloud-based robotics simulation platform.

For developers who are using Amazon Web Services' (AWS) robotic software platform, AWS RoboMaker, Huang noted that the company's containers for the Isaac platform are now available in the marketplace.
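The publish/subscribe model that ROS 2 (and the Isaac ROS packages mentioned above) is built on can be sketched in plain Python. No ROS installation is assumed here, and the topic names and message shape are made up for illustration:

```python
# Tiny in-process stand-in for a ROS-style topic graph, illustrating
# the publish/subscribe pattern at the heart of ROS 2.

from collections import defaultdict
from typing import Callable

class TopicBus:
    """Routes published messages to every subscriber of a topic."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]):
        self._subs[topic].append(callback)

    def publish(self, topic: str, msg: dict):
        for cb in self._subs[topic]:
            cb(msg)

bus = TopicBus()
received = []
# A "perception" node subscribes to camera frames...
bus.subscribe("/camera/frames", lambda msg: received.append(msg["frame_id"]))
# ...and a "driver" node publishes them.
bus.publish("/camera/frames", {"frame_id": 1})
bus.publish("/camera/frames", {"frame_id": 2})
# received now holds [1, 2]
```

In real ROS 2, the bus spans processes and machines, and GPU-accelerated packages like Isaac ROS plug into the same node/topic model.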

New Tools for Video, Image Services

According to Huang, the increasing number of video streams on the internet will be augmented by computer graphics and special effects in the future. “Avatars will do computer vision, speech AI, language understanding and computer graphics in real time and at cloud scale.”

To enable new innovations in communications, real-time graphics, and AI, Huang noted that the company is developing various acceleration libraries, such as CV-CUDA for GPU-accelerated computer vision at cloud scale. The company is also developing Tokkio, a sample application for customer service avatars.

Deloitte to Bring AI, Omniverse Services to Enterprises

In order to accelerate the adoption of AI and other advanced technologies in the world's enterprises, Deloitte is working with NVIDIA to bring to market new services built on NVIDIA's Omniverse and AI platforms.

According to Huang, Deloitte's professionals will help organizations use the company's application frameworks to build new multi-cloud applications that can be used for various areas such as cybersecurity, retail automation, and customer service.

NVIDIA Is Just Getting Started

Wrapping up his keynote, Huang recapped the innovations and products introduced during the event and described how they fit into the many parts of the company's vision.

“Today, we announced new chips, new advances to our platforms, and, for the very first time, new cloud services,” Huang said as he wrapped up. “These platforms propel new breakthroughs in AI, new applications of AI, and the next wave of AI for science and industry.”

By Bobby Carlton

The Internet of Things (IoT) is a system of devices and objects that can be connected to each other and communicate with other systems and devices without human intervention. These objects or devices usually have sensors, cameras, and RFID tags, and they can communicate with one another through a communication interface. Together, these systems can perform various tasks and deliver services to the user.

The truth is that IoT is the foundation and backbone of digital twinning.

As we become more digitally connected in almost all aspects of our lives, IoT becomes a vital component of the consumer economy by enabling the creation of new and innovative products and services. The rapid emergence and evolution of this technology has led to the creation of numerous opportunities but also some challenges.

Due to technological convergence across different industries, the scope of IoT is becoming more diverse. It can be used in fields such as healthcare, home security, and automation, through devices like Roombas and smart speakers. Underpinning all of this are numerous embedded systems, such as sensors and wireless communication modules, that automate your home or business.

With the rapid increase in the number of connected devices and the development of new technologies such as AR, VR, and XR, the adoption of these products and services is expected to increase.

According to Statista, the global market for IoT is currently valued at around 389 billion US dollars. This is expected to exceed one trillion dollars by 2030, reflecting the growing number of connected devices and the technological advancements driven by the growth of digital twinning. It is also expected to boost the consumer economy by increasing demand for various products and services.

In 2020, the consumer market contributed around 35% of the IoT market's value. However, it is expected that this will increase to 45% by 2030. This is because the market is expected to expand with the emergence of new markets such as the automotive, security, and smartphone sectors.
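A quick back-of-the-envelope check on the figures above (treating the current ~$389 billion figure as the base for the 35% consumer share, which is an approximation for illustration):

```python
# Rough arithmetic behind the Statista figures cited above.
# All values in billions of USD.

total_now = 389          # current global IoT market value
total_2030 = 1_000       # projected ~$1 trillion by 2030
consumer_share_now = 0.35   # consumer segment's share (2020)
consumer_share_2030 = 0.45  # projected consumer share by 2030

consumer_now = total_now * consumer_share_now      # roughly $136B
consumer_2030 = total_2030 * consumer_share_2030   # $450B
growth_multiple = consumer_2030 / consumer_now     # over 3x growth
```

On these assumptions, the consumer segment alone would more than triple in value over the decade.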

At its core, the Internet of Things is a device layer that brings connectivity to devices that were previously not connected to the internet. It can also act as a connective link between different devices, such as tablets and smartphones.

These devices can connect using various types of wireless networking solutions and physical means, and they can also communicate with one another and the cloud. Through the use of sensors, these systems can provide users with a variety of services and features. They can be controlled and customized through a user interface, which is typically accessible through a website and app.

A typical smart bulb IoT system consists of various components such as a wireless communication interface, LED light-generating devices, and a control system. These components work together seamlessly with the user being able to access their devices through a mobile app or website. A great example of this is a Google Nest system to monitor your front door and your home thermostat, which can be purchased at almost any hardware or lifestyle store.

Image from Target
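A minimal sketch of how the components of such a smart-bulb system fit together; the command format and class below are entirely invented for illustration, not taken from any real product's API:

```python
# Toy model of the smart-bulb IoT stack described above: a control
# system reacting to commands that arrive over a (simulated)
# communication interface, as they might from a mobile app via the cloud.

class SmartBulb:
    def __init__(self):
        self.on = False
        self.brightness = 0  # percent, 0-100

    def handle_command(self, command: dict):
        """What the bulb's control system does with an incoming message."""
        if command.get("action") == "power":
            self.on = bool(command["value"])
            self.brightness = 100 if self.on else 0
        elif command.get("action") == "dim" and self.on:
            # Clamp requested brightness to the valid range.
            self.brightness = max(0, min(100, command["value"]))

bulb = SmartBulb()
bulb.handle_command({"action": "power", "value": True})
bulb.handle_command({"action": "dim", "value": 40})
# bulb is now on at 40% brightness
```

A real device would receive these commands over Wi-Fi or a mesh protocol like Zigbee, but the control-system logic follows the same shape.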

Aside from these, other IoT systems such as smart televisions, smart refrigerators, and smart speakers are also becoming more popular among consumers. These kinds of devices can be combined with a home's existing smart home technology to provide users with a variety of services and features designed to streamline and automate your home experiences. 

Of course privacy and data are two things consumers and businesses need to consider when bringing these devices into their environments. How much are you giving up in order to streamline or automate your home or business? We are already in the habit of giving up some of our privacy through smartphone use and other wearables.

One of the most common uses of IoT technology in the consumer economy is to improve customer service. Enterprises use it to improve the efficiency of their distribution channels by implementing a variety of systems, such as inventory management and product tracking. In addition, construction sites and cars are also using IoT to monitor their environments to reduce downtime and improve their overall performance.

Other industries that use IoT primarily include government facilities, transportation systems, and healthcare systems. Through the use of IoT, these organizations can improve the efficiency of their operations and increase the effectiveness of their systems. The technology can help the consumer economy by enhancing the service provided by their organizations.

Connectivity and data technology have also improved, with devices now capable of handling and storing large amounts of data. The ability to process and analyze data is becoming more sophisticated. Various factors, such as the evolution of cloud technologies and the increasing capacity of storage systems, have made it easier for devices to store and process data.

The number of companies and organizations investing in the development of IoT devices is expected to keep growing, helping them gain a competitive advantage and develop new solutions that will significantly impact the consumer economy.

Guest Post by Joshua Kennedy

When we think of the term "metaverse", the mind often drifts to images of The Matrix, modern-day gaming experiences, or the movie "Ready Player One", which was a fairly good watch all things considered. Typically, concepts such as virtual reality (VR), augmented reality (AR), and the metaverse are associated with informal gaming circles or the immersive experiences you get at a science and technology fair.

Image from Warner Brothers Pictures

These days, the metaverse and its accompanying technology are permeating more formal sectors, like businesses and educational institutions. A great example is how businesses are using the metaverse to create virtual rooms to hold conferences and interviews in. They are literally creating a digital copy of their workplace.

If you look at the evolution of this form of long-distance communication, we started working in offices pre-pandemic. Then came the lockdown, and we all shifted to Zoom meetings during those pressing times. So, even though the peak of the pandemic is tentatively behind us, the need for long-distance communication solutions in the workplace remains constant.

This is mostly due to the fact that we seem to have permanently adopted remote and hybrid work models, which have proved to be quite beneficial. This in turn gave rise to another trend that rose alongside the metaverse and that is automation. A good example of this is Credibled, which is an automated reference checking platform that helps streamline the back-and-forth process between employers, employees and referees. 

With that in mind, you could consider the further permeation of the metaverse as the next logical step in meeting those needs. Even with this need, there are certain gaps that we will address in this blog and speculate where it might lead us later down the road. 

There Is a Gap in Metaverse Adoption 

Most of us have heard of the metaverse but have never experienced it for ourselves. For the most part, we are only seeing VR and AR tech being used in business arenas and educational settings. But why is that? Why is it that, unlike Zoom meetings and phone calls, metaverse tech isn’t more commonly used by everyday people?

One contributing factor is that the technology is still in its infancy. The level of immersiveness we have achieved so far has been great, but there is still room for improvement. And to be fair, we are far from Matrix levels of immersion.

Another reason for the gap preventing the everyday normalization of the metaverse is that there are a lot of misconceptions surrounding it. For the purposes of this article, we will focus on five of the biggest ones.

Misconceptions When It Comes to the Metaverse  

  1. The Metaverse is for Gaming - This seems to be one of the biggest misconceptions about the nature of the technology. Yes, gaming and VR/AR tech are like bread and butter. They do go hand in hand. But the same is true for PC games, PlayStation, Xbox, and so on. Metaverse tech has a wide range of applications aside from just catering to the gaming world.
  2. The Metaverse is VR - Calling the metaverse a virtual reality is like saying your phone is the Internet. Your phone is simply a tool to interface with the Internet. The same applies to the metaverse, which you experience through tools like VR, AR, and XR. Why, you can even experience it on your laptop.
  3. It’s the Gateway to a Dystopian Future - Despite what movie tropes would have you believe, the metaverse does not mean we are going to get pulled into the virtual and leave the real world a wasteland. The reality (no pun intended) is far less bleak. The metaverse is simply an addition that will open up new venues in the virtual space for humans to socialize, work, create, explore, and so on.
  4. It Is a Passing Fad - To say the metaverse is a fad is like saying the advent of phones or the internet was a fad. To be fair, we are a few years away from a fully realized metaverse. Technology still needs to grow and evolve for that. Having said that, we are living in what you might call "a primitive version" of the metaverse. At the end of the day, our needs as humans to socialize, connect, and learn won’t change. That is just as true in the realm of business. What will change is the ways in which we achieve our goals.
  5. The Metaverse Will Be Monopolized - While companies like Microsoft are doing great things with XR tech and the metaverse, that doesn’t really mean that they will have a monopoly on it. Yes, they are able to scale fast and latch onto new trends, but that doesn’t guarantee a monopoly. The metaverse and its technology are part of the Web3 era. One of the core tenets of this is the decentralization of the internet through blockchain technology. This means that, by its very nature, the metaverse cannot be controlled by one entity.

How Metaverse Tech is Meeting the Future of Work 

Image from FS Studio

Decentralization: As mentioned before, decentralization is one of the biggest ways that the metaverse will meet the future of work. Rather than looking at it as an entity that no one has control over, we can see it as a truly democratic ecosystem. It will be a landscape that has diversity and equality as its foundation. This will essentially translate to digital sovereignty for all those involved, and in terms of the inclusive workspaces that companies are working towards, this aligns quite well. 

Spatial Computing: The ability of the metaverse to replicate real-world spaces in 3D models is something that will play a huge role in the seamless transition. The intricate modeling frameworks and 3D visualizations will allow businesses to more easily adopt and operate within this space. A good example of this is how some companies are already conducting virtual interviews and conferences in the metaverse.  

Human Interface: With the growing demand for the metaverse in the workplace, so too, grows the need to interact with it. This pushes the development of tools like VR headsets, AR glasses, haptics and the like. This brings us back to the previous point of a seamless transition and ease of operation for those who take this path. What this also means is that we will have better, more immersive ways to communicate with one another in the digital realm. 

Creator Economy: Since 2014, we have seen the rise of a creator economy in the virtual space through NFTs (Non-Fungible Tokens). This has become intertwined with the cryptomarkets and blockchain technology. And with Web3 and the metaverse of the future being all about the blockchain, we might see a new form of business integration with the creator economy. 

Universal Experience: One of the biggest benefits of the metaverse is the universality that it brings to the table. In the future, the metaverse will enable people to communicate without having to learn a new language just so they can work together. Voices can be changed, languages can be translated and workplaces in the digital space can become more inclusive, diverse, and globally spread out.  

Where Is the Metaverse Heading? 

According to a Pew Research Report, 54% of experts believe that by 2040, the metaverse will be more refined and immersive. They also expect it to become a fully integrated and functional aspect of daily life for around half a billion people or more, worldwide. The other 46% think that this won’t be the case. 

Image from Pew Research

As of now, metaverse tech isn’t there yet; it is still in its infancy. So, how do we bridge the gap and get it out there more? Well, everything points to one common answer: time. With time, technology will develop, and so too will the ability of the average person to access and interact with metaverse technology.

One thing the experts agree on is that augmented reality and mixed reality applications will be at the frontier of these advances. They will appeal to people because they are additive to real-world experiences.

Why Experts Think It Will Take Off vs. Why It Won’t 

The portion of experts who think it will take off cited several reasons for it. For one, technological advancements drive profits through investments and vice versa. They also mentioned that it could see much more use in not just business sectors but also areas like fashion, art, sports, health, entertainment, and so on. 

On the other side of the debate, we have those who say it won’t take off to this degree. They cite reasons like the lack of usefulness in daily life for the average person. They also shared concerns about issues such as privacy, surveillance capitalism, cyberbullying, and so on. It was also speculated that the technology needed to reach more people wouldn’t be ready by 2040.

Summing Up 

No matter how you look at it, no one can say for certain how things will go. There may be legitimate concerns surrounding the emergence of the metaverse, but at the same time, there are plenty of benefits. At the end of the day, it is no substitute for meeting someone in person, but it does serve as a close second. Just like Zoom calls were the next stage following phone calls, meeting people in the metaverse and automation are the next steps in the evolutionary ladder of communication technology. 

It all just becomes a matter of how well we balance it with the real world and the uses we put it to. When all is said and done, the metaverse is a space, but more so, it is a tool. It is a tool that has unexplored potential for all sectors and industries. 

By Bobby Carlton

Warehouse automation systems may seem like they’re a dime a dozen. However, each approach is different: some rely on humans to manage them, many others on robotics and automation, and of course we’ve also seen a blended approach with automation, robotics, and humans working together.

One solution is using AI to help drive automation along with other technologies such as robotics and XR. Data shows that we can improve work environments through automation, but getting everyone around the world to adopt the approach isn’t that easy.

However, a new global initiative to create global efficiencies is a hot conversation at the moment. AI and automation are about to drastically change the way businesses (large and small) and even governments operate, through a push that will include cutting-edge technology such as natural language processing, machine learning, and autonomous systems delivered through robotics and XR solutions.

The objective of the Artificial Intelligence Act will be to create a safer and more efficient work process that can help organizations explore “what if” scenarios and be more predictive, explore recommendations and different paths to success, and even help company leaders make important company-wide decisions.

One thing to keep in mind is that the regulatory approach varies across different parts of the world, from China to the European Union to the U.S., and as businesses invest their resources into AI and automation, they will have to ensure they comply with all of the regulations in place.

For example, the Chinese government is being a bit more forward thinking by moving AI regulations beyond the proposal stage: it has already passed a regulation that mandates companies must notify users when an AI algorithm (or avatar) is involved. This means that any business in China must build AI and automation compliance into its operations, which will impact both customers and the workforce.

The European Union's proposed regulation, meanwhile, has a much broader scope than China's approach. The EU's focus is on the risks created by AI, sorted into four categories: minimal risk, limited risk, high risk, and unacceptable risk. Pairing AI with automation applications would help companies comply through human oversight and ongoing monitoring of facilities using robotics and XR solutions.

Those companies will be required by law to register stand-alone high-risk AI systems such as remote biometric identification systems. 

Once passed, the EU would implement this process by Q2 of 2024, and companies could see hefty fines for noncompliance, ranging from 2% to 6% of the company’s annual revenue.
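To put that fine range in concrete terms, here is a quick sketch; the €500 million revenue figure is a made-up example company, not drawn from the regulation:

```python
# Illustrating the proposed EU fine range cited above: 2% to 6% of a
# company's annual revenue. The revenue figure is hypothetical.

annual_revenue = 500_000_000  # EUR, example company

fine_low = annual_revenue * 0.02   # lower bound of the range
fine_high = annual_revenue * 0.06  # upper bound of the range

print(f"Potential fine: EUR {fine_low:,.0f} to EUR {fine_high:,.0f}")
```

Even at the low end, the exposure runs to tens of millions of euros for a mid-sized multinational, which is why compliance planning matters.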

Here in the United States, the approach is more fragmented, with each state creating its own AI and automation laws, which, as you would guess, could end up being confusing for anyone, especially companies with warehouses or offices in multiple states. To help create a more unified approach, the Department of Commerce announced the appointment of 27 experts to the National Artificial Intelligence Advisory Committee (NAIAC). The committee will advise the President and the National AI Initiative Office on a range of important issues related to AI and other technologies such as robotics and XR and their use in automation across all states, helping to tighten up the AI and automation goals in the U.S.

They would also provide recommendations on topics such as the current state of the United States' AI competitiveness, the state of science around AI technology, and any AI workforce issues. The committee will also be responsible for providing advice regarding the management and coordination of the initiative itself, including its balance of activities and funding.

What all of this means is that governments want their businesses to embrace and adopt new technology as part of their workforce solutions. They are very aware of the benefits with AI, XR, robotics, automation in the workforce, and how those benefits have a global impact on business, consumerism and the overall economy of a country.

At the heart of all of this is manufacturing and warehouses.

Manufacturing companies could use AI, warehouse automation, and XR to access latency-sensitive information such as anomaly detection and real-time quality monitoring, and then mount an ultra-fast response. This would allow manufacturers to take action immediately to prevent undesirable consequences, streamline productivity, increase workforce safety, and automate warehouse processes, so companies can maintain their equipment in a timely manner and prevent any kind of shutdown or dangerous environment.

AI and automation would provide real-time prediction capabilities that let companies deploy predictive models on edge devices such as machines, local gateways, or servers in the factory, and they play a role in accelerating Industry 4.0 adoption.
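To make the edge-side prediction idea concrete, here is a minimal, hypothetical sketch: a rolling z-score anomaly detector small enough to run on a factory gateway, so the response to an abnormal reading does not wait on a cloud round trip. The sensor values, window size, and thresholds are all invented for illustration, not drawn from any real deployment.

```python
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Rolling z-score detector meant to run on an edge gateway, so an
    anomaly can trigger a local response before any cloud round trip."""

    def __init__(self, window=50, z_threshold=3.0):
        self.readings = deque(maxlen=window)  # recent sensor history
        self.z_threshold = z_threshold

    def check(self, value):
        """Return True if `value` is anomalous relative to the recent window."""
        is_anomaly = False
        if len(self.readings) >= 10:  # wait for a baseline first
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                is_anomaly = True
        self.readings.append(value)
        return is_anomaly

detector = EdgeAnomalyDetector()
for temp in [70.1, 70.3, 69.9] * 10 + [95.0]:  # simulated sensor feed
    if detector.check(temp):
        print(f"anomaly at {temp} C -- stop the line locally")
```

The detector keeps only a small fixed-size window in memory, which is what makes it cheap enough for a gateway; the cloud can still receive periodic summaries for fleet-wide analysis.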


Advancing spatial computing and building the Enterprise Metaverse requires large-scale collaboration across the industry. Having the proper tools at your disposal is important when it comes to using AR/VR as an Enterprise solution.

Lenovo's ThinkReality team is actively working with a growing ecosystem of enterprise AR app developers to offer ready-to-deploy solutions for companies adopting XR technology.

The Lenovo ThinkReality platform is designed to provide a scalable and streamlined path from proof of concept to productivity for enterprise AR/VR applications, letting companies focus on problem-solving by working across diverse hardware and software. Beyond streamlining productivity, the approach lets you build, deploy, and manage enterprise applications and content on a global scale.

To help bolster the adoption of XR for enterprise solutions and to show its commitment to supporting ThinkReality, the company recently launched the following collaborations with AR app developers.

Lenovo ThinkReality announced that it will be working with TechViz, a leader in 3D visualization software, to offer a solution for visualizing data in augmented reality (AR) from CAD files used in design, engineering, and architecture. The specially developed version of TechViz software, combined with the ThinkReality A3 PC Edition, allows users to switch seamlessly from their CAD desktop application to a 1:1-scale 3D representation of their model in AR.

Image from Lenovo

While wearing AR smart glasses, engineers can view both their PC screen and the virtual model on display in their real-world workspace at 1080p resolution, make changes in the CAD environment, and check them in 3D. The ThinkReality A3 with TechViz software can display content directly from the most commonly used CAD software without data conversion. Before this solution, engineers and designers needed separate workflows to work on the model and then visualize the result in a headset.

In addition, CareAR, an augmented reality (AR) service experience management company, and Lenovo announced a collaboration to deliver an improved and smarter service experience for ServiceNow-empowered field technicians and end users. As part of this cooperation, Lenovo will integrate CareAR's service experience management platform into Lenovo's ThinkReality A3 smart glasses to deliver immersive, AR-powered visual interaction, instruction, and insight.

Image from CareAR

Through the combined solution, a ServiceNow enabled field technician wearing Lenovo smart glasses can connect with an outside expert who, through CareAR technology, is able to see exactly what the technician is seeing and provide easy, step-by-step instructions that the field technician can follow along from within the smart glasses’ field of view.

Along with these partnerships, Micron turned to Lenovo's ThinkReality technology to oversee a fleet of devices and help scale its business into the future. Launching ThinkReality took only a few months and helped Micron reestablish a more efficient workflow after the disruption caused by the COVID-19 pandemic.

Lenovo sees its ThinkReality platform and XR technology playing a critical role in building the foundation for running a business in today's digitally connected world and through the multiple layers of the metaverse.

Lenovo ThinkReality has also recently partnered with Qualcomm’s newly launched Snapdragon Spaces program to support the development of AR applications and help grow the enterprise AR market. 

Smart devices surround us these days. From smartphones to smartwatches, self-driving cars to smart devices in the agriculture industry, the world is booming with smart devices everywhere. In addition, the evolution of Internet of Things (IoT) technology is driving a rapid rise in the number of connected smart devices with edge-to-cloud intelligence.

In the early years of computing, during the 1960s, the emphasis was on improving raw computing power. Later, during the 1980s, we saw the rise of the personal computer and the beginning of distributed computing. Finally, at the turn of the millennium, the focus shifted to centralized data processing with the help of cloud computing. As a result, we saw the boom in cloud providers like Amazon, Microsoft, Google, and IBM.

Right now, we are in the cloud computing era. Of course, we still have personal computers like laptops and desktops at home, plus tablets, smartphones, and wearables with us all the time. However, we use those devices to access centralized services like Gmail, Office 365, and Dropbox. Moreover, cloud-connected devices like Amazon Echo and Google Home are also growing exponentially. Cloud computing has therefore become a revolution in information technology, not just for the mass consumer market but also for industrial applications.

Read more: The Value of IoT at the Edge

In traditional computing and cloud computing, data processing takes place far away from the data source. However, because of the sheer number of smart devices connected to the cloud, capturing, storing, and processing all that data centrally is becoming increasingly inefficient. According to Gartner, over 5.8 billion smart devices were expected to be connected for IoT in 2020, and the number will only rise in the coming years. This exponential growth of the Internet of Things is pushing computing back to the 'edge' of local networks, close to where data generation and collection happen.

So what does edge computing mean?

Edge computing means distributing computational operations at or near the data source instead of depending on the cloud at a distant data center to process data. It doesn't mean cloud computing is irrelevant; rather, the cloud is coming to the edge, closer to you.

Why is edge computing necessary? First, let us look at some of the problems that traditional cloud computing brings.


Latency

Latency is one of the fundamental and unavoidable issues that come with cloud computing. It is unavoidable in part because of the speed of light: if a computer needs to communicate with another computer at the other corner of the globe, the former perceives latency, because no data transfer can travel faster than light, on top of delays from signal strength, traffic, and distance.

For example, part of the brief moment it takes a web page to load after you click a link comes from this physical limit.

Voice assistant services like Google Assistant, Siri, and Amazon Alexa need to capture your voice and send a compressed digital representation of it to the cloud.

The cloud then has to decompress that digital representation and process it to find the proper response. Finally, the cloud sends an appropriate response back to your assistant, which you use to decide, say, whether you need an umbrella before going out. The total completion time for all of these steps grows dramatically because of the data transfer latency between these devices and systems.
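The round trip described above can be sketched as a budget of stage latencies. All of the numbers below are illustrative assumptions, not measurements of any real assistant; the point is simply that the network legs dominate when processing lives in the cloud.

```python
# Hypothetical stage timings (in milliseconds) for one assistant request.
pipeline = [
    ("capture + compress audio on device", 30),
    ("upload to cloud",                    80),
    ("decompress + speech-to-text",       120),
    ("find a response",                    90),
    ("send response back to device",       80),
]

total = sum(ms for _, ms in pipeline)
for stage, ms in pipeline:
    print(f"{stage:40s} {ms:4d} ms")
print(f"{'total round trip':40s} {total:4d} ms")

# Handling the request fully at the edge would remove the two network legs:
edge_total = total - 80 - 80
print(f"edge-assisted total would be about {edge_total} ms")
```

With these made-up figures, roughly 40% of the round trip is pure data transfer, which is exactly the share that edge processing targets.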

Privacy and Security

Many point to the security and privacy features of the iPhone as an example of edge computing. Apple stores biometric information like Touch ID and Face ID data on the iPhone itself, which allows Apple to offload a lot of security concerns from the centralized cloud to users' devices.

The management aspect of edge computing is crucial for security. Poorly managed Internet of Things devices can create many security problems, as the Mirai malware proved in 2016.


Bandwidth Savings

Apart from privacy and security, bandwidth savings are another way edge computing helps solve the problems created by the extremely high number of devices connected to the IoT. For example, if you have only one security camera, you will have no problem streaming all of its footage to the cloud. But if you have a dozen security cameras, uploading all of the footage from all of them creates a bandwidth problem.

However, if the cameras are smart enough to know which footage is essential and which is not, only the important footage can be streamed to the cloud while the rest is discarded, significantly decreasing bandwidth usage. This is why running AI on the consumer's device, instead of doing all the work in the cloud, is a massive focus for tech giants like Apple and Google at the moment. Google's Live Caption, on-device transcription in the Recorder app, and the "Now Playing" feature in recent versions of Android are excellent examples of edge computing targeted at reducing bandwidth.
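Here is a minimal sketch of that camera-side filtering, using a toy frame-difference gate in place of real motion or object detection. The frames, threshold, and the `filter_for_upload` helper are all hypothetical; a real smart camera would run an on-device model, but the bandwidth logic is the same: only frames that clear the gate leave the device.

```python
# Frames are toy 1-D brightness arrays; a real camera would use a
# proper motion or object detector instead of raw pixel differences.

def frame_delta(prev, curr):
    """Mean absolute brightness change between two frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def filter_for_upload(frames, threshold=10.0):
    """Keep only frames that differ enough from the previous frame."""
    uploads = []
    prev = frames[0]
    for frame in frames[1:]:
        if frame_delta(prev, frame) > threshold:
            uploads.append(frame)  # "important" footage only
        prev = frame
    return uploads

static = [50] * 8                                  # empty hallway
person_walks_by = [50, 50, 200, 200, 50, 50, 50, 50]
frames = [static, static, person_walks_by, static, static]
kept = filter_for_upload(frames)
print(f"uploading {len(kept)} of {len(frames)} frames")
```

In this toy run, only the frames around the change are uploaded; the static footage never consumes uplink bandwidth at all.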

Google is also working on Progressive Web Apps with offline-first functionality. This means you can open a "website" on your phone or PC without connecting to the internet, do some work, save it on your device itself, and sync it with the cloud only once your internet connection is back.

Read more: How Will AI Transform IoT Architecture?

edge-to-cloud intelligence

What is edge intelligence?

There is a subtle difference between edge computing and edge intelligence. Edge computing can be defined as a process of collecting data and performing analysis on it, all taking place close to the edge device. This processed data is then sent to the cloud for further analysis.

Edge intelligence is a step beyond edge computing: with edge intelligence, you not only analyze data at the edge but also act on that analysis there, using Artificial Intelligence. This contrasts with cloud computing and cloud intelligence, where we send all the data over the network to a centralized data store and perform the analysis and decisions there.
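A small sketch of the distinction, with a hypothetical `tiny_model` standing in for a trained ML model: inference and the resulting action both happen locally, and only a compact summary goes to the cloud. All names and thresholds here are invented for illustration.

```python
def tiny_model(vibration_mm_s):
    """Stand-in for an ML model scoring bearing health from vibration."""
    return "failing" if vibration_mm_s > 7.0 else "healthy"

def edge_intelligence_step(reading, cloud_log):
    """Infer AND act at the edge; ship only a summary record upstream."""
    verdict = tiny_model(reading)          # inference at the edge
    if verdict == "failing":
        action = "slow motor and schedule maintenance"  # local action, no round trip
    else:
        action = "none"
    cloud_log.append({"reading": reading, "verdict": verdict})  # summary only
    return action

cloud_log = []
for reading in [2.1, 3.4, 9.2]:            # simulated vibration readings, mm/s
    action = edge_intelligence_step(reading, cloud_log)
    if action != "none":
        print(f"vibration {reading} mm/s -> {action}")
```

Edge computing alone would stop at forwarding the readings; the `action` branch is what makes this edge intelligence.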

Why implement edge intelligence?

Apart from eliminating the problems in cloud computing, the implementation of edge computing and edge intelligence has the following benefits.

● It takes a long time and is very costly to transfer the vast data generated by IoT devices across large geographic areas. Edge intelligence allows for the analysis, distribution, and computing of enormous volumes of data at the edge, rather than shipping it to a central processing location. It lets businesses manage and analyze data anywhere, with fast response times to queries.

● The applications of edge intelligence in telecommunications include subscriber analytics to optimize customer lifetime values, increase network monetization, deliver a seamless customer experience, customize product bundles, prevent churn, and manage capital expenditures more wisely.

● By developing personalized, data-driven experiences, SaaS services can result in greater adoption of applications and higher levels of engagement and customer satisfaction.

● With the Internet of Things, manufacturers can automate, monitor in real time, and gain insights for better predictive maintenance and uptime - resulting in improved efficiency and profits.

● Government agencies can enhance their operations, use location-based data for criminal investigations, and allocate resources more intelligently.

● The edge intelligence and the cloud form an ecosystem that brings together all components of the infrastructure. Edge computing enables a consistent programming experience across multiple devices and systems. With the help of a database management system (DBMS), you can replicate data from the edge to the cloud. Raima Database Manager supports almost any operating system (OS) and can even run in a barebones configuration.

● In edge computing, devices and systems are integrated and synchronized faster and more effectively by replicating databases. In addition, by using edge computing, service interruptions during the transfer of app functionality are eliminated - data is replicated between edge network databases.

● By using edge computing, data transfers to cloud data centers are reduced. The cloud can be used for other tasks by delegating some of the work of processing data. In addition to improving system efficiency, this method reduces the cost of data transfer between devices and the cloud.


Edge Intelligence—The Future of AI

Edge intelligence addresses two problems with one solution. On one hand, the strain on cloud data centers from handling ever-increasing amounts of data is approaching a breaking point. On the other hand, Artificial Intelligence systems consume information faster than a distant, centralized infrastructure can feed it to them. Edge databases enable applications to bring Machine Learning (ML) models to the edge, and with the power of real-time databases and Artificial Intelligence (AI), edge intelligence can provide real-time insights that improve many industries. For example, last-mile delivery becomes faster and more efficient with features such as smart tracking and real-time route navigation. Smart retail systems can offer customers customized insights as they browse the store, while fraudulent activity can be detected with facial recognition software. In addition, healthcare professionals can make better health predictions and help patients become more aware of their health risks. Traditional cloud computing can't begin to compare with the benefits that edge-to-cloud intelligence can provide.