FS Studio Logo

Let's Talk Simulation

By Caio Viturino and Bobby Carlton

As companies and industries uncover the potential of the metaverse and digital twinning, and leverage it to streamline their workforce, improve employee training, embrace warehouse automation, and much more, they will need a process that allows them to quickly and easily create 3D content. This is especially important since the creation of virtual worlds and complex content will only become more prevalent for businesses moving forward.

One way of speeding up this process is through something called the Neural Radiance Field (NeRF), which can help us create and launch 3D digital solutions for a wide variety of enterprise use cases. However, there are some questions about the technology.

What is NeRF? 

NeRFs are neural representations of the geometry of complex 3D scenes. Unlike other methods, such as point clouds and voxel models, they are trained on dense photographic images. They can then produce photo-realistic renderings that can be used in various ways for digital transformation.

This method combines a sparse set of input views with an underlying continuous scene function to generate novel views of complex scenes, and the input can come from a static set of images or something like a Blender model.

In a Medium post by Varun Bhaseen, he describes a NeRF as a continuous 5D function that outputs the radiance emitted in each direction (θ, Φ) at each point (x, y, z) in space, along with a density that acts like a differential opacity, determining how much energy is accumulated by a ray passing through (x, y, z).

Bhaseen explains it further with the visual below, showing the steps involved in optimizing a continuous 5D model for a scene. It takes into account the various factors that affect the view-dependent color and volume density of the scene. In this example, 100 images were taken as input.

NeRF Drums
Image from Medium/Varun Bhaseen

This optimization is performed on a deep multi-layer perceptron, without using any convolutional layers. To minimize the error between the views rendered from the representation and the observed images, gradient descent is used.
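The rendering rule at the heart of this optimization can be sketched in a few lines. Below is a minimal, illustrative version of NeRF-style volume rendering in Python with NumPy: given sampled colors and densities along a single ray, it alpha-composites them into one pixel color. The sample values are made up for illustration.

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Alpha-composite samples along one ray (the NeRF volume rendering rule).

    colors:    (N, 3) RGB at each sample point
    densities: (N,)   volume density sigma at each sample
    deltas:    (N,)   distance between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)       # opacity of each segment
    trans = np.cumprod(1.0 - alphas + 1e-10)         # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])      # light reaching each sample
    weights = trans * alphas                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)   # final pixel color

# A ray passing through empty space and then a dense red region:
colors = np.array([[1.0, 0.0, 0.0]] * 4)
densities = np.array([0.0, 0.0, 50.0, 50.0])         # opaque past the midpoint
deltas = np.full(4, 0.25)
pixel = composite_ray(colors, densities, deltas)     # close to pure red
```

Training then amounts to adjusting the MLP that produces `colors` and `densities` so that composited pixels match the photographs.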

Can We Reconstruct the Environment Using Some Equipment?

We can! In addition to being able to model the environment in 6 minutes, the equipment from Mosaic can also generate high-quality 3D models.

Unfortunately, this method is very expensive and requires a lot of training to achieve a high-quality mesh. AI-based methods, on the other hand, seem to do this using a cellphone camera. Another option that could be very useful is NeRFs.

Who First Developed the Well-Known NeRF? 

The first NeRF paper was published in 2020 by Ben Mildenhall and his colleagues. The method achieved state-of-the-art results at the time when synthesizing novel views of complex scenes from multiple RGB images. The main drawback then was the training time, which was almost 2 days per scene, sometimes more, considering Mildenhall was using an NVIDIA V100 GPU.

Why Is NeRF Not Well Suited for Mesh Generation?

Unlike surface rendering, NeRF does not use an explicit surface representation; instead it represents objects as a density field. Rather than shading a single surface point, this method takes multiple locations in a volume into account in order to determine the color along each ray.

NeRF is capable of producing high-quality images, but the surfaces extracted as level sets of the density are not ideal. This is because NeRF does not take into account the specific density level required to represent the surface.
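A toy example makes the level-set problem concrete. The sketch below (illustrative values only) models the kind of soft density ramp a NeRF tends to learn near a surface; because density rises gradually rather than jumping, the recovered surface location shifts noticeably depending on which iso-level you pick.

```python
import numpy as np

# A soft density ramp like those NeRF learns near a surface: density rises
# gradually instead of jumping, so the "surface" depends on the level chosen.
z = np.linspace(0.0, 1.0, 1001)
density = 40.0 / (1.0 + np.exp(-(z - 0.5) * 30.0))   # sigmoid ramp centered at z=0.5

def level_set_crossing(level):
    """First depth at which density exceeds the chosen iso-level."""
    idx = np.argmax(density >= level)
    return z[idx]

# Two equally plausible thresholds place the surface at different depths:
s_low, s_high = level_set_crossing(5.0), level_set_crossing(35.0)
```

In a real 3D field this ambiguity, compounded by noise in low-density regions, is what produces the bumpy meshes described above.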

In a paper released by NVIDIA, researchers introduced a new method called Instant NeRF, which can train a radiance-and-density field in a fraction of the usual time and render high-quality images from it. Unfortunately, this method was not able to produce good meshes either. The approach did produce a decent volumetric radiance-and-density field, but the meshes generated from it seemed "noisy".

What If We Use Photogrammetry Instead?

Unlike photogrammetry, NeRF does not require the creation of point clouds, nor does it need to convert them to objects. Its output is faster, but unfortunately the mesh quality is not as good. In the example here, Caio Viturino, Simulations Developer for FS Studio, tested the idea of generating meshes of an acoustic guitar from the NeRF volume rendering by using NVIDIA Instant NeRF. The results are pretty bad, with lots of "noise".

NeRF
Image by Caio Viturino

Viturino also tried applying photogrammetry (using a simple cell phone camera) through existing software to compare with the NeRF mesh output, using the same set of images. The photogrammetry output looks better, but NeRF captures more of the object's fine detail.

Image by Caio Viturino

Can NeRF Be Improved to Represent Indoor Environments?

In a paper released by Apple, the team led by Terrance DeVries explained how they were able to improve the NeRF model by learning to decompose large scenes into smaller pieces. Although they did not talk about surface or mesh generation, they did create a global generator that can perform this task.

Unfortunately, the algorithm's approach to generating a mesh is not ideal. The problem with NeRF is that the algorithm generates a volumetric radiance-and-density field instead of a surface representation. Different approaches have tried to generate a mesh from the volumetric field, but only for single objects (360-degree scans).

Can NeRF Be Improved to Generate Meshes?

It is well known that NeRF does not admit accurate surface reconstruction. Therefore, some suggest that the algorithm should be merged with implicit surface reconstruction.

Michael Oechsle (2021) published a paper that unifies volume rendering and implicit surface reconstruction, and it can reconstruct meshes from objects more precisely than NeRF. However, the method applies to single objects rather than full scene reconstruction.

Do We Really Need a Mesh of the Scene or Can We Use the Radiance Field Instead?

NeRF is more accurate than point clouds or voxel models when it comes to surface reconstruction. It does not need to perform precise feature extraction and alignment.

Michal Adamkiewicz performed trajectory optimization for a quadrotor robot directly in the radiance field produced by NeRF instead of using a 3D scene mesh. The NeRF environment used to test the trajectory planning algorithms was generated from a synthetic 3D scene.
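The core idea of planning directly in the density field can be sketched very simply: treat accumulated density along a candidate path as a collision cost, so the optimizer prefers paths through empty space. The obstacle, paths, and cost function below are toy stand-ins, not Adamkiewicz's actual formulation.

```python
import numpy as np

# Toy radiance-field density: a single dense "pillar" obstacle at (0.5, 0.5).
def sigma(points):
    d2 = ((points - np.array([0.5, 0.5])) ** 2).sum(axis=1)
    return 100.0 * np.exp(-d2 / 0.01)

def collision_cost(waypoints):
    """Integrate density along a piecewise-linear path: dense regions cost more."""
    cost = 0.0
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        samples = a + np.linspace(0.0, 1.0, 20)[:, None] * (b - a)
        cost += sigma(samples).mean() * np.linalg.norm(b - a)
    return cost

straight = np.array([[0.0, 0.5], [1.0, 0.5]])              # goes through the obstacle
detour = np.array([[0.0, 0.5], [0.5, 0.9], [1.0, 0.5]])    # arcs around it
```

A planner minimizing this cost (plus dynamics terms) would naturally choose the detour, with no mesh ever being extracted.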

Unfortunately, it is not easy to create a mesh from the NeRF environment. To load the scene into Isaac Sim, we need a mesh representation of the NeRF.

Can We Map an Indoor Environment Using NeRF?

According to a report written by Xiaoshuai Zhang (2022), not yet. “While NeRF has shown great success for neural reconstruction and rendering, its limited MLP capacity and long per-scene optimization times make it challenging to model large-scale indoor scenes.”

The goal of Zhang’s paper is to incrementally reconstruct a large sparse radiance field from a long RGB image sequence (monocular RGB video). Although impressive and promising, 3D reconstruction from RGB images does not seem to be satisfactory yet. We can observe noise in the mesh produced by this method.

What If We Use RGB-D Images Instead of RGB Images?

Dejan Azinović (2022) proposed a new approach to generating 3D reconstruction of scenes that is much better than NeRF.

The image below shows how noisy the 3D mesh generated by the first proposed NeRF is compared to the Neural RGB-D surface reconstruction.

Enter the SNeRF!

A recent study conducted at Cornell University revealed that a variety of dynamic virtual scenes can be stylized using neural radiance fields at speeds fast enough to handle even highly complex content. The result is the stylized neural radiance field (SNeRF).

Led by researchers Lei Xiao, Feng Liu, and Thu Nguyen-Phuoc, the team was able to create 3D scenes that can be used in various virtual environments by using SNeRF to adapt a captured real-world environment to a reference style. Imagine looking at a painting and then seeing the world through the lens of the painting.

What Can SNeRFs Do?


The researchers were able to achieve this by using cross-view consistency, which is a type of visual feedback that allows them to observe the same object at different viewing angles, creating an immersive 3D effect.


The Cornell team was also able to take an image as a reference style and incorporate it into the creation process by alternating the NeRF and stylization optimization steps. This method allowed them to quickly stylize a captured real-world environment.

“We introduce a new training method to address this problem by alternating the NeRF and stylization optimization steps,” said the research team in their published paper. “Such a method enables us to make full use of our hardware memory capacity to both generate images at higher resolution and adopt more expressive image style transfer methods. Our experiments show that our method produces stylized NeRFs for a wide range of content, including indoor, outdoor and dynamic scenes, and synthesizes high-quality novel views with cross-view consistency.”
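The alternating scheme the team describes can be illustrated with a deliberately tiny stand-in: one scalar parameter pulled alternately toward a "content" target and a "style" target by gradient steps. This is only a cartoon of the training loop; the real method alternates full NeRF reconstruction steps with image style transfer steps, and all values here are made up.

```python
# Toy stand-ins: a single parameter fit to content, then nudged toward a style
# target, alternating as in the SNeRF training scheme (values illustrative).
content_target, style_target = 1.0, 3.0
theta = 0.0  # stands in for the NeRF weights

def grad_content(t):  # gradient of (t - content_target)^2
    return 2.0 * (t - content_target)

def grad_style(t):    # gradient of (t - style_target)^2
    return 2.0 * (t - style_target)

lr = 0.1
for step in range(200):
    if step % 2 == 0:              # NeRF (content) optimization step
        theta -= lr * grad_content(theta)
    else:                          # stylization optimization step
        theta -= lr * grad_style(theta)
# theta settles between the two targets, balancing content and style
```

Because each phase only needs its own objective in memory at a time, alternating also eases the memory pressure the researchers mention.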

The researchers also had to address NeRF's memory limitations in order to render high-quality 3D images at a speed that felt close to real-time. Their method involved creating a loop of views that allowed them to target the appropriate points in the image and then rebuild it with more detail.

Can SNeRF Help Avatars?

Through this approach, Lei Xiao, Feng Liu, and Thu Nguyen-Phuoc were able to create expressive 4D avatars that can be used in conversations. They were also able to create these avatars by using a distinct style of NeRF that allows them to convey emotions such as anger, confusion, and fear.

The Cornell research team's work on 3D scene stylization is still ongoing. They were able to create a method that uses implicit neural representations to affect the avatars' environment, and to take advantage of their hardware memory's capacity to create high-resolution images and adopt more expressive methods in virtual reality.

However, this is just the beginning and there is a lot more work and exploration ahead.

If you’re interested in diving deeper into the Cornell research team's work, you can access their report here.

Jensen Huang talks about the future of AI, robotics, and how NVIDIA will lead the charge.

By Bobby Carlton

A lot was announced and I did my best to keep up! So let's just jump right in!

NVIDIA CEO Jensen Huang unveiled new cloud services that will allow users to run AI workflows during his NVIDIA GTC keynote. He also introduced the company's new generation of GeForce RTX GPUs.

During his presentation, Huang noted that AI is fueling rapid advancements across computing, with accelerated computing serving as the engine of that innovation.

He also talked about the company's new initiatives to help companies develop new technologies and create new experiences for their customers. These include the development of AI-based solutions and the establishment of virtual laboratories where the world's leading companies can test their products.

The company's vision is to help companies develop new technologies and create new applications that will benefit their customers. Through accelerated computing, Huang noted, AI will be able to unlock the potential of the world's industries.

NVIDIA

The New NVIDIA Ada Lovelace Architecture Will Be a Gamer and Creators Dream

Enterprises will be able to benefit from the new tools that are based on the Grace CPU and the Grace Hopper Superchip. Those developing the 3D internet will also be able to get new OVX servers powered by the Ada Lovelace L40 data center GPU. Researchers and scientists will gain new capabilities from the NVIDIA NeMo LLM Service and from Thor, a new superchip with a performance of over 2,000 teraflops.

Huang noted that the company's innovations are being put to work by a wide range of partners and customers. To speed up the adoption of AI, he announced that Deloitte, the world's leading professional services firm, is working with the company to deliver new services based on the NVIDIA Omniverse and AI.

He also talked about the company's customer stories, such as the work of Charter, General Motors, and The Broad Institute. These organizations are using AI to improve their operations and deliver new services.

The NVIDIA GTC event, which started this week, has become one of the most prominent AI conferences in the world. Over 200,000 people have registered to attend the event, which features over 200 speakers from various companies.

A ‘Quantum Leap’: GeForce RTX 40 Series GPUs

Nvidia

NVIDIA's first major event of the week was the unveiling of the new generation of GPUs, which are based on the Ada architecture. According to Huang, the new generation of GPUs will allow creators to create fully simulated worlds.

During his presentation, Huang showed the audience a demo called "Racer RTX." It is a fully interactive simulation that uses only ray tracing.

The company also unveiled various innovations that are based on the Ada architecture, such as a Streaming Multiprocessor and a new RT Core. These features are designed to allow developers to create new applications.

Also introduced was the latest version of its DLSS technology, DLSS 3, which uses AI to create new frames by analyzing the previous ones. This feature can boost game performance by up to 4x. Over 30 games and applications have already announced support for DLSS 3. According to Huang, the technology is one of the most significant innovations in the gaming industry.

Huang noted that the company's new generation of GPUs, which are based on the Ada architecture, can deliver up to 4x more processing throughput than its predecessor, the 3090 Ti. The new GeForce RTX 4090 will be available in October. Additionally, the new GeForce RTX 4080 is launching in November with two configurations.

  1. The 16GB version of the new GeForce RTX 4080 is priced at $1,199. It features 9,728 CUDA cores and 16 GB of high-speed GDDR6X memory. Compared to the 3090 Ti, the new 4080 is twice as fast in games.
  2. The 12GB version of the new GeForce RTX 4080 is priced at $899. It features 7,680 CUDA cores and 12 GB of high-speed GDDR6X memory. With DLSS 3, it is faster than the 3090 Ti.

Huang noted that the company's Lightspeed Studios used the Omniverse technology to create a new version of Portal, one of the most popular games in history. With the help of the company's AI-assisted toolset, users can easily up-res their favorite games and give them a physically accurate depiction.

According to Huang, large language models and recommender systems are the most important AI models being used today.

He noted that recommenders are the engines of the digital economy, responsible for powering many of its services.

The Transformer deep learning model, introduced in 2017, has led to the development of large language models that are capable of learning human language without supervision.

Image from NVIDIA

“A single pre-trained model can perform multiple tasks, like question answering, document summarization, text generation, translation and even software programming,” said Huang.

The company's H100 Tensor Core GPU, which is built around the next-generation Transformer Engine, is in full production, and systems are shipping soon.

“Hopper is in full production and coming soon to power the world’s AI factories,” Huang said.

Several of the company's partners, such as Atos, Cisco, Fujitsu, GIGABYTE, Lenovo, and Supermicro, are currently working on implementing the H100 technology in their systems. Some of the major cloud providers, such as Amazon Web Services, Google Cloud, and Oracle, are also expected to start supporting the H100 platform next year.

According to Huang, the company's Grace Hopper, which combines an Arm-based CPU with Hopper GPUs, will deliver a 7x increase in fast-memory capacity and a massive leap in recommender systems.

Weaving Together the Metaverse: L40 Data Center GPUs in Full Production

During his keynote at the company's annual event, Huang noted that the future of the internet will be further enhanced with the use of 3D. The company's Omniverse platform is used to develop and run metaverse applications.

He also explained how powerful new computers will be needed to connect and simulate the worlds that are currently being created. The company's OVX servers are designed to support the scaling of metaverse applications.

The company's 2nd-generation OVX servers will be powered by the Ada Lovelace L40 data center GPUs.

Thor for Autonomous Vehicles, Robotics, Medical Instruments and More

Today's cars are equipped with various computers for cameras, sensors, and infotainment systems. In the future, these functions will be delivered by software that can improve over time. To power these systems, Huang introduced the company's new product, called Drive Thor, which combines the company's Grace Hopper and Ada GPU technologies.

The company's new Thor superchip, which is capable of delivering up to 2,000 teraflops of performance, will replace the company's previous product, the Drive Orin. It will be used in various applications, such as medical instruments and industrial automation.

3.5 Million Developers, 3,000 Accelerated Applications

According to Huang, over 3.5 million developers have created over 3,000 accelerated applications using the company's software development kits and AI models. The company's ecosystem is also designed to help companies bring their innovations to the world's industries.

Over the past year, the company has released over a hundred software development kits (SDKs) and introduced 25 new ones. These new tools allow developers to create new applications that can improve the performance and capabilities of their existing systems.

New Services for AI, Virtual Worlds

Image from FS Studio

Huang also talked about how the company's large language models are the most important AI models currently being developed. They can learn to understand various languages and meanings without requiring supervision.

The company introduced the NeMo LLM Service, a cloud service that allows researchers to train AI models on specific tasks. To help scientists accelerate their work, the company also introduced the BioNeMo LLM Service, which allows them to create AI models that can understand proteins, DNA, and RNA sequences.

Huang announced that the company is working with The Broad Institute to create libraries that are designed to help scientists use the company's AI models. These libraries, such as the BioNeMo and Parabricks, can be accessed through the Terra Cloud Platform.

The partnership between the two organizations will allow scientists to access the libraries through the Terra Cloud Platform, which is the world's largest repository of human genomic information.

During the event, Huang also introduced the NVIDIA Omniverse Cloud, a service that allows developers to connect their applications to the company's AI models.

The company also introduced several new containers that are designed to help developers build and use AI models. These include the Omniverse Replicator and the Farm for scaling render farms.

Omniverse is seeing wide adoption, and Huang shared several customer stories and demos:

  1. Lowe's is using Omniverse to create and operate digital twins of its stores.
  2. Charter, a $50 billion telecommunications company, is using the company's AI models to create digital twins of its networks.
  3. General Motors is also working with its partners to create a digital twin of its design studio in Omniverse. This will allow engineers, designers, and marketers to collaborate on projects.

Image from Lowes

The company also introduced a new Jetson Orin Nano robotics computer that can be used to build and run AI models.

Huang noted that the company's second-generation processor, known as Orin, is a home run for robotic computers. He also noted that the company is working on developing new platforms that will allow engineers to create artificial intelligence models.

To expand the reach of Orin, Huang introduced the new Jetson Orin Nano, a tiny robotics computer that is 80x faster than the previous-generation Jetson Nano.

The Jetson Orin Nano runs the company's Isaac robotics platform and features the GPU-accelerated NVIDIA ROS 2 framework. It also works with a cloud-based robotics simulation platform called Isaac Sim.

For developers who are using Amazon Web Services' (AWS) robotic software platform, AWS RoboMaker, Huang noted that the company's containers for the Isaac platform are now available in the marketplace.

New Tools for Video, Image Services

According to Huang, the increasing number of video streams on the internet will be augmented by computer graphics and special effects in the future. “Avatars will do computer vision, speech AI, language understanding and computer graphics in real time and at cloud scale."

To enable new innovations in the areas of communications, real-time graphics, and AI, Huang noted that the company is developing various acceleration libraries, such as CV-CUDA for GPU-accelerated computer vision in the cloud. The company is also working on a sample application called Tokkio that can be used to provide customer-service avatars.

Deloitte to Bring AI, Omniverse Services to Enterprises

In order to accelerate the adoption of AI and other advanced technologies in the world's enterprises, Deloitte is working with NVIDIA to bring new services built on its Omniverse and AI platforms to the market.

According to Huang, Deloitte's professionals will help organizations use the company's application frameworks to build new multi-cloud applications that can be used for various areas such as cybersecurity, retail automation, and customer service.

NVIDIA Is Just Getting Started

During his keynote speech, Huang talked about the company's various innovations and products that were introduced during the course of the event. He then went on to describe the many parts of the company's vision.

“Today, we announced new chips, new advances to our platforms, and, for the very first time, new cloud services,” Huang said as he wrapped up. “These platforms propel new breakthroughs in AI, new applications of AI, and the next wave of AI for science and industry.”

By Bobby Carlton

The Internet of Things (IoT) is a system of devices and objects that can be connected to each other and communicate with other systems and devices without human intervention. These objects or devices usually have sensors, cameras, and RFID tags, and they can communicate with one another through a communication interface. These systems can then perform various tasks and provide a single service to the user.

The truth is that IoT is the foundation and backbone of digital twinning.

As we become more digitally connected in almost all aspects of our lives, IoT becomes a vital component of the consumer economy by enabling the creation of new and innovative products and services. The rapid emergence and evolution of this technology has led to the creation of numerous opportunities but also some challenges.

Due to technological convergence across different industries, the scope of IoT is becoming more diverse. It can be used in various fields such as healthcare, home security, and automation through devices such as Roombas or smart speakers. Of course, there are also numerous embedded systems behind this technology, such as sensors and wireless communications, that enable the automation of your home or business.

With the rapid increase in the number of connected devices and the development of new technologies such as AR, VR, and XR, the adoption of these products and services is expected to increase.

According to Statista, the global market for IoT is currently valued at around 389 billion US dollars. This value is expected to reach over a trillion dollars by 2030, reflecting the increasing number of connected devices and the technological advancements driven by the growth of digital twinning. It is also expected to boost the consumer economy by increasing the demand for various products and services.

In 2020, the consumer market contributed around 35% of the IoT market's value. However, it is expected that this will increase to 45% by 2030. This is because the market is expected to expand with the emergence of new markets such as the automotive, security, and smartphone sectors.

At its core, the Internet of Things is a device layer that brings connectivity to devices that were previously not connected to the internet. It can also act as a connective link between different devices, such as tablets and smartphones.

These devices can connect using various types of wireless networking solutions and physical means, and they can communicate with one another and with the cloud. Through the use of sensors, these systems can provide users with a variety of services and features. They can be controlled and customized through a user interface, which is typically accessible through a website or app.

A typical smart bulb IoT system consists of various components, such as a wireless communication interface, LED light-generating devices, and a control system. These components work together seamlessly, with the user able to access their devices through a mobile app or website. A great example of this is a Google Nest system that monitors your front door and controls your home thermostat, which can be purchased at almost any hardware or lifestyle store.

Image from Target
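The layering described above can be sketched as a toy model: a device object holding local state, plus a controller standing in for the app or cloud interface that relays user commands. All class names and command fields here are illustrative, not any real vendor's API.

```python
class SmartBulb:
    """Device layer: local state plus a handler the wireless interface would invoke."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.on = False
        self.brightness = 0  # percent

    def handle_command(self, command):
        if command["action"] == "turn_on":
            self.on, self.brightness = True, command.get("brightness", 100)
        elif command["action"] == "turn_off":
            self.on, self.brightness = False, 0
        return {"device": self.device_id, "on": self.on, "brightness": self.brightness}

class HomeController:
    """Stands in for the app/website layer that relays user commands to devices."""
    def __init__(self):
        self.devices = {}

    def register(self, bulb):
        self.devices[bulb.device_id] = bulb

    def send(self, device_id, command):
        return self.devices[device_id].handle_command(command)

controller = HomeController()
controller.register(SmartBulb("porch-1"))
status = controller.send("porch-1", {"action": "turn_on", "brightness": 60})
```

A real system would put a wireless protocol (Wi-Fi, Zigbee, Thread) and a cloud service between these two layers, but the division of responsibilities is the same.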

Aside from these, other IoT systems such as smart televisions, smart refrigerators, and smart speakers are also becoming more popular among consumers. These kinds of devices can be combined with a home's existing smart home technology to provide users with a variety of services and features designed to streamline and automate your home experiences. 

Of course privacy and data are two things consumers and businesses need to consider when bringing these devices into their environments. How much are you giving up in order to streamline or automate your home or business? We are already in the habit of giving up some of our privacy through smartphone use and other wearables.

One of the most common uses of IoT technology in the consumer economy is to improve customer service. Enterprises use it to improve the efficiency of their distribution channels by implementing a variety of systems, such as inventory management and product tracking. In addition, construction sites and cars are also using IoT to monitor their environments to reduce downtime and improve their overall performance.

Other industries that use IoT prominently include government facilities, transportation systems, and healthcare systems. Through the use of IoT, these organizations can improve the efficiency of their operations and increase the effectiveness of their systems. The technology can help the consumer economy by enhancing the services these organizations provide.

The connectivity and data technology has also improved, with devices now capable of handling and storing large amounts of data. The ability to process and analyze data is becoming more sophisticated. Various factors such as the evolution of cloud technologies and the increasing capacity of storage systems have made it easier for devices to store and process data.

The number of companies and organizations investing in the development of IoT devices is expected to continue to increase, helping them gain a competitive advantage and develop new solutions that will significantly impact the consumer economy.

By Bobby Carlton

Warehouse automation systems may seem like they’re a dime a dozen; however, each approach is different, with some relying on humans to manage them, many others relying on robotics and automation, and of course we’ve seen a blended approach with automation, robotics, and humans working together.

One solution is using AI to help drive automation along with other technologies such as robotics and XR. Data shows that we can improve work environments through automation, but getting everyone around the world to adopt the approach isn’t that easy.

However, a new global initiative to create efficiencies is a hot conversation at the moment. AI and automation are about to drastically change the way businesses (large and small) and even governments operate, through a push that will include cutting-edge technology such as natural language processing, machine learning, and autonomous systems delivered via robotics and XR solutions.

The objective of the Artificial Intelligence Act will be to create a safer and more efficient work process that can help organizations explore “what if” scenarios and be more predictive, explore recommendations and different paths to success, and even help company leaders make important company-wide decisions.

One thing to keep in mind is that regulating the approach varies in different parts of the world from China, the European Union, and the U.S., and that as businesses invest their resources into AI and automation, they will have to ensure they comply with all of the regulations in place.

For example, the Chinese government is being a bit more forward thinking by moving AI regulations beyond the proposal stage and has already passed a regulation that mandates companies must notify users when an AI algorithm (or avatar) is involved. This means that any business in China must adopt AI and automation compliances which will impact both customers and the workforce. 

The European Union's approach, meanwhile, has a much broader scope than China's. The focus of its proposed regulation is on the risks created by AI, sorted into four categories: minimal risk, limited risk, high risk, and unacceptable risk. Using AI in automation applications would require companies to provide human oversight and ongoing monitoring of facilities using robotics and XR solutions.

Those companies will be required by law to register stand-alone high-risk AI systems such as remote biometric identification systems. 

Once passed, the EU would implement this process by Q2 of 2024 and companies could see hefty fines for noncompliance ranging from 2% to 6% of the company’s annual revenue. 

Here in the United States, it’s a bit more of a fragmented approach, with each state creating its own version of AI and automation laws, which, as you would guess, could end up being pretty confusing for anyone, especially companies with warehouses or offices in multiple states. To help create a more unified approach, the Department of Commerce announced the appointment of 27 experts to the National Artificial Intelligence Advisory Committee (NAIAC). This committee will advise the President and the National AI Initiative Office on a range of important issues related to AI and other technologies such as robotics and XR, and their use in automation across all states, helping tighten up the AI and automation goals in the U.S.

It would also provide recommendations on topics such as the current state of U.S. AI competitiveness, the state of science around AI technology, and AI workforce issues. The committee will also be responsible for advising on the management and coordination of the initiative itself, including its balance of activities and funding.

What all of this means is that governments want their businesses to embrace and adopt new technology as part of their workforce solutions. They are very aware of the benefits of AI, XR, robotics, and automation in the workforce, and of how those benefits have a global impact on business, consumerism, and the overall economy of a country.

At the heart of all of this is manufacturing and warehouses.

Manufacturing companies could use AI, warehouse automation, and XR to act on latency-sensitive information, such as anomaly detection and real-time quality monitoring, with ultra-fast response times. This would allow manufacturers to take action immediately to prevent undesirable consequences, streamline productivity, increase workforce safety, and automate warehouse processes, so companies can maintain their equipment in a timely manner and prevent any type of shutdown or dangerous environment.

AI and automation would provide real-time prediction capabilities that let you deploy predictive models on edge devices such as machines, local gateways, or servers in your factory, playing a role in accelerating Industry 4.0 adoption.
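As a rough illustration of the kind of latency-sensitive, on-device prediction described above, an edge model can be as simple as a rolling statistical check on a sensor stream. This is a minimal sketch, not any vendor's implementation, and every name in it is hypothetical:

```python
from collections import deque
import statistics

class EdgeAnomalyDetector:
    """Flags readings that deviate sharply from the recent rolling window."""

    def __init__(self, window=50, threshold=3.0):
        self.readings = deque(maxlen=window)  # recent history only
        self.threshold = threshold            # allowed standard deviations

    def check(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        if len(self.readings) >= 10:  # need enough history to judge
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            if stdev > 0 and abs(value - mean) > self.threshold * stdev:
                return True  # trigger an immediate local response
        self.readings.append(value)
        return False

detector = EdgeAnomalyDetector()
for temp in [70.1, 70.3, 69.9, 70.0] * 10:  # normal temperature stream
    detector.check(temp)
print(detector.check(95.0))  # a spike far outside the rolling band: True
```

Running directly on a machine or local gateway, a check like this can react in microseconds, long before a round trip to the cloud would complete.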

Wireless technology, specifically the 5G network, is on the rise globally. Advances in these networks, like the wireless IoT destined to make factories smarter, are bringing huge transformations to the manufacturing industry. Many industries have now adopted predictive and prescriptive maintenance, self-healing production with almost non-existent downtime, remote-controlled processes, autonomous robotics, and augmented reality systems. 

Because of faster network speeds, lower latency, and higher data throughput for the connected devices and data processing required on the factory floor, along with support for huge numbers of low-power, battery-powered sensors, 5G is fast becoming the future of communication in the manufacturing industry. 

Most companies today are inefficient in their operations. Old technologies, broken supply chains, lack of production visibility, and lack of IT integration are some of the factors keeping companies from operating at full capacity. In terms of productivity and operational efficiency, converting a manufacturing factory into a smart factory offers enormous benefits and enables factories to overcome these inefficiencies. To make that conversion, companies are searching for new technologies to improve their performance. This is where the Industrial Internet of Things (IIoT) comes into play. 

Read more: Enabling Smarter Industrial Processes with Edge-to-Cloud Intelligence

What is IoT?

The Internet of Things refers to a system of interrelated computers, machines, objects, and even people or animals assigned unique identifiers (UIDs) that are capable of transferring data over a network without any human-to-computer interaction. IoT covers all the objects connected to the internet and the communication between them through data transfer over the cloud. Industrial IoT, then, refers to a network of interconnected sensors, instruments, actuators, and other devices networked together with a company's industrial applications, facilitating improvements in the company's productivity and efficiency. 

The success of any manufacturing business depends on how efficiently it can reduce manufacturing costs and make production as effective as possible. Using IIoT in the manufacturing process enables a whole new level of efficient production. 

Challenges in implementing IoT solutions on the factory floor

As mentioned, transforming a manufacturing facility into a smart one brings enormous benefits in terms of productivity and operational efficiency. However, there are a number of challenges when implementing IoT solutions on the factory floor. We discuss some of them below.

Connectivity: One of the main challenges of converting a manufacturing factory into a smart factory is connecting devices on the plant floor. Since the beginning of networking, companies have preferred wired connectivity in manufacturing because of the lower bandwidth of wireless networks and their inability to penetrate buildings made of concrete. In addition, until the arrival of 5G, manufacturers had not seen reliability in previous generations of wireless networks that could outweigh the risks of adopting them. 

However, with the advent of 5G, companies are starting to realize the reliability and productivity of wireless networks in even the most demanding of applications, such as automation control and high throughput vision. 

Yet, wired network connectivity is still present in many factories even today. To deploy 5G successfully in a manufacturing environment, the collaboration between all the systems from corporate information and communication technologies (IT) to the manufacturing operational technologies (OT) is a must. 

Interoperability: The latest innovations such as IIoT, Artificial Intelligence (AI), machine learning (ML), etc. have resulted in the integration of automated devices and services into unified networks. As a consequence, the need for interoperability has increased dramatically. 

In an interconnected system, all of the components must be able to communicate with each other efficiently. Without interoperability between all of those components, the full potential of IoT cannot be fulfilled.

Security: There's a growing concern that businesses aren't taking strong enough security precautions. Lax security has led to several infamous incidents in the recent past; Mirai, Stuxnet, and the Jeep hack are some of the attacks that have made us realize the importance of security measures. 

IoT security is a concern for any device connected to the IoT. If the network is accessible, the connected devices can be hacked, obstructing the production process. For manufacturers, IoT security is challenging in two ways: vulnerabilities within the products themselves, and production halts caused by security breaches. 

Read more: How Will AI Transform IoT Architecture?


Wireless IoT for Smart Factories Today

With the advent of the next industrial revolution, the Fourth Industrial Revolution (FIR) or Industry 4.0, more and more companies are moving toward emerging technologies like IoT for smarter solutions to optimize their industrial processes and factories. The global IoT market clearly reflects this rapid pace of adoption in industrial applications.

The IoT market was worth $761 billion in 2020. Researchers predict that the market valuation for the IoT industry as a whole will cross $1,386 billion by 2026, a compound annual growth rate (CAGR) of more than 10%. This figure is also expected to rise further once the effects of the COVID-19 pandemic on industry subside. 
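That growth rate can be sanity-checked directly: growing from roughly $761 billion in 2020 to $1,386 billion in 2026 spans six years, so the implied CAGR is (1386/761)^(1/6) − 1, which works out to about 10.5%, consistent with the "more than 10%" figure:

```python
# Compound annual growth rate implied by the cited market figures
start, end, years = 761, 1386, 6  # billions USD, 2020 -> 2026
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 10.5%
```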

This trend of adoption is already evident with major technological and industrial players shifting towards wireless IoT technologies in their industrial processes and factories. 

Caterpillar is one of the best-known companies to adopt wireless IoT. Only a few years ago, the company began fitting its machinery with sensors and network connectivity, enabling users to closely monitor and optimize processes. Its digital solution, Cat Connect Technologies and Services, installed on more than half a million vehicles, collects and analyzes usage data from the machines with regard to equipment management, safety, sustainability, and productivity. With this huge database, users can create predictive maintenance solutions and discover new ways to increase efficiency. 

Hortilux is another company using wireless IoT to make its factories smarter. Like Caterpillar, Hortilux helps customers make better, more informed decisions with accurate data analysis. Hortilux equips its equipment with cloud-enabled sensors that connect growers to HortiSense, a software solution that analyzes growing conditions, including weather forecasts. 

Faurecia is a renowned manufacturer of interiors and emissions controls for automobiles. Among its clients are Volkswagen, GM, and Ford, to name a few. Faurecia, like other component manufacturers, is undergoing a massive digital transition. The company constructed a 400,000-square-foot facility in 2016 with Industrial IoT and automation in mind.

The facility's PLC-enabled machines are all connected to a single computer, dubbed the "lake," which connects plant floor activities to their execution and reporting systems. The resulting integrated system provides accurate operational transparency, unrivaled production quality control, and seamless components traceability.

The new technology also improves the quality and speed of communication throughout the company. A stable high-speed internet connection is available on both the plant floor and in the management areas, allowing operators and management to respond promptly to any issues that may emerge.

Tesla’s Wireless Industrial IoT strategy is about looking at the factory as a product, rather than a place. Tesla solves manufacturing issues as if they are debugging software by developing solutions that draw from their diverse technical and engineering backgrounds.

In Tesla’s Gigafactory, you’ll find Autonomous Indoor Vehicles (AIVs) which improve the transfer of materials between workstations. These vehicles operate based on a complex logic algorithm, meaning they don’t require any preset path to carry out their duties. The vehicles can carry payloads up to 130 lbs., and can even charge their own battery without intervention.

Among these companies, Airbus is perhaps using wireless IoT at the most prominent level.

How Airbus is using Wireless IoT 

Airbus is using wireless IoT to support aircraft assembly. During manufacturing, thousands of rivets are used to attach panels to an airframe, and the panels need to be drilled and fastened in a very particular way. Riveting must be done in the right order, with the correct torque settings applied to the torque wrenches.

Airbus drills an estimated 120 million holes a year to fasten aircraft panels. Only 25% of this activity is automated while 75% of the work is done manually.

The manual processes for drilling and fastening panels require tools to be configured with very specific settings, and panels must be fastened to the airframe in a consistent way.

If a tool operator works at a station on the aircraft production line for an eight-hour shift, but the panel being fastened takes 12 hours to finish, the work needs to be passed on to another operator. There is a potential problem at the hand-over interface between the end of the first operator’s shift and what is then communicated to the next operator, who has to carry on where the first operator left off.

If the second operator then takes over but works in the wrong way and the work needs to be redone, this could prove very expensive. The answer is to monitor each operator's work in near real-time so that production errors can be corrected immediately.

Today, the tools used by operators working on an aircraft’s construction have sensors. Typically, IoT applications deploy sensors at the so-called edge, onto physical devices, which then feed data back to a centralized control and feedback system, which acts like a supervisor.

The IoT application in a centralized architecture assumes it will always have a network connection. It depends on a reliable network supporting hundreds of operators using tools with sensors, all operating at the same radio frequency. The tools will all be sending data to the back-end system simultaneously, and this is likely to cause a network contention issue at some point.

But Airbus realized that such an architecture would not be practical on the factory floor. Instead of relying on a back-end server that knows everything about the process, with dumb clients at the edge, each tool has its own set of capabilities. The tools are preset for the job but can be configured on the fly. The software that supports the tools provides the necessary intelligence to manage the hand-over between shifts and ensure that production errors are rectified quickly.

So, to avoid potential errors in production, the tools themselves need intelligence at the IoT edge. They run a small piece of agent software that sends a 36-byte message to an HPE Edgeline server, using a non-standard network protocol that supports a very low data latency of 50 milliseconds. This enables an adjustment to be made on the tool, or the operator to be alerted about the error, very quickly, which reduces lost production time.
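Airbus's actual message format and protocol are not public, but the idea of a compact, fixed-size telemetry frame is easy to picture. The sketch below packs a hypothetical 36-byte frame; every field name and size here is invented for illustration, not taken from the real system:

```python
import struct
import time

# Hypothetical layout for a fixed 36-byte tool telemetry frame.
# Little-endian, no padding: 4 + 8 + 8 + 4 + 4 + 8 = 36 bytes.
FRAME = struct.Struct("<IddIIq")

def pack_frame(tool_id, torque_nm, target_nm, station, seq):
    """Pack one reading from a connected torque tool into a wire frame."""
    return FRAME.pack(tool_id, torque_nm, target_nm, station, seq,
                      time.time_ns())  # nanosecond timestamp for ordering

frame = pack_frame(tool_id=42, torque_nm=17.8, target_nm=18.0,
                   station=7, seq=1001)
print(len(frame))  # 36
```

Keeping every frame the same small size makes per-message overhead predictable, which is what makes millisecond-scale latency budgets feasible over a shared radio network.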

Airbus is an operator-driven company and technology must be deployed as an enabler. This means that digitization cannot stop aircraft production, or get in the way of operators doing their job.


Conclusion

Today, the manufacturing industry has a unique opportunity to upgrade its facilities' wired systems to wireless for increased efficiency. The multiplicity of new applications necessitates improved industrial communication. As a result, wireless communication, especially the wireless IoT destined to make factories smarter, is becoming business- and mission-critical, bringing increasingly stringent reliability, latency, and security requirements.

Smart devices and sensors are rapidly changing our lives and industries, from healthcare facilities to the automobile industry. However, the sheer amount of continuous data collected by the billions of smart devices and sensors that constitute the Internet of Things (IoT) can overwhelm industries and businesses that rely on traditional IoT architecture, which raises the question of how AI will transform IoT architecture.

The solution to this problem is the use of Artificial Intelligence in IoT architecture. Integrating AI into IoT builds systems that automatically gather and process data, enabling the extraction of actionable insights in real time without any human intervention. As a result of AI-powered IoT advancements, we can now lower costs and improve productivity through data-driven decision-making and smart automation. 

Usually, when people think of the Internet of Things, they think of smart-home devices, cars with autopilot, or other smart devices connected to the internet. Those smart devices are a part of IoT, but IoT is mostly about data, management, communication, and processing: a system of interrelated computers, machines, objects, and even people or animals assigned unique identifiers (UIDs), capable of transferring data over a network without any human-to-computer interaction, along with all the communication between those objects through data transfer over the cloud. 

Read more: How mixed reality is different from VR and AR?

A "thing" in the IoT can be a person with a heart monitor implant or a vehicle with tire-pressure sensors: anything capable of transferring data over a network.

In today’s world of information technology, business organizations are increasingly using IoT to enhance customer service, improve decision-making and increase business value. 

For example, commercial airlines use IoT to monitor an aircraft's altitude, coordinates, and airspeed, identify critical problems such as engine failure, and then process and analyze the data transferred by the sensors to make better decisions and make flights safer. 

Today, billions of devices are connected over the internet, and they produce and transfer trillions of bytes of data every day. To process, manage and analyze such a sheer volume of data, designing efficient IoT architecture is crucial.

Although IoT adoption is increasing rapidly, you must understand IoT architecture before deploying your network of smart devices or using AI in your existing IoT system. 

We can often describe IoT architecture as a four-stage process that oversees data transfer from the "things" into a network and finally to a data center or the cloud for processing, analysis, and storage. IoT architecture is also responsible for sending data in the opposite direction to command an actuator to take action. For instance, in the commercial airline example above, the data relating to an engine failure goes through processing and analysis after detection; the system then transfers commands back to the actuators, which immediately take the necessary actions. 

Let us look at the four stages of IoT architecture below.

Stage 1. Sensors and Actuators: Sensors and actuators are the devices that monitor or control "things." Sensors collect data on the physical condition of the environment, such as temperature, pressure, chemical composition, distance, speed, or the fluid level in a tank. The data generated by sensors is converted into digital form and then transmitted to the internet gateway stage. Actuators perform actions as defined by instructions or commands sent to them through the cloud, such as adjusting a fluid flow rate or having an industrial robot avoid an obstacle. For an actuator to perform actions efficiently, very low latency between the sensor and the actuator is crucial. 

Stage 2. Data Acquisition and Internet Gateways: A data acquisition system (DAS) receives the raw data from sensors and converts it from its natural analog form into digital format.

DAS then sends the processed data through an internet gateway via wireless WANs or wired WANs. Since there can be hundreds of sensors sending raw data simultaneously, this is the stage where the volume of information is at its maximum. Thus, for efficient transmission, the data generally goes through filtration and compression.  

Stage 3. Edge or fog computing: After digitization and aggregation, the data still needs further processing to reduce its volume before it is sent to the data center or cloud. Therefore, an edge device performs some analytics as pre-processing. Usually, such processing takes place on a device close to the sensors, because the edge stage is all about time-critical operations that require analyzing the data as quickly as possible. 

Stage 4. Cloud or Data Center: In this stage, robust IT systems in the corporate data center or the cloud analyze, manage, and safely store the data. Data from multitudes of sensors is aggregated, providing a broader picture of the IoT system so that IT and business managers can draw actionable insights. At this level, the company can use specific applications to perform in-depth analysis and determine whether a particular action needs to be taken. This stage also includes storing data for documentation as well as for further research. 
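The four stages above can be sketched end-to-end as a single data path. This is a toy illustration of the flow, with every function name and value invented:

```python
def stage1_sense():
    """Stage 1: sensors produce raw readings (e.g. bearing temperatures, degrees C)."""
    return [21.7, 21.9, 22.4, 35.2, 22.1]

def stage2_gateway(readings):
    """Stage 2: the DAS digitizes, filters out invalid samples, and forwards."""
    return [round(r, 1) for r in readings if r > 0]

def stage3_edge(readings, limit=30.0):
    """Stage 3: edge pre-processing keeps only time-critical exceedances."""
    return [r for r in readings if r > limit]

def stage4_cloud(alerts):
    """Stage 4: cloud/data center aggregates, stores, and decides on action."""
    return {"alerts": alerts, "act": bool(alerts)}  # command an actuator if needed

result = stage4_cloud(stage3_edge(stage2_gateway(stage1_sense())))
print(result)  # {'alerts': [35.2], 'act': True}
```

Note how the data volume shrinks at each hop: five raw readings become one alert, and only that alert (plus the return command to an actuator) crosses the expensive link to the cloud.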

So, where does AI come into play? IoT is about sensors, actuators, and the data they transmit through internet connectivity. IoT architecture starts at the data collection stage and terminates at the stage of an "act." Undeniably, the quality of the "act" depends on the data analysis, and that is where AI plays a crucial role. 

Read more: In-Flight Peloton Classes with AR VR Could Reduce Fear of Flights

IoT provides data. But it is AI that has the power to drive smart actions. Data sent from the sensors can be analyzed with AI, which enables businesses to make informed decisions. The use of AI in IoT allows for the following benefits:

1. Enhanced operational efficiency: AI can detect patterns that provide insight into redundant and time-consuming processes, enhancing the efficiency of operations. 

2. Risk management: AI improves risk management by automating responses to events outside preset parameters, allowing for better handling of financial loss, safety incidents, and cyber attacks. 

3. Creation of new and enhanced products and services: Together, IoT and AI can power new products and services that process and analyze data rapidly, such as chatbots and smart assistants. 

4. Increased IoT scalability: IoT includes a massive array of sensors that gather large volumes of data. AI-powered IoT systems can analyze, filter, and compress that data before transferring it to other devices.


Examples of integration of AI in IoT

1. Robots in manufacturing: Robots employed in manufacturing are fitted with sensors that enable data transmission and run AI algorithms, saving time and cost in the manufacturing process. 

2. Self-driving cars: Self-driving cars are the best example of the use of AI in IoT. The AI in these cars can predict the behavior of pedestrians in numerous situations. It also enables the cars to assess road conditions, traffic, and the appropriate speed for the weather. 

3. Smart cities: AI can help build smart cities by analyzing and optimizing resources such as energy and water consumption. 

4. Healthcare: Currently, IoT is predominantly used in healthcare to monitor patients' vitals remotely. With AI, smart pill technologies and virtual/augmented reality tools can be implemented for better patient care. 

5. Smart thermostats: Nest's smart thermostat is another example of AI-integrated IoT. Through smartphone integration, the temperature can be checked and managed from anywhere without human interaction, based on variables such as the user's work schedule and preferences. 

6. Financial Services: AI in IoT enables financial institutions to replace sensitive financial data with unique and secure digital identifiers. 

Challenges

As with any technological system, the integration of AI in IoT is not without any challenges. Some of them are as follows:

1. Sensor issues which include security, power management, and heterogeneity of the sensors. 

2. Lack of technical expertise regarding the extraction of value from data. 

3. Networking issues including power consumption, lack of machine-to-machine communication, etc. 


Conclusion

A business can significantly benefit from the integration of AI in its IoT architecture. In addition to lowering production costs, it can improve service delivery, enhance the customer experience, and much more. However, business owners must keep in mind that more data does not equate to improved business efficiency. Therefore, assessing the actual need is the first requirement before installing new tools and devices or committing to a particular IoT infrastructure and letting AI transform it. Only then can you make an informed decision on whether or not to enhance your business operations by connecting your devices to AI-powered IoT systems. 
