By Bobby Carlton
2023 is seeing rapid growth in OpenAI's tools such as ChatGPT, and the technology is changing education, enterprise, and the world in general. It is clear that the rapid emergence and evolution of AI technologies will have a significant impact on the future of education and learning.
This past week, prominent tech industry figures, including Steve Wozniak and Elon Musk, called for a six-month moratorium on the development of AI systems more powerful than GPT-4, to consider the risks associated with AI and build a better understanding of how it will affect our society.
In a petition published by the Future of Life Institute, the signatories warned that the development of AI systems capable of human-level intelligence could threaten the well-being of people and society.
The petition urges policymakers and the private sector to work together to develop regulations and guidelines that will help protect the privacy and security of individuals as they use GPT-4 and other AI systems, stating, "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
The rapid emergence and evolution of AI systems has also raised concerns about the potential impact on society and jobs.
Sam Altman, a co-founder of OpenAI, stated back in 2015 that superintelligence could threaten the existence of humanity. In a 2023 podcast with Kara Swisher, he said that he still feels the same way, even as GPT-4 accelerates the field.
It is safe to say that many industries are adopting new technologies such as robotics and AI that will change the future of work. Microsoft, Meta, Google, Amazon, Nvidia, and many others see tools such as AI playing a key role in how we work and how companies evolve. But there are still concerning questions looming as they enter this new era, and experts would like us to think about how it impacts the following:
Copyrights and Ownership of AI Generated Content
One of the most important questions that I have been asked is how OpenAI will treat the rights of content that it collects on the web. For instance, if a ChatGPT request yields text or an image derived from someone else's content, is that legal, or even ethical?
Internal AI Content Repositories
When it comes to implementing ChatGPT and AI, requests made to the platform can only access open, public content. However, organizations also want their employees to be able to draw on internal content repositories, which include best practices and procedures.
Learning Development with AI
The potential of AI to transform the way learning is conducted is a major topic of discussion. What models can be used to develop instructional design that takes advantage of the power of machine learning?
Certification, Assessment and Credentials in the AI World
In the field of assessment and certification, one of the most critical questions that is being asked is how the various aspects of these processes will be affected by the use of AI.
Coaching, Workflow Support and Nudges with AI
What kinds of initiatives can be implemented using AI to enhance the efficiency and effectiveness of work processes? For instance, can we introduce workflow support and coaching on a personalized basis?
AI in Role Change and Replacement
One of the most critical questions being asked is how AI will transform, and in some cases replace, roles and positions in organizations.
Many believe it is time for global and national organizations to start facilitating conversations about the use of AI with technology innovators and other key decision-makers, and to step back and think about how GPT-5 and AI will impact the jobs of the future. Regulators may need to slow the pace at which AI is implemented to prevent it from wiping out over 50% of jobs in the next two decades.
In Italy, the country's data protection agency ordered OpenAI to block ChatGPT after it found that the company collected users' data without their consent. The company noted that it disabled the service for users in the country.
AI tools are here, and they will continue to grow. What some are asking for is a pause in development to allow us to identify and avoid potential issues that could affect our lives and businesses. At the same time, we should start experimenting with the current tools to get a better understanding of how they can be used to improve our operations.
It's a bold, brave new world out there, and AI is reshaping how we work, play, socialize, and approach education. Should we listen to the tech experts, or should we just let AI steer our future?
You can read the Future of Life Institute AI petition here.
By Bobby Carlton
Through its Omniverse platform, which provides 3D simulation and collaboration capabilities, Nvidia has introduced new connectors that allow developers to easily connect their applications to each other using the Universal Scene Description (USD) framework.
The new connectors are designed to work seamlessly with various applications, such as Cesium, Unity, Blender, and Vectorworks. Nvidia's roadmap also shows connectors for Blackshark.ai, NavVis, and Azure Digital Twins arriving later. This move adds to the hundreds of connectors already available, such as Revit, SketchUp, Archicad, and 3ds Max.
During Nvidia's GTC 2023 keynote, CEO Jensen Huang said, “The world’s largest companies are racing to digitalize every aspect of their business and reinvent themselves into software-defined technology companies." Huang added, “NVIDIA AI and Omniverse supercharge industrial digitalization. Building NVIDIA Omniverse Cloud within Microsoft Azure brings customers the best of our combined capabilities.”
Through the upcoming release of Omniverse Kit 105, scheduled to arrive in the next couple of months, the company will introduce new features that allow developers to create 3D models that are dynamically distributed across different surfaces. According to Richard Kerris, VP of Omniverse platform development, the new subsurface scattering shader will let developers split and refract light realistically.
"When light hits an object, depending on what that object is, a light can be refracted or split or shattered through the different types of surfaces," said Kerris, adding "So when light hits marble or it hits something like skin, it doesn’t just bounce off of it, there’s actually parts where the light goes in, and it scatters around, but it’s very computationally hard to do."
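Kerris's description maps to a standard idea in rendering: some light enters the surface, scatters internally, and exits a short distance from where it entered, with intensity falling off as that distance grows. A toy sketch of such a falloff, in pure Python rather than Nvidia's actual shader code (the mean-free-path numbers are made up for illustration):

```python
import math

def subsurface_falloff(distance_mm: float, mean_free_path_mm: float) -> float:
    """Toy exponential falloff: fraction of entered light exiting at a
    given distance from the entry point. Real shaders use measured
    diffusion profiles; this is only illustrative."""
    if distance_mm < 0:
        raise ValueError("distance must be non-negative")
    return math.exp(-distance_mm / mean_free_path_mm)

def shade(direct: float, scattered: float, translucency: float) -> float:
    """Blend direct surface reflection with the subsurface contribution."""
    return (1.0 - translucency) * direct + translucency * scattered

# Marble scatters light much farther than an opaque painted surface:
print(subsurface_falloff(1.0, 5.0))   # shallow falloff -> soft glow
print(subsurface_falloff(1.0, 0.2))   # steep falloff -> looks opaque
```

Materials like marble or skin get a large mean free path, so light exiting a millimeter away is still bright; that spread is what is computationally expensive to evaluate in real time.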
Nvidia has been pushing real-time ray tracing for years, and with the new subsurface scattering feature the company aims to continue delivering truly real-time rendering to the industry.
The company also introduced new features for working with large 3D models, including the ability to transfer data between different regions and to optimize assets.
Through its partnership with Microsoft, Nvidia has brought Omniverse Cloud to the Microsoft Azure platform. The next step is to make it available in the Microsoft 365 ecosystem, allowing Microsoft Teams users to create and manage 3D models. According to Kerris, this will give participants a deeper understanding of what's happening within the team.
“Each of them will have their own experience in that 3D environment, collaboratively,” says Kerris.
Integrating the Omniverse platform into Microsoft Teams will allow users to create and manage 3D representations as easily as they do in a 2D web experience. According to Kerris, this will give participants an improved understanding of the virtual world around them, with no need for local processing.
According to Kerris, users will be able to access the cloud in the same way they would a browser-based application. The company also announced that Omniverse Cloud will connect to the Microsoft Azure IoT ecosystem, allowing users to feed real-world sensor inputs into the platform.
Another big announcement was that Nvidia is focused on bringing ChatGPT into the Omniverse experience. Kerris explains that end users will be able to instruct ChatGPT to write code, which they can then drop into Omniverse. This means everyone can be a developer.
“You’ll have an idea for something, and you’ll just be able to tell it to create something and a platform like Omniverse will allow you to realize it and see your vision come to life,” said Kerris.
Through ChatGPT, developers can now use AI-generated data to create extensions for Omniverse, such as Camera Studio, which can generate and customize cameras.
In addition, Nvidia introduced the Nvidia Picasso, which is a cloud service that allows software developers to create AI-powered 3D and image applications. According to Kerris, this will allow them to create models that are based on a specific keyword and send them to Omniverse.
The company also introduced its third-generation OVX computing system, which is designed for large-scale computational twins running in the Omniverse Enterprise platform.
During the last moments of his GTC keynote, Huang said "Omniverse can unify the end-to-end workflow and digitalize the 3 trillion dollar and 14 million employee automotive industry."
The impact of all of this will surely reshape how every industry operates, including food, medical, manufacturing, and entertainment, as more companies automate and look for solutions to streamline workflows.
"Omniverse is leaping to the cloud," Huang said. "Hosted in Azure, we partnered with Microsoft to bring Omniverse Cloud to the world's industries."
By Bobby Carlton
Digital twins are powerful tools that connect real-world data with digital assets, allowing engineers and designers to visualize and analyze complex systems in an interactive manner. They help organizations make informed decisions through sales and marketing insights, analysis, 3D visualization, simulation, and prediction.
A digital twin is created by importing various conceptual models or scanning physical objects in the real world. It can then be used to visualize and analyze the data in combination with the information from the Internet of Things and enterprise databases. Its powerful 3D graphics technology can create interactive and lifelike representations of complex systems.
A digital twin is a representation of the forces, movements, and interactions that an object can experience in the physical world, allowing users to interact with it in real time. It can be used to simulate what-if scenarios, as well as visualize the outcomes of any situation instantly on different platforms, such as mobile devices, computers and virtual reality headsets.
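The what-if workflow described above can be sketched as a minimal twin object that mirrors live sensor readings and can be forked to simulate a scenario without touching the real asset. This is a pure-Python illustration; the pump asset, field names, and overheating rule are all hypothetical:

```python
from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Mirrors a physical asset's state and supports what-if forks."""
    asset_id: str
    state: dict = field(default_factory=dict)

    def ingest(self, sensor_readings: dict) -> None:
        """Sync the twin with live IoT data."""
        self.state.update(sensor_readings)

    def what_if(self, overrides: dict) -> dict:
        """Simulate a scenario on a copy; the real twin is untouched."""
        scenario = deepcopy(self.state)
        scenario.update(overrides)
        # Hypothetical rule: flag overheating if load pushes temperature high.
        scenario["overheat_risk"] = (
            scenario.get("temp_c", 0) + 0.5 * scenario.get("load_pct", 0) > 90
        )
        return scenario

pump = DigitalTwin("pump-07")
pump.ingest({"temp_c": 61.0, "load_pct": 40})
print(pump.what_if({"load_pct": 80}))  # simulate doubling the load
```

The key property is that the scenario runs on a copy: the twin keeps mirroring the real pump while any number of what-if forks are explored.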
The complexity of a digital twin deployment varies depending on the stage of the project. Its creation and use can be complex, as it involves importing and analyzing data from various sources. For instance, a digital twin can be used to create a product configuration or a representation of a vast network.
The benefits of a digital twin are numerous, such as its ability to provide customers with improved access to data. It can also help them make informed decisions and reduce their maintenance costs. Having a better design from the beginning can help a project run more smoothly.
The design industry has greatly benefited from the use of digital twins, as it has allowed multi-user communication and collaboration. Preconstruction companies have also gained the ability to manage their trade transactions seamlessly.
The construction industry has also greatly benefited from the use of digital twins, as it has allowed them to reduce their errors and accidents. When used for operations and maintenance, digital twins can help improve the efficiency of a project by reducing downtime and improving the quality of work.
People are making decisions in real time, which is significantly changing how they interact with data. The ability to visualize and analyze complex operations in 3D has made it possible to enhance how we interact with our assets. This has led to a paradigm shift in how we operate and build our physical spaces and will lead us into Industry 4.0.
Data is a valuable commodity, but it is only as good as how well it can be utilized to make informed decisions. Having the necessary tools and resources to analyze and visualize it is very important for businesses.
Getting the most out of the data collected by an organization is not as challenging as it used to be, as it now requires less effort to process and analyze it. Having the right tools and resources can help businesses make informed decisions.
One of the biggest challenges that businesses face when it comes to using data is the ability to visualize and analyze it. Currently, most of the data collected by organizations is stored in various databases and spreadsheets.
As we move towards Industry 4.0, products, factories, processes, cities, and buildings will no longer be merely objects in the physical world; they will be accurately represented by digital twins. We will experience the next evolution of the internet and the connected world through 3D.
The rise of the digital twins has led to various opportunities for businesses, such as 3D marketing. This technology will allow them to create and deliver immersive experiences in hybrid and cross-digital environments.
Aerospace tasks are intrinsically complex. End products like aircraft and spacecraft are massively expensive to design and build, making it all the more imperative to get work done right the first time in order to avoid costly delays. From design and engineering all the way through to assembly and maintenance, digital twins improve decision-making by allowing teams to visualize and interact with computer-aided design (CAD) models and other datasets in real-time 3D.
Top use cases of digital twins in aerospace
Boeing reimagines aircraft inspection and maintenance
Boeing created an AR-powered aircraft inspection application using a digital twin of one of its planes. The twin enabled this aerospace industry leader to generate over 100,000 synthetic images to better train the machine learning algorithms of the AR application.
At the start of a project, architects produce design materials, including renderings and models, to allow clients to evaluate and approve the design. The problem is there’s no shared, collaborative environment for stakeholders to make decisions in real time. Communicating design intent during traditional reviews is a difficult process. Static 2D and 3D models cause details to be lost in translation, renderings aren’t flexible enough, and not everyone is on the same page. Digital twins solve these problems and help eliminate costly mistakes.
Top use cases of digital twins in architecture
How SHoP Architects use real-time 3D digital twins to envision skyscrapers before they're built
Award-winning architecture firm SHoP Architects and JDS Development Group, a real estate development, construction and acquisition firm, are utilizing real-time data with Unity to make decisions faster with every project stakeholder. See how a digital twin of The Brooklyn Tower, a 93-story, 1,073-foot skyscraper in New York City, saves time and money and reduces the project’s carbon footprint.
In the automotive industry, digital twins are used to simulate and test new design concepts before they are built, optimize production processes, and even predict how a vehicle will perform in different conditions. The top benefit of using digital twins for automotive OEMs is the ability to save time and money by identifying and addressing potential issues before they occur. As the industry continues to embrace this technology, it plays an increasingly important role across every workflow in the automotive lifecycle, from design and manufacturing to marketing and maintenance.
Top use cases of digital twins in automotive
Volvo Cars revolutionizes the vehicle production lifecycle
Discover how Volvo Cars embraced digital twin technology to improve design-engineering communication and collaboration, reduce reliance on physical prototype vehicles, and create more immersive and effective buying experiences.
Faced with rampant supply chain delays, labor shortages, and inflated material costs, the stakes for builders are at an all-time high. Bad data and poor decision-making can lead to expensive delays and rework. Digital twin and AR technology allow the construction industry to optimize project data, streamline collaboration, and better visualize projects from design through to operations and maintenance. By using AR to bring valuable BIM data to the field, contractors are able to capture and communicate design errors in just a few clicks, allowing stakeholders to resolve issues quickly and avoid costly rework.
Top use cases of digital twins in construction
DPR Construction leverages AR to empower field teams
Learn more about how DPR, an ENR Top 10 Contractor, is integrating AR and immersive tech into the project lifecycle to bring valuable BIM data to the field in real-time to improve team performance and reduce rework.
Using AR to empower productivity
Energy companies generate a wealth of data, especially as operations are increasingly outfitted with Internet of Things (IoT) sensors, high-definition cameras with artificial intelligence (AI) capabilities, and more. Digital technologies like real-time 3D can visualize this data to provide right-time insights, better-informing decisions around production, maintenance, safety and security, and optimization.
Top use cases of digital twins in energy
Zutari improves design of large-scale renewable energy sites
See how Zutari, a South African engineering consultancy, is using Unity’s real-time 3D development platform to automate large-scale solar photovoltaics (PV) projects to reduce the time required to develop design-level insights and decrease costs.
Using renewable energy for a sustainable future
Digital twin technology helps builders, planners, and operators across cities worldwide better understand and optimize these spaces for public use. By using advanced, interactive models and live IoT data, stakeholders are able to simulate traffic flow, mobility patterns, and even the effects of climate change and shifting landscapes surrounding key infrastructure like airports, roads, and transportation hubs. From individual facilities to smart cities, digital twins are helping owners, operators, and policy-makers manage large volumes of valuable data that will allow them to better equip our infrastructure for future demands.
Top use cases of digital twins in infrastructure
Making cities smart with digital twins
According to ABI Research, more than 500 cities will deploy digital twins by 2025. Read more about how global industry leaders within the smart city movement are leveraging Unity to bring urban digital twins to life.
Building smarter cities with digital twins
The use of real-time 3D, extended reality (XR), and AI technologies are accelerating at a rapid pace in civilian, defense and intelligence applications. New technologies are being deployed rapidly and putting challenges on government agencies and contractors that need to stay at the forefront of cutting-edge development. Digital twins help reduce the risk, time and cost of designing, developing, deploying and supporting cutting-edge applications in simulation and training and beyond.
Top use cases of digital twins in government
Rebuilding Tyndall Air Force Base with digital twin technology
The reconstruction of Tyndall Air Force Base in Florida after Hurricane Michael provides an opportunity to imagine what modern installations require and to rapidly undergo digital transformation. Learn how Tyndall’s digital twin is used to increase efficiency across planning, construction progress, operations, and maintenance.
Luxury interactive shopping is on the rise, complementing premium in-store experiences. Many luxury brands have been preparing for the future of retail for many years by creating 3D marketing experiences. Investing in this new way of selling can reduce costs and increase revenue.
Top use cases of digital twins in luxury goods
Globe-Trotter takes luxury shopping to new heights
Knowing traditional ways of selling products like photographs or rendered images wouldn’t be enough to turn shoppers into buyers, Globe-Trotter, a luxury travel accessories brand, delivered a more immersive experience to help their customers feel confident in purchasing high-priced custom luggage sight unseen.
How Globe-Trotter took luxury shopping to new heights
As emerging trends such as the fourth industrial revolution (4IR) continue to gain traction, manufacturers are using digital twin technology to transform their product lifecycle. From faster time-to-market in product development to increased productivity among frontline workers, many manufacturers are already reaping the benefits. Over 80% of companies who implemented immersive technologies identified improvements in their ability to innovate and collaborate in their production, manufacturing, and operations work phases, according to a Forrester Consulting study commissioned by Unity.
Top use cases of digital twins in manufacturing
SAP shapes the future of work with Unity
Discover how SAP sees AR, VR, and mixed reality (XR) as the next user experience frontier to reinvent field and factory operations.
How SAP uses XR to reinvent business operations
Spurred on by the pandemic, the need for retailers to leverage digital twins for design, planning, operations and more has increased exponentially. The importance of engaging customers online likewise increased overnight, and retailers looked to this technology to create immersive virtual experiences to continue connecting with shoppers. Savvy retailers are embracing digital twins to enhance processes, connect with their customers in new and profound ways, and deliver compelling digital and in-store user experiences.
Top use cases of digital twins in retail
eBay launches AI-enabled 3D display feature for sneaker sellers
Discover how the global commerce leader is bringing interactivity to their platform with the launch of their 3D TrueView feature for sneakers.
By Bobby Carlton
Due to the fast-paced nature of operations in industries such as agriculture, food processing, and waste processing, machine builders are looking to develop solutions that improve the accuracy and speed of manual applications. Speed may not be a major issue for most fixed machinery solutions, but flexibility often is.
Despite the potential advantages of robots, many people in these industries still do not fully understand their capabilities. For instance, many professionals still believe that robots are too expensive and cannot handle complex tasks. With today's robotic technology, those concerns no longer hold.
Today's high-speed pick-and-place applications rely on a conveyor belt to move parts or products from one point to another. In some cases, multiple product types may need to be packaged together in a single box. This configuration can change daily due to the number of variants offered by the manufacturer, and new configurations are often needed to keep up with changing market conditions.
The use of fixed machinery can prevent the system from adapting to changes in the product or the assortment. This can be a costly strategy, as it requires separate machines to process different types of products. In addition, the setup may occupy a lot of floor space.
When fixed machinery does offer flexibility, it tends to be a complex mechanical system with numerous failure points: operators must swap out parts and adjust the setup before running the next variation.
A pick-and-place application that uses manual sorting may involve a large team of individuals who stand next to the conveyor belt. They pick and place products into boxes. This type of work is very taxing on the employees and can lead to high turnover and low employee satisfaction. Due to the lack of skilled workers, the pressures of maintaining these types of applications are becoming too high.
The advantages of robots are numerous, such as their ability to seamlessly integrate with other automation systems. Robots outperform manual labor in accuracy and speed, and they can work around the clock. Human workers may struggle to keep up with the speed of the conveyor belt and make mistakes as a result.
To bring robotics into these types of environments, industries count on robotic simulation to improve robotic systems in several ways.
Overall, robotic simulation provides a powerful tool for improving the design, development, testing, and deployment of robotic systems. The repetitive nature of some tasks, such as handling dirty or otherwise unsavory materials, drives people to look for other jobs, and robots are designed to endure harsh environments that are not ideal for human workers. Among their labor-related benefits, such as reduced turnover, robots provide continuous operation.
With the ability to easily adapt to different product configurations, robots can provide a new level of flexibility. For instance, they can switch between different programs with just a touch of a button. This eliminates the need for developers to create new programs and allows them to quickly change the system's overall design. The software's modular process architecture also lets users easily change the configuration of the pick-and-place procedure.
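The "switch programs at the touch of a button" idea is essentially a registry of pick-and-place recipes keyed by product variant. A minimal sketch in pure Python; the variant names and recipe fields are invented for illustration:

```python
# Registry of pick-and-place recipes, keyed by product variant.
RECIPES = {
    "box-of-6":  {"grip_force_n": 12, "picks_per_box": 6,  "pattern": "2x3"},
    "box-of-12": {"grip_force_n": 12, "picks_per_box": 12, "pattern": "3x4"},
    "fragile":   {"grip_force_n": 4,  "picks_per_box": 6,  "pattern": "2x3"},
}

def switch_recipe(variant: str) -> dict:
    """Swap the active program without reprogramming the robot."""
    try:
        return RECIPES[variant]
    except KeyError:
        raise ValueError(f"No recipe for variant {variant!r}") from None

active = switch_recipe("fragile")
print(active["grip_force_n"])  # the robot now picks with a gentler grip
```

Adding a new product variant means adding one entry to the registry, not redesigning the machine, which is the flexibility argument in a nutshell.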
Although robots can be easily added to any production line, they can also be used in combination with other technologies to provide a fully integrated solution. This includes the ability to monitor and control various aspects of the operation.
One of the most common technologies that can be used with robots is a vision system. This type of system allows them to easily identify and categorize products. In addition, it can perform inspections on a variety of products to detect defects.
A vision system can also help ensure that all of the products in an assortment are present and accounted for by capturing the bar codes on individual products. In dynamic applications, such as picking from moving parts, it can help the robots determine which parts are ideal for picking next by displaying the optimal position on the screen.
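Picking from a moving belt comes down to predicting where each detected part will be when the robot can act and choosing the best candidate. A toy version of that selection logic, in pure Python; the belt speed, reach window, and timing numbers are made up:

```python
def next_pick(parts, belt_speed_mm_s, reach_min_mm, reach_max_mm, lead_time_s):
    """Return the id of the part that will sit inside the robot's reach
    window after lead_time_s, preferring the furthest-downstream part so
    it isn't missed. `parts` is a list of (part_id, x_mm) positions
    reported by the vision system."""
    candidates = []
    for part_id, x_mm in parts:
        predicted = x_mm + belt_speed_mm_s * lead_time_s
        if reach_min_mm <= predicted <= reach_max_mm:
            candidates.append((predicted, part_id))
    if not candidates:
        return None
    # The furthest-downstream part leaves the reach window soonest.
    return max(candidates)[1]

parts = [("a", 100), ("b", 300), ("c", 520)]
print(next_pick(parts, belt_speed_mm_s=200, reach_min_mm=400,
                reach_max_mm=700, lead_time_s=0.5))
```

A production system would get these positions from camera frames at high frequency and fold in gripper travel time, but the predict-then-prioritize structure is the same.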
Although robots can run at high speeds, some applications require them to perform at a high rate of throughput. This type of system can be used with multiple robots to share the load and ensure that the tasks are completed efficiently. For instance, if a combination box-packing operation requires a high throughput, a second robot can help manage the tasks that the first one cannot handle.
An increase in the number of robots can help improve the efficiency of your pick process and increase the flexibility of your operations. In addition, they can help each other gather information in order to ensure that all of the items are picked correctly.
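The load-sharing pattern, where a downstream robot handles whatever the first robot cannot, can be sketched as a simple capacity-aware hand-off. Pure Python; the per-cycle capacities are illustrative numbers:

```python
def share_load(items, capacities):
    """Assign items to robots in line order: each robot takes what it
    can handle in this cycle, and the overflow cascades to the next
    robot. Returns (assignments_per_robot, leftover_items)."""
    assignments = []
    remaining = list(items)
    for cap in capacities:
        take, remaining = remaining[:cap], remaining[cap:]
        assignments.append(take)
    return assignments, remaining

items = [f"part{i}" for i in range(7)]
assigned, leftover = share_load(items, capacities=[4, 4])
print(assigned)   # robot 1 takes 4 parts, robot 2 takes the remaining 3
print(leftover)   # empty list -> the line keeps up with the belt
```

A non-empty leftover list is the signal that throughput exceeds the installed capacity, i.e. the cue to add another robot to the line.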
Caio Viturino, who is a simulations developer at FS Studio, and has done an extensive amount of work in robot sim, said "Robotic simulations are being used more frequently as a means of training and testing mobile robots before deploying them in the real world. This is known as sim2real. For instance, we could create a 3D model of a warehouse and then train various robots in that environment to plan routes, recognize objects, and avoid collisions with dynamic obstacles."
One of the most important advantages of integrating a parallel robot with other automation technologies is that it can be controlled completely by a single PLC. This type of control allows the robots to adapt to the changes in the flow of products. If a conveyor is driven by a servo motor, its motion can be synchronized with the robots in real time. This type of control can also help the automation technologies adjust their own motion depending on the changes in the throughput.
Viturino explains "Robots will not be a replacement for the human labor force but will aid in difficult or repetitive tasks." To help him with his work Viturino focuses on the following tools:
PyBullet - An easy-to-use Python module for physics simulation, robotics, and deep reinforcement learning, based on the Bullet Physics SDK. With PyBullet you can load articulated bodies from URDF, SDF, and other file formats.
Isaac Sim - A scalable robotics simulation application and synthetic data generation tool that powers photorealistic, physically-accurate virtual environments to develop, test, and manage AI-based robots.
Isaac Gym - Provides a basic API for creating and populating a scene with robots and objects, supporting loading data from URDF and MJCF file formats.
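The sim2real workflow Viturino describes, training route planning in a virtual warehouse before any real deployment, can be illustrated with a toy grid planner. This is a plain breadth-first search in pure Python with a hypothetical warehouse layout, not code from the simulators above:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over a warehouse grid: '.' is free floor,
    '#' is a shelf. Returns the list of cells from start to goal
    (inclusive), or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None

warehouse = ["..#.",
             "..#.",
             "....",
             ".#.."]
route = plan_route(warehouse, start=(0, 0), goal=(0, 3))
print(route)  # the route detours around the shelf column
```

Tools like Isaac Sim replace this toy grid with photorealistic physics, but the pattern is the same: validate the planner against many virtual layouts and obstacles before trusting it on hardware.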
For end users looking to move beyond traditional fixed-machine solutions and implement a high-speed pick-and-place operation, robots can improve efficiency and increase flexibility. Most of the time, robots are designed to work at high speeds with stable components.
As part of a factory's connected technology, today's most advanced robots can communicate with various devices, such as vision systems and conveyors, to respond to changing configuration needs, and robotic sim provides important information on making all of this work.
Viturino adds, "There are other technologies, particularly in autonomous vehicles, such as passive thermal cameras. However, that technology is largely restricted to militaries and governments, and the cost is high. Still, it may hold promise for the future."
As we come to the end of our conversation, Viturino notes that simulation allows us to develop, test, and go beyond imagination without fear of damaging robots, which could mean serious repair costs, dismissal, or an unpayable fine, depending on the damage. Once ideas have been tested in simulation, the software is ready to deploy on the hardware.
As for Viturino's work in robotics and AI, and closing the gap between what's possible now and what we hope for in the future, he believes NVIDIA is developing ever more accurate simulations through its PhysX library, now available as open source in version 5.1. As a result, the gap between simulation and reality will continue to close, increasing the reliability of robotic applications.
"We are in an era where we must be bold and creative to overcome the limits already reached, with agility and teamwork."
You can learn more about Caio and his work by checking out his GitHub page.
By Bobby Carlton
During CES 2023, Nvidia unveiled some incredible new features for its Isaac Sim software that will allow researchers and developers to better train and improve AI robots for various tasks that include areas such as manufacturing, agriculture, retail, and more.
According to Nvidia, developing AI-based robots requires placing them in realistic environments. With the latest version of Isaac Sim, which is now available, developers can test their models across different operating conditions.
The company's Isaac platform is composed of various tools, such as the ROS module that runs on the robots and the cuOpt software for route optimization. It also includes SimReady assets, a toolkit for training models, and the TAO optimization system.
“The Isaac robotics platform is designed to accelerate the development and deployment of all manner of robots, and we have a number of software tools and SDKs that address different parts of this solution,” said Gerard Andrews, product manager for Nvidia’s robotics platform, during a CES briefing.
NVIDIA’s tools are built on the foundation of its AI suite and Omniverse, a platform that enables the creation and operation of digital twin applications.
These include new tools and assets for logistics and warehouse operations, such as a conveyor belt utility and a behavior simulation tool for testing safety systems. It additionally has a variety of research tools, such as the Isaac Gym and the Isaac Cortex.
The company's goal is to provide researchers and developers with the necessary tools and resources to improve and develop AI models for various tasks. According to Andrews, the use of simulation will allow them to create a virtual proof of their creations.
Despite the company's progress in simulation, Andrews noted that work still remains to be done. Contributing factors in the development of new tools include improving the capabilities of existing ones, such as Isaac Sim, and creating new tools designed for specific tasks.
“Closing the sim2real gap means the more that the robot performs in simulation like it’s expected to perform in the real world then you are going to get more use cases, more utility, and more value, so we spent a lot of time focusing on how to make our simulations more realistic for that robot user or robot developer,” said Andrews.
He noted that the company also focused on making its tools more flexible and modular. These factors allow the company to provide researchers and developers with the necessary tools and resources to improve and develop AI models for various tasks.
In a recent article on the NVIDIA blog, Erin Rapacki, Senior PMM at NVIDIA Robotics, wrote about how companies can optimize robot route planning using NVIDIA cuOpt for Isaac Sim.
In her article, Rapacki looks at how the cuOpt API from NVIDIA enables operations researchers to create real-time fleet routing. It can be used to solve various routing problems, such as job scheduling, robotic route planning, and dynamic rerouting.
The extension for the Isaac Sim simulation environment from NVIDIA includes the cuOpt engine. This component is integrated with the company's Omniverse application.
"Mailroom workers pick up mail and parcels from different stations and deliver them to various recipients. They know that some envelopes are time-sensitive so they use their knowledge to plan routes with the shortest possible delivery time.
This mail delivery puzzle can be mathematically addressed by using techniques from operations research, a discipline that deals with applying analytical models to improve decision-making and system efficiency. The mathematical science behind operations research is also highly applicable to the process modeling and management of robotics, industrial automation, and material handling systems."
Logistics professionals often encounter real-time optimization problems, such as the traveling salesman problem (TSP), the vehicle routing problem (VRP), and the pickup and delivery problem (PDP).
The VRP and PDP are more general, academic variants of the traveling salesman problem, which asks: “given a list of destinations and distances between each pair of destinations, what is the shortest possible route that visits each destination exactly once and returns to the original location?”
Applying the traveling salesman problem to logistics can help reduce the time it takes to move materials from one place to another. For instance, it can be used to improve the efficiency of a manufacturing facility's transportation network.
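To make the problem concrete, here is a minimal brute-force TSP sketch in Python. The four-stop distance matrix is invented for illustration; real solvers such as cuOpt rely on far more scalable GPU-accelerated heuristics rather than exhaustive search, which only works for a handful of stops.

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force TSP: try every ordering of stops 1..n-1,
    starting and ending at stop 0. Only viable for tiny n."""
    n = len(dist)
    best_route, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        route = (0, *perm, 0)
        length = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if length < best_len:
            best_route, best_len = route, length
    return best_route, best_len

# Hypothetical symmetric distance matrix for 4 stops (arbitrary units)
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
route, length = shortest_tour(dist)
print(route, length)  # an optimal tour of length 80
```

With n stops there are (n-1)! orderings to check, which is exactly why production routing engines use heuristics and GPU parallelism instead.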
In addition, robotics companies can use cuOpt in their planning processes for the deployment of their robots and continuous operation. For instance, during the planning phase of a project, the facility's process layout can help predict the throughput requirements. This process helps with a successful project ROI, according to the author.
The extension for Isaac Sim from NVIDIA allows continuous operation of the robot fleet while it's inside the facility. It can be used to route the vehicles according to various system variables, such as the traffic conditions, obstacles, and peak demand for throughput.
Previously, companies used a lower-fidelity approach called discrete event simulation to design their routing and material handling processes. With cuOpt, they can now use a real-time solution for planning and implementing their robots, solving routing problems such as vehicle transportation and job scheduling.
McKinsey stated that executives are increasing their investments in automation and digital technologies to improve their organizations' efficiency. “More than 60 percent of our respondents reported that they have either implemented or are scaling up digital and automation solutions.”
For instance, if a company builds mobile robots or robotic forklifts, it can model how they move material with varying timeliness compared to people or conveyor belts. To fully understand the systemic differences, it's important to analyze the entire movement of an object from its origin to its destination.
To transform existing processes into robotic operations, a company can use the cuOpt extension for Isaac Sim. This component can be utilized to analyze the various steps involved in the design and implementation of their robots, and improve their efficiency, which is outlined below by Rapacki in her article on optimizing robot route planning with cuOpt for Isaac Sim.
Redesign of brownfield facilities:
Real-time analytics and rerouting:
To help us understand how this works, Rapacki gives us two examples. One in manufacturing and the other in warehousing.
A manufacturing process involves the timely delivery of parts to the downstream steps of a facility. If the parts arrive late, the factory might not be able to produce as many products that day.
Getting the materials to their destination quickly is a critical component of a manufacturing process, and inefficient route planning can lead to delays.
In warehouses, traffic and floor obstacles can delay the movement of mobile robots. They need dynamic rerouting to react to variable conditions, such as when a route is obstructed. If the robots get stuck or slow down, they become a constraint or bottleneck that affects the entire operation.
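The dynamic-rerouting idea can be sketched with a toy grid search. This is not cuOpt's API, just an illustration of replanning around a newly appeared obstacle using breadth-first search; the grid size and obstacle position are made up.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a 2D grid; 0 = free cell, 1 = obstacle.
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

grid = [[0] * 4 for _ in range(3)]          # 3x4 open warehouse floor
plan = shortest_path(grid, (0, 0), (2, 3))  # route planned at dispatch
grid[1][1] = 1                              # a pallet appears mid-shift
replan = shortest_path(grid, (0, 0), (2, 3))  # robot replans around it
```

A fleet-level solver additionally weighs traffic, throughput demand, and job deadlines across every vehicle at once, which is the part cuOpt accelerates on the GPU.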
The continuous movement of a material is a critical component of a company's operations, and it's important that the robots are always working in the right context. Having the necessary data streams can help floor managers improve the efficiency of their operations.
With the cuOpt extension, a company can easily implement a variety of optimization techniques and improve the efficiency of its operations. It's built on a patent-pending engine that can evaluate and analyze multiple solutions.
The ability to connect to the performance of NVIDIA's hardware is a key component of the cuOpt extension. With the ability to create thousands of configurations and environments in a short time, a company can easily improve the efficiency of its processes.
The ability to customize system parameters such as speed of delivery, budget, and robustness can help a company identify the optimal layout for its operations. For instance, in the warehouse and material handling industry, there are specific needs for efficiency and optimization.
One of the most critical factors that a company can consider when it comes to optimizing its operations is the right operational decisions. With the ability to make dynamic decisions, a company can improve its processes and maximize its output. Through the cuOpt extension, users and robotic companies can benefit from the ability to take action immediately.
This will have a significant impact on the work we do here at FS Studio. For example, here is a list of tools we've used on current and past projects. Future digital twin projects will absolutely take advantage of the cuOpt extension.
NVIDIA's goal is to make its tools more modular and flexible, and focus on making its simulations more realistic for both developers and researchers. As the number of robots deployed on the market continues to increase, the company's efforts will continue to be focused on making its tools more capable of handling the challenges of these new robots.
One of the main factors that contributed to the development of the company's simulation tools is the need to include people in their simulations as workers increasingly interact with robots. This capability allows people to perform certain tasks, such as pushing carts or stacking packages.
“We’re excited about people simulation – the ability to drop characters into the environment and issue commands to those characters and let them take part in a complex event-driven simulation where you can test the software on the robots,” said Andrews.
In the company's initial release, the tools have a variety of predefined behaviors that allow people to perform certain tasks, such as going to a certain location and avoiding obstacles.
One of the most important factors that the company considered when it came to developing its simulation tools was the need to make them more accurate when it comes to rendering data from sensors. Through the use of NVIDIA RTX technology, the company was able to provide its Isaac Sim with a physically accurate representation of the data collected by the sensors.
“We improved our sensor performance, and specifically for LiDAR, we have ray tracing, which provides accurate performance where the sensor data generated in the simulator starts to mimic and mirror what you’ll get from the real-world sensor.”
According to NVIDIA, ray tracing can provide a more accurate representation of sensor data in various lighting conditions, and it supports both rotating and solid-state configurations. Several new LiDAR models, from vendors such as Slamtec, Ouster, and Hesai, have been added.
The company's latest release of its simulation tools includes new 3D assets that can be used to build physically accurate environments. These assets can help speed up the process of creating complex simulations.
The latest version of Isaac Sim also comes with new features for researchers working on complex robot programming and reinforcement learning. These include the Isaac Gym and the Isaac Cortex. A new tool called Isaac ORBIT allows researchers to create functional simulation environments for motion planning and robot learning.
Developers of robot operating systems can now use Isaac Sim's upgrades for Windows and ROS 2. According to NVIDIA, this will allow them to create complex simulations of the software.
NVIDIA's focus on the cloud has grown as it allows users to access the latest version of its software and its applications more easily. Andrews noted that this allows the company to benefit from the scalability and accessibility of the cloud.
The availability of Isaac Sim in the cloud allows researchers working on robotic projects to collaborate more easily, and it can help them train and test virtual robots faster. Developers can use the Isaac Replicator software to create large datasets for simulating real-world environments, then use the company's platform to implement route planning and fleet task management.
The company's product, known as Replicator, is built on the Omniverse platform and can be used to create synthetic data models. According to Andrews, it can help researchers train AI models by supplementing their existing data sets.
“We believe simulation is the critical technology to advance robotics and it will be the proving ground for robots,” said Andrews. “We have numerous customers that are working with us that have shared how they have been able to use Isaac Sim so far.”
According to NVIDIA, over a thousand companies and over a million developers have used various parts of the Isaac ecosystem to develop and test virtual robots. Some of these include companies that have used Isaac Sim to develop physical robots.
Use case examples range from Telexistence’s beverage restocking robots and Sarcos Robotics’ robots that pick and place solar panels in renewable energy installations to Fraunhofer’s development of advanced AMRs and Flexiv’s use of Isaac Replicator for synthetic data generation to train AI models.
To begin using the NVIDIA cuOpt for Isaac Sim extension, use the following resources:
Everyone can see how ChatGPT is disrupting the Internet, and now Google is in danger of being disrupted as well. You.com is an app that combines ChatGPT and Google into a single AI-powered search engine. Since OpenAI released its ChatGPT platform, there has been much speculation about it being the killer app, and according to the New York Times, Google has declared a "code red" due to how powerful ChatGPT is proving to be.
ChatGPT is a promising technology that has the potential to redefine how we interact with digital data, and can be used for various applications, such as online search. You can also use ChatGPT to do things such as write code, write lyrics to a song, a blog post, and even a movie script.
It's not clear if ChatGPT will be able to dethrone Google on its own, at least not right now. There are still many issues that need to be resolved before large language models can compete with search engines. Even though the technology will eventually mature, Google Search is still expected to gain from the growing number of LLMs (large language models).
ChatGPT can answer questions with ease, and it's almost like you're speaking to a person who has been studying for hundreds of years. Its output is grammatically correct and fluid, and it can mimic various styles of speech.
Unfortunately, ChatGPT isn't perfect. Some of its answers are simply wrong, or state completely incorrect facts. This is because ChatGPT is a prediction engine: it tries to predict what comes next based on your chat history and prompt. Even though its answers may seem plausible, it still doesn't always get things right.
One of the biggest challenges that ChatGPT faces is the truthfulness of its output. Currently, it's not possible to tell if its answers are true or not. This is a major issue that could prevent the large language model from becoming a viable alternative to search engines.
Certain search engines, such as Google, provide links to sources you can verify, but ChatGPT doesn't; its answers arrive with no references to the websites they draw from.
One of the possible solutions that could be used to solve this issue is by adding a mechanism that links the various parts of the output to web pages. However, this would require a deep learning-based approach, and it would require accessing a search engine index.
What You.com does is combine the AI power of ChatGPT with the reliability of Google, and it's this combination that could disrupt the Google model.
Keeping an LLM current is another hurdle. Adding new knowledge requires retraining the large language model, and although that might not be necessary for every update, it is still significantly more expensive than modifying or adding records in a search engine index; staying up-to-date with the latest news would require doing so multiple times a day.
ChatGPT likely has around 175 billion parameters, based on GPT-3.5. Since no single piece of hardware can fit a model of that size, splitting it up and running it across multiple processors is a significant challenge.
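A quick back-of-the-envelope calculation shows why. Assuming the weights are stored in half precision (2 bytes per parameter) and a hypothetical accelerator offers 80 GB of memory (both figures are assumptions, not anything OpenAI has published), the weights alone span several devices:

```python
params = 175e9          # rough GPT-3-class parameter count
bytes_per_param = 2     # assumed fp16 weights
gpu_memory_gb = 80      # assumed per-accelerator memory

weights_gb = params * bytes_per_param / 1e9     # total weight storage in GB
gpus_needed = -(-weights_gb // gpu_memory_gb)   # ceiling division
print(weights_gb, gpus_needed)  # 350.0 GB, at least 5 devices
```

And that counts only the weights; activations, key-value caches, and batching headroom push the real requirement higher still.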
Operators of an LLM-based search engine would also need tools and mechanisms to verify that web sources are reliable and to trace answers back to them. The speed at which large language models respond to queries is another issue: ChatGPT takes several seconds to respond.
Search engines do not need to analyze their entire dataset to answer a query. Instead, they rely on precomputed indexes to find the most relevant records at high speed.
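The core trick behind that speed can be sketched in a few lines: a precomputed inverted index maps each term to the documents containing it, so answering a query touches only a handful of small sets rather than the whole corpus. The tiny document set here is invented for illustration; a real engine adds ranking, sharding, and compression on top.

```python
from collections import defaultdict

docs = {
    1: "robots move pallets in the warehouse",
    2: "warehouse routing with mobile robots",
    3: "language models answer questions",
}

# Build the inverted index once, ahead of query time
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Return ids of documents containing every query term."""
    results = [index.get(term, set()) for term in query.split()]
    return set.intersection(*results) if results else set()

print(search("warehouse robots"))  # {1, 2}
```

Each lookup is a dictionary access plus a set intersection, which is why index-based retrieval answers in milliseconds while a forward pass through a large neural network takes seconds.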
An LLM, on the other hand, runs the prompt through a deep neural network every time it receives a query. Although the model is not as large as a search engine's database, this involves far more computation than querying an index. Due to the nature of neural networks, the model's operations are difficult to parallelize, and as the training corpus grows, the model itself must grow in order to generalize over its knowledge base.
The business model of a search engine is also a challenge that an LLM-based model will face. For instance, Google has built an ad empire through its search engine generating billions of dollars annually even with low click-through rates.
Through its ability to collect and analyze user data, Google can customize its ads and search results. This makes its business more profitable and efficient. Besides this, the company also has other products that allow it to enhance the digital profiles of its users, such as YouTube, Chrome, Gmail, and Android.
At the moment, we are seeing a huge wave of AI-powered applications taking over our social media feeds. This past summer we saw a number of people use Lensa AI to create cool portraits of themselves.
The company has a huge advantage over its competitors due to its control over both the advertisers and the content seekers. By collecting and analyzing user data, Google can improve its search results and provide relevant ads. ChatGPT is a potential search engine, but it doesn't yet have a business model. According to a back-of-the-envelope estimate, running it costs around $3 million monthly, or $100,000 a day, for a million users.
Now imagine that, every day, around 8 billion searches are made on Google. Add the costs of training the model and manually tuning it, and the cost of doing business balloons far beyond that estimate. Training and running large language models such as ChatGPT is very expensive for tech companies, which makes it difficult to build profitable products on top of them.
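The article's figures can be turned into a rough scaling exercise. The per-query cost below is derived from the $100,000-a-day-per-million-users estimate; the queries-per-user figure and the linear-scaling assumption are purely illustrative, not anything OpenAI or Google has confirmed.

```python
daily_cost_per_million_users = 100_000   # the article's estimate, USD
daily_searches = 8e9                     # order of magnitude cited above

# Assumption: each of the million users issues ~10 queries a day,
# and serving cost scales linearly with query volume.
queries_per_user = 10
cost_per_query = daily_cost_per_million_users / (1_000_000 * queries_per_user)
daily_cost_at_scale = cost_per_query * daily_searches
print(cost_per_query, daily_cost_at_scale)  # $0.01/query, ~$80M/day
```

Even at a penny per query, search-engine volume turns into tens of millions of dollars a day, which is the heart of the business-model problem.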
One of the possible ways to improve the profitability of an LLM is by delivering it as a paid API, similar to GPT-3 and Codex. However, this isn't the traditional model of search engines. Another option would be to integrate the model into Microsoft's Bing search engine. This would allow it to compete with Google Search.
Based on the characteristics of ChatGPT and other similar programs, it's clear that they will eventually become a complementary part of the online search industry. They will likely help existing search engines gain a stronger competitive advantage.
For now, ChatGPT seems to have caught the attention of Google, and with You.com, Google has to deal with an incredibly powerful AI-powered search engine.