By Bobby Carlton
Raythink, a Chinese company that specializes in the development of AR head-up displays (HUDs) for the transportation and automotive sectors, debuted its new HUD solution at the 20th Shanghai International Automobile Industry Expo.
The product showcased at the event is built around Raythink's OpticalCore picture generation module, which can project an image that appears three-dimensional to the naked eye.
The company's HUD technology projects driving-related information such as speed, directions, and assistance features onto the windshield in real time, eliminating the need for drivers to look down or turn their heads.
Displaying navigation and instrument cluster information
According to the company, its AR HUD uses laser beam scanning as its light source. This allows it to achieve a wide field of view (FoV greater than 20°) along with higher contrast and long-range imaging (virtual image distance, or VID, greater than 15 meters), enabling the projection to cover three lanes of traffic as you drive.
The result is deep integration with real scenes and a clear picture with no ghosting, resolving the distortion and other problems that typically occur with AR solutions. Integrated data sources such as ADAS, DMS, the instrument cluster, and the infotainment system come together to provide stable, low-latency AR imagery fused with the real world. With a professional, personalized UI design, Raythink can display basic driving information, AR navigation, ADAS warnings, POIs, and more for a more personal human/machine interaction experience, allowing applications and graphics to be customized to your needs.
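To put the quoted optics in perspective, here is a rough sketch of the projection geometry (my own illustration, assuming a simple symmetric frustum; the only figures taken from Raythink are the quoted FoV and VID minimums):

```python
import math

def projected_width(fov_deg: float, distance_m: float) -> float:
    """Width covered by a projected image at a given virtual image
    distance, assuming simple symmetric-frustum geometry."""
    half_angle = math.radians(fov_deg / 2)
    return 2 * distance_m * math.tan(half_angle)

# At the minimum quoted values (20° FoV, 15 m VID) the image spans ~5.3 m.
width_at_15m = projected_width(20, 15)
# Coverage widens with distance: at ~30 m the same FoV spans roughly
# 10.6 m, about the width of three standard 3.5 m lanes.
width_at_30m = projected_width(20, 30)
print(round(width_at_15m, 1), round(width_at_30m, 1))
```

This back-of-the-envelope calculation shows why a wider FoV and longer VID together are what let the projection blanket multiple lanes ahead of the car.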
Multifocal range imaging
Raythink noted that its new solution will be used in various AR-based applications, such as in-car micro projections and AR HUDs, and is expected to enter mass production in the second quarter of 2024. The company also said it will increase its production capacity this year to accommodate the growing demand for AR HUDs, and that it is developing new immersive models for use in smart cars.
In April last year, Raythink passed quality certifications including IATF 16949 and ISO 9001. The company noted that it has already received 10 orders for its AR HUDs, which will be used in over 400,000 cars worldwide.
During the Shanghai Auto Show, Raythink also partnered with AI Speech, a Chinese AI startup. This partnership further strengthens the company's existing collaborations with tier-one suppliers, including mobility technology provider Aptiv and AliOS, Alibaba's smart vehicle operating system.
Raythink provides more details about its AR head-up display technology for automotive applications on its website.
By Bobby Carlton
2023 is seeing rapid growth of OpenAI's tools such as ChatGPT, which are changing education, enterprise, and the world in general. It is clear that the rapid emergence and evolution of AI technologies will have a significant impact on the future of education and learning.
This past week, prominent tech industry figures, including Steve Wozniak and Elon Musk, called for a six-month moratorium on training AI systems more powerful than GPT-4, so that the risks associated with AI can be weighed and their effects on our society better understood.
In a petition published by the Future of Life Institute, the signatories warned that the development of AI systems capable of human-competitive intelligence could threaten the well-being of people and society.
The petition urges policymakers and the private sector to work together to develop regulations and guidelines that will help protect the privacy and security of individuals as they use GPT-4 and other AI systems, stating, "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
The rapid emergence and evolution of AI systems has also raised concerns about the potential impact on society and jobs.
Sam Altman, co-founder and CEO of OpenAI, stated back in 2015 that superintelligence could threaten the existence of humanity. In a 2023 podcast with Kara Swisher, he said that he still feels the same way, even as GPT-4 accelerates the technology.
It is safe to say that many industries are adopting new technologies such as robotics and AI that will change the future of work. Microsoft, Meta, Google, Amazon, Nvidia, and many others see tools such as AI playing a key role in how we work and how companies evolve, but concerning questions still loom over them as they enter this new area. Experts would like us to think about how AI impacts the following:
Copyrights and Ownership of AI Generated Content
One of the most important questions I have been asked is how OpenAI will treat the rights to content that it collects on the web. For instance, if a ChatGPT request yields text or an image derived from someone else's content, is that use legal?
Internal AI Content Repositories
When implementing ChatGPT and AI, requests made to the platform can only access open, public content. Many organizations, however, want their employees to be able to draw on internal content repositories, which include best practices and procedures.
Learning Development with AI
The potential of AI to transform the way learning is conducted is a major topic of discussion. What models can be used to develop instructional design that takes advantage of the power of machine learning?
Certification, Assessment and Credentials in the AI World
In the field of assessment and certification, one of the most critical questions that is being asked is how the various aspects of these processes will be affected by the use of AI.
Coaching, Workflow Support and Nudges with AI
What kinds of initiatives can be implemented using AI to enhance the efficiency and effectiveness of work processes? For instance, can we introduce workflow support and coaching on a personalized basis?
AI in Role Change and Replacement
One of the most critical questions being asked is how AI will transform, and in some cases replace, roles and positions within organizations.
Many believe it is time for global and national organizations to start facilitating conversations about the use of AI with technology innovators and other key decision-makers, and to step back and think about how GPT-5 and AI will impact the jobs of the future. Some argue that governments may need to slow the pace of AI implementation to prevent it from wiping out over 50% of jobs in the next two decades.
In Italy, the country's data protection agency ordered OpenAI to block ChatGPT after finding that the company had collected users' data without their consent. OpenAI noted that it has disabled the service for users in the country.
AI tools are here, and they will continue to grow. What some are asking for is a pause in development to allow us to identify and avoid potential issues that could affect our lives and businesses; meanwhile, we should also start experimenting with the current tools to get a better understanding of how they can be used to improve our operations.
It's a bold and brave new world out there and AI is re-shaping how we work, play, socialize, and approach education. Should we listen to tech experts or should we just let AI steer our future?
You can read the Future of Life Institute AI petition here.
By Bobby Carlton
According to the Association for Advancing Automation (A3), the North American robotics market set new records in 2022 in terms of both the number of robots ordered and their value, with companies in the region increasing unit orders by 11% and order value by 18%.
The automotive sector was the main contributor to the growth of the robotics industry in North America. It reportedly ordered over 24,000 robots in 2022. That equates to more than 50% of sales coming directly from the automotive industry.
“Although labor-shortage and supply chain issues impact nearly all industries in North America, automakers’ public commitment to move to electric vehicles (EVs) has set in motion a resurgence of robot orders in this market,” stated Jeff Burnstein, president of A3, in a company post.
He noted that the increasing number of automotive component suppliers and original equipment manufacturers (OEMs) who are investing in robots has helped to accelerate the development of EVs.
A3 noted that the number of robots ordered by North American companies increased by 24% during the first nine months of 2022. Even though fourth-quarter orders dropped, the full-year total was still 11% higher than in 2021.
The automotive industry is a major manufacturing sector in the US, employing around 9 million people and generating over a trillion dollars annually. It’s natural that the industry would adopt robotics to improve its efficiency and productivity.
Robots are commonly used in car manufacturing facilities to perform various tasks, such as welding, assembly, painting, and material handling. They can be found in every part of a plant, though their specific functions vary depending on the location.
Outside of car plants, robots are also commonly used for other tasks, including moving materials, inspecting parts, and spraying paint on vehicles. It’s widely believed that by 2025, around 75% of new vehicles will be made using robots on production lines.
However, the increasing popularity of e-commerce and shifting consumer demand caused some companies to delay their robot purchases late last year. According to Burnstein, Amazon's decision to build fewer warehouses might have caused some companies to rethink their automation plans.
Despite the improving labor market, Burnstein noted that the shortage of workers still persists. This is why many companies are still looking to automate their operations, augmenting their workforces in very specific areas.
Welding
During the production of cars, welding is a common process. In the past, it was mainly done by humans. Today, it’s increasingly handled by robots.
There are various types of robots that are used to perform the welding process on the assembly line. For instance, large industrial robots are known to spot weld on various parts of vehicles. On the other hand, smaller collaborative robots are commonly used to fix small parts.
Compared to humans, robots are more efficient when it comes to welding. They can work on parts precisely and repetitively on the assembly line, without the fatigue or distraction human workers experience.
Aside from being more efficient, robots are also safer for welding, since they spare workers from exposure to dangerous conditions such as heat, sparks, and fumes.
Painting, Coating, Sanding, and Sealing
Painting a car properly is a challenging task, especially since the job must be repeated consistently as vehicle after vehicle comes down the assembly line.
While there are still people working in the painting department at car factories, robots are starting to take over. With the help of robotic arms, painters can now get the job done without error and without exposing themselves to toxic substances.
The robots follow carefully programmed paths, which helps them to be more efficient and reduce waste.
Material Handling
Material handling robots are commonly used in the manufacturing process to move materials from one place to another, carrying out the task over and over again and keeping the assembly line moving.
The safety and efficiency of logistics robots are among the most important factors contributing to their usefulness. While working with heavy loads can be very dangerous for people, robots can safely carry out the task many times over.
Advantages of Robotics
Aside from these, robots also have other advantages that make them an ideal asset for the car manufacturing industry. One of the most important factors that contributes to the efficiency and quality of assembly line work is the use of robotic vision. This technology allows machines to perform more precise work by using a camera and laser array.
Industrial robots are also very productive, as they can work on a continuous basis and without tiring. They can precisely carry out repetitive tasks with minimal errors and no interruptions.
Given all of these advantages, it might seem like only a matter of time before robots completely replace workers in the car manufacturing industry. Contrary to popular belief, however, they are not replacing people.
Robots Still Need Humans
In addition to their efficiency and procedural advantages, robots require human support to operate properly. People build the robotic simulations and program and control the machines. Robots also need regular maintenance and repairs; they must be designed and installed properly, and they have to be reprogrammed whenever a new product is introduced.
As robots continue to take over dangerous and repetitive tasks in the car industry, the need for people with the necessary skills continues to grow. Part of that will come from increased robot adoption in multiple industries and the evolution of easier to use robotics.
Growth in the robotics industry was also supported by an increasing number of applications in other industries, such as agriculture and food service.
These include tasks such as picking and cooking food, and installing drywall. Despite the slow growth of the non-automotive sector's robot orders, Burnstein noted that it is clear that companies are still looking to adopt automation as a key component of their operations.
“We look forward to seeing more unique and increasingly easy-to-use robots that all industries can benefit from,” said Burnstein.
By Bobby Carlton
Digital twins are powerful tools that connect real-world data with digital assets, allowing engineers and designers to visualize and analyze complex systems in an interactive manner. They help organizations make informed decisions through sales and marketing insights, analysis, 3D visualization, simulation, and prediction.
A digital twin is created by importing various conceptual models or scanning physical objects in the real world. It can then be used to visualize and analyze the data in combination with the information from the Internet of Things and enterprise databases. Its powerful 3D graphics technology can create interactive and lifelike representations of complex systems.
A digital twin is a representation of the forces, movements, and interactions that an object can experience in the physical world, allowing users to interact with it in real time. It can be used to simulate what-if scenarios, as well as visualize the outcomes of any situation instantly on different platforms, such as mobile devices, computers and virtual reality headsets.
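As a rough illustration of the concept, here is a minimal Python sketch (hypothetical names and a toy thermal model, not any vendor's API) of a twin that mirrors live sensor readings and answers a what-if question without touching the physical asset:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Minimal digital twin of a hypothetical pump: mirrors live IoT
    readings and can run what-if scenarios against its current state."""
    rpm: float = 0.0
    temp_c: float = 20.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        # Sync the twin's state with a live sensor reading.
        self.rpm = reading["rpm"]
        self.temp_c = reading["temp_c"]
        self.history.append(reading)

    def what_if(self, rpm_delta: float) -> float:
        # Toy model: assume temperature rises ~0.01 °C per extra RPM.
        return self.temp_c + 0.01 * rpm_delta

twin = PumpTwin()
twin.ingest({"rpm": 1500, "temp_c": 62.0})
predicted = twin.what_if(rpm_delta=200)  # explore a scenario before acting
print(predicted)
```

Real deployments replace the toy model with physics simulation and 3D visualization, but the pattern is the same: ingest live data, then query the twin instead of the asset.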
The complexity of a digital twin deployment varies depending on the stage of the project. Its creation and use can be complex, as it involves importing and analyzing data from various sources. For instance, a digital twin can be used to create a product configuration or a representation of a vast network.
The benefits of a digital twin are numerous, such as its ability to provide customers with improved access to data. It can also help them make informed decisions and reduce their maintenance costs. Having a better design from the beginning can help a project run more smoothly.
The design industry has greatly benefited from the use of digital twins, as it has allowed multi-user communication and collaboration. Preconstruction companies have also gained the ability to manage their trade transactions seamlessly.
The construction industry has also greatly benefited from the use of digital twins, as it has allowed them to reduce their errors and accidents. When used for operations and maintenance, digital twins can help improve the efficiency of a project by reducing downtime and improving the quality of work.
People are making decisions in real time, which is significantly changing how they interact with data. The ability to visualize and analyze complex operations in 3D has made it possible to enhance how we interact with our assets. This has led to a paradigm shift in how we operate and build our physical spaces and will lead us into Industry 4.0.
Data is a valuable commodity, but it is only as good as how well it can be utilized to make informed decisions. Having the necessary tools and resources to analyze and visualize it is very important for businesses.
Getting the most out of the data collected by an organization is not as challenging as it used to be, as it now requires less effort to process and analyze it. Having the right tools and resources can help businesses make informed decisions.
One of the biggest challenges that businesses face when it comes to using data is the ability to visualize and analyze it. Currently, most of the data collected by organizations is stored in various databases and spreadsheets.
As we move towards Industry 4.0, products, factories, processes, cities, and buildings will no longer be merely objects in the physical world; they will be accurately represented by digital twins. We will be able to experience the next evolution of the internet and the connected world through 3D.
The rise of digital twins has led to various opportunities for businesses, such as 3D marketing. This technology will allow them to create and deliver immersive experiences in hybrid and cross-digital environments.
Aerospace tasks are intrinsically complex. End products like aircraft and spacecraft are massively expensive to design and build, making it all the more imperative to get work done right the first time in order to avoid costly delays. From design and engineering all the way through to assembly and maintenance, digital twins improve decision-making by allowing teams to visualize and interact with computer-aided design (CAD) models and other datasets in real-time 3D.
Top use cases of digital twins in aerospace
Boeing reimagines aircraft inspection and maintenance
Boeing created an AR-powered aircraft inspection application using a digital twin of one of its planes. The twin enabled this aerospace industry leader to generate over 100,000 synthetic images to better train the machine learning algorithms of the AR application.
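Boeing's actual pipeline isn't public in detail, but the general idea of synthetic training data from a twin can be sketched as follows (a toy Python illustration with hypothetical parameters and labels, no rendering engine): because the twin knows the ground truth of the scene, every generated view comes pre-labeled.

```python
import random

def synthesize_views(base_label: str, n: int, seed: int = 0) -> list:
    """Toy stand-in for synthetic data generation: emit n randomized
    'render parameter' sets that a digital twin renderer could consume,
    each paired with a ground-truth label for free."""
    rng = random.Random(seed)
    views = []
    for _ in range(n):
        views.append({
            "label": base_label,               # known from the twin, no hand-labeling
            "camera_yaw_deg": rng.uniform(0, 360),
            "camera_pitch_deg": rng.uniform(-30, 30),
            "light_intensity": rng.uniform(0.3, 1.5),
        })
    return views

# 100,000 labeled variants, the scale Boeing reportedly generated.
samples = synthesize_views("fastener_missing", n=100_000)
print(len(samples))
```

Randomizing camera pose and lighting like this (often called domain randomization) is what lets a model trained on synthetic renders generalize to real inspection footage.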
At the start of a project, architects produce design materials, including renderings and models, to allow clients to evaluate and approve the design. The problem is that there’s no shared, collaborative environment where stakeholders can make decisions in real time. Communicating design intent during traditional reviews is difficult: static 2D and 3D models cause details to be lost in translation, renderings aren’t flexible enough, and not everyone is on the same page. Digital twins solve these problems, eliminating costly mistakes.
Top use cases of digital twins in architecture
How SHoP Architects use real-time 3D digital twins to envision skyscrapers before they're built
Award-winning architecture firm SHoP Architects and JDS Development Group, a real estate development, construction and acquisition firm, are utilizing real-time data with Unity to make decisions faster with every project stakeholder. See how a digital twin of The Brooklyn Tower, a 93-story, 1,073-foot skyscraper in New York City, saves time and money and reduces the project’s carbon footprint.
In the automotive industry, digital twins are used to simulate and test new design concepts before they are built, optimize production processes, and even predict how a vehicle will perform in different conditions. The top benefit of using digital twins for automotive OEMs is the ability to save time and money by identifying and addressing potential issues before they occur. As the industry continues to embrace this technology, it plays an increasingly important role across every workflow in the automotive lifecycle, from design and manufacturing to marketing and maintenance.
Top use cases of digital twins in automotive
Volvo Cars revolutionizes the vehicle production lifecycle
Discover how Volvo Cars embraced digital twin technology to improve design-engineering communication and collaboration, reduce reliance on physical prototype vehicles, and create more immersive and effective buying experiences.
Faced with rampant supply chain delays, labor shortages, and inflated material costs, the stakes for builders are at an all-time high. Bad data and poor decision-making can lead to expensive delays and rework. Digital twin and AR technology allow the construction industry to optimize project data, streamline collaboration, and better visualize projects from design through to operations and maintenance. By using AR to bring valuable BIM data to the field, contractors are able to capture and communicate design errors in just a few clicks, allowing stakeholders to resolve issues quickly and avoid costly rework.
Top use cases of digital twins in construction
DPR Construction leverages AR to empower field teams
Learn more about how DPR, an ENR Top 10 Contractor, is integrating AR and immersive tech into the project lifecycle to bring valuable BIM data to the field in real-time to improve team performance and reduce rework.
Using AR to empower productivity
Energy companies generate a wealth of data, especially as operations are increasingly outfitted with Internet of Things (IoT) sensors, high-definition cameras with artificial intelligence (AI) capabilities, and more. Digital technologies like real-time 3D can visualize this data to provide right-time insights, better-informing decisions around production, maintenance, safety and security, and optimization.
Top use cases of digital twins in energy
Zutari improves design of large-scale renewable energy sites
See how Zutari, a South African engineering consultancy, is using Unity’s real-time 3D development platform to automate large-scale solar photovoltaics (PV) projects to reduce the time required to develop design-level insights and decrease costs.
Using renewable energy for a sustainable future
Digital twin technology helps builders, planners, and operators across cities worldwide better understand and optimize these spaces for public use. By using advanced, interactive models and live IoT data, stakeholders are able to simulate traffic flow, mobility patterns, and even the effects of climate change and shifting landscapes surrounding key infrastructure like airports, roads, and transportation hubs. From individual facilities to smart cities, digital twins are helping owners, operators, and policy-makers manage large volumes of valuable data that will allow them to better equip our infrastructure for future demands.
Top use cases of digital twins in infrastructure
Making cities smart with digital twins
According to ABI Research, more than 500 cities will deploy digital twins by 2025. Read more about how global industry leaders within the smart city movement are leveraging Unity to bring urban digital twins to life.
Building smarter cities with digital twins
The use of real-time 3D, extended reality (XR), and AI technologies are accelerating at a rapid pace in civilian, defense and intelligence applications. New technologies are being deployed rapidly and putting challenges on government agencies and contractors that need to stay at the forefront of cutting-edge development. Digital twins help reduce the risk, time and cost of designing, developing, deploying and supporting cutting-edge applications in simulation and training and beyond.
Top use cases of digital twins in government
Rebuilding Tyndall Air Force Base with digital twin technology
The reconstruction of Tyndall Air Force Base in Florida after Hurricane Michael provides an opportunity to imagine what modern installations require and to rapidly undergo digital transformation. Learn how Tyndall’s digital twin is used to increase efficiency across planning, construction progress, operations, and maintenance.
Luxury interactive shopping is on the rise, complementing premium in-store experiences. Many luxury brands have been preparing for the future of retail for many years by creating 3D marketing experiences. Investing in this new way of selling can reduce costs and increase revenue.
Top use cases of digital twins in luxury goods
Globe-Trotter takes luxury shopping to new heights
Knowing traditional ways of selling products like photographs or rendered images wouldn’t be enough to turn shoppers into buyers, Globe-Trotter, a luxury travel accessories brand, delivered a more immersive experience to help their customers feel confident in purchasing high-priced custom luggage sight unseen.
How Globe-Trotter took luxury shopping to new heights
As emerging trends such as the fourth industrial revolution (4IR) continue to gain traction, manufacturers are using digital twin technology to transform their product lifecycle. From faster time-to-market in product development to increased productivity among frontline workers, many manufacturers are already reaping the benefits. Over 80% of companies who implemented immersive technologies identified improvements in their ability to innovate and collaborate in their production, manufacturing, and operations work phases, according to a Forrester Consulting study commissioned by Unity.
Top use cases of digital twins in manufacturing
SAP shapes the future of work with Unity
Discover how SAP sees AR, VR, and mixed reality (collectively, XR) as the next user experience frontier to reinvent field and factory operations.
How SAP uses XR to reinvent business operations
Spurred on by the pandemic, the need for retailers to leverage digital twins for design, planning, operations and more has increased exponentially. The importance of engaging customers online likewise increased overnight, and retailers looked to this technology to create immersive virtual experiences to continue connecting with shoppers. Savvy retailers are embracing digital twins to enhance processes, connect with their customers in new and profound ways, and deliver compelling digital and in-store user experiences.
Top use cases of digital twins in retail
eBay launches AI-enabled 3D display feature for sneaker sellers
Discover how the global commerce leader is bringing interactivity to their platform with the launch of their 3D TrueView feature for sneakers.
By Bobby Carlton
German carmaker Audi has unveiled a concept vehicle that uses AR glasses to show 3D content in the real world. It also eliminates the need for traditional control panels by allowing users to interact with digital features through the glasses.
Along with the AR feature, the vehicle, called the Activesphere, has a range of over 600 kilometers and produces no local emissions. It also features an 800-volt charging system and can transform into a pickup truck on demand.
Yes, we've made it to the age of transforming cars! Sort of.
The company claims that the AR features will be able to customize the content for both the passengers and the drivers. For instance, drivers will be able to access navigation information while the passengers are surfing the web.
Oliver Hoffmann, a member of the company's technical development board, said that the concept vehicles show the company's vision for the future of premium mobility and in-car virtual tools.
The concept vehicles are equipped with four AR headsets, which are designed to provide the driver and the passenger with a variety of information and services. According to the company, the system can identify the user's focus and provide more detailed data.
“The sphere concept vehicles show our vision for the premium mobility of the future. We are experiencing a paradigm shift, especially in the interior of our future Audi models. The interior becomes a place where the passengers feel at home and can connect to the world outside at the same time," said Oliver Hoffmann, Member of the Board of Management for Technical Development at Audi, adding "The most important technical innovation in the Audi Activesphere is our adaptation of augmented reality for mobility. Audi dimensions creates the perfect synthesis between the surroundings and digital reality.”
The controls for various functions are located directly in front of the vehicle elements they affect; for instance, the audio and AC controls are positioned over the speakers. Through AR technology, the vehicle can also display 3D topography and other traffic information while in off-road mode.
Along with in-car assistance and entertainment, an Activesphere passenger can take the headset out of the vehicle and use it to navigate a bike trail or find the ideal descent while skiing down a mountain. The company noted that information about the car, its battery range, and nearby charging stations can be accessed both inside and outside of the vehicle, along with weather forecasts and other advance warnings.
The vehicle's interior includes several notable features, such as an autonomous driving mode that can deactivate the steering wheel and dashboard, and a next-gen dashboard that can serve as an extra-large soundbar. Consoles holding the four AR headsets are located above the center console.
Of course, Audi is just one of many car manufacturers looking at how to bring XR technology into the driver and passenger experience. BMW actually puts drivers into VR as they test drive its vehicles, and Holoride announced a new system that lets passengers use VR for in-car entertainment through an HTC VIVE Flow headset.
For more information and photos of Audi's Activesphere concept car, click here.
By Bobby Carlton
The development of self-driving cars requires immense amounts of data. Engineers must analyze and label all of this information in order to train their neural networks. Through this data, they can then test and validate the systems that are designed to drive autonomous cars.
One of the biggest factors is the accuracy of that data, and simulation has become a crucial tool in the process, since accuracy is often what determines a system's success.
To help researchers collect realistic and physically accurate data, they are turning to simulations through the use of the NVIDIA Drive Sim platform, which is built on Omniverse. This solution is designed to provide engineers with a complete end-to-end simulation solution, and allows them to train their neural networks and perform motion control simulations.
AV sensors can be broadly categorized as passive sensors, such as cameras, which read light from the environment, and active sensors, such as lidar and radar, which emit a signal and measure its return.
Through the Nvidia Drive Sim platform, engineers can now deliver lidar models that accurately match the real world. In a recently published whitepaper, Nvidia describes the process that enables it to achieve this goal.
Active sensor data moves through the Drive Sim pipeline in two steps: first, a representation of the data is created and sent to the Omniverse world model; second, the data is passed to the NVIDIA RTX sensor API plug-in.
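Drive Sim's RTX pipeline itself is proprietary, but the core idea of an active sensor model, casting rays into a world model and recording first returns, can be sketched in a toy 2-D form (my own simplified illustration, not Nvidia code):

```python
import math

def simulate_lidar(obstacles, n_beams=8, max_range=100.0):
    """Toy 2-D lidar: cast n_beams rays from the origin and return the
    first-hit distance per beam (or max_range on a miss). Obstacles are
    circles given as (x, y, radius). A pipeline like Drive Sim does the
    same thing with RTX ray tracing against a full 3-D world model."""
    returns = []
    for i in range(n_beams):
        angle = 2 * math.pi * i / n_beams
        dx, dy = math.cos(angle), math.sin(angle)
        hit = max_range
        for (cx, cy, r) in obstacles:
            # Project the circle center onto the ray, then check the
            # perpendicular distance against the circle's radius.
            t = cx * dx + cy * dy
            if t <= 0:
                continue  # obstacle is behind this beam
            perp2 = (cx - t * dx) ** 2 + (cy - t * dy) ** 2
            if perp2 <= r * r:
                hit = min(hit, t - math.sqrt(r * r - perp2))
        returns.append(hit)
    return returns

scan = simulate_lidar([(10.0, 0.0, 2.0)])  # one obstacle straight ahead
print(scan[0])  # beam 0 points along +x and hits the surface at 8.0 m
```

Validating a simulated sensor then amounts to comparing scans like this against captures from the physical lidar in matched scenes, which is the comparison Nvidia's whitepaper describes.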
With this tool, engineers are able to collect important and accurate data that will have a significant impact across many industries.
You can read Nvidia's Drive Sim/LiDAR validation white paper here.