The 3D model of the Tesla Gigafactory is one of the most incredible things we have built at FS Studio.
Technologies like AI, machine learning, perception engines, and imaging systems are capable of bringing disruptive change across a broad spectrum of industries.
FS Studio uses these state-of-the-art technologies to provide services and build innovations in mixed reality (AR and VR), spatial computing, embedded device development, IoT, WebXR, simulation, and AI hand tracking.
How was the 3D Model of the Tesla Gigafactory Built?
Generating a 3D digital twin of a site from drone footage alone is an immense undertaking. FS Studio's new approach to constructing the 3D model of the Gigafactory sidesteps the traditional method of building a digital twin from manual measurements and calculations.
Combining the power of AI with video photogrammetry, the project used various pipelines and models to build the 3D model from the daily Gigafactory construction footage that various third parties post on YouTube.
FS Studio uses several technologies and techniques in this project; videogrammetry is one of them. Videogrammetry is a technique for extracting information about a real-world object or environment from video footage alone. For example, we can use videogrammetry to derive accurate measurement and distance data for an object, usually from aerial footage of the object or site.
FS Studio uses videogrammetry to analyze different pieces of drone footage and compute the additional data needed to build the model, such as the Tesla factory's layout and the dimensions of its buildings.
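The measurement side of videogrammetry ultimately rests on the pinhole-camera relation: an object's real-world size can be recovered from its size in the image once the focal length and the camera-to-object distance are known. The following is an illustrative sketch of that relation, not FS Studio's actual pipeline; the function name and values are assumptions for demonstration.

```javascript
// Similar triangles in the pinhole camera model:
// realSize / distance = imageSize / focalLength
function realWorldSize(imageSizeMm, focalLengthMm, distanceMm) {
  return (imageSizeMm * distanceMm) / focalLengthMm;
}

// Example: a structure spanning 2 mm on the sensor, filmed with a
// 24 mm lens from 1,200 m away, works out to 100 m across.
const widthMm = realWorldSize(2, 24, 1_200_000);
console.log(widthMm / 1000); // width in metres
```

In practice a photogrammetry pipeline solves for camera pose and distance automatically across many frames, but each measurement reduces to this same geometry.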
In this project, the developers are using TensorFlow.js along with Three.js. TensorFlow.js is an open-source library for machine learning tasks such as training, testing, and running models entirely in the browser.
Together, TensorFlow.js and Three.js are used to create the 3D model and to represent and manipulate it on a 2D web page, right in the browser.
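Displaying a 3D model on a 2D page comes down to perspective projection: each 3D vertex is mapped to a 2D screen coordinate by dividing by its depth. The sketch below shows that core math independently of Three.js (which performs it internally via camera and projection matrices); the function and parameter names are illustrative assumptions.

```javascript
// Project a camera-space 3D point onto a 2D screen.
// Points farther away (larger z) land closer to the screen centre.
function projectToScreen(point, focalLength, screenWidth, screenHeight) {
  const { x, y, z } = point; // camera-space coordinates, z > 0 in front
  return {
    sx: screenWidth / 2 + (focalLength * x) / z,
    sy: screenHeight / 2 - (focalLength * y) / z, // screen y grows downward
  };
}

// A point 2 units right, 1 up, and 4 deep, with focal length 400,
// lands 200 px right of centre and 100 px above it.
const p = projectToScreen({ x: 2, y: 1, z: 4 }, 400, 800, 600);
console.log(p.sx, p.sy);
```

Libraries like Three.js wrap this in a full scene graph with lighting, materials, and WebGL rendering, but every frame still reduces to this per-vertex projection.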
On top of building the Tesla Gigafactory 3D model, the team also built hand-tracking technology that can control and manipulate the model. The hand-tracking system uses two models: one detects the palm, while the other tracks the hand skeleton. About 25 tracking points, combined with the output of both models, enable full hand tracking.
The palm detection model and the Hand Landmark Model run simultaneously to track the hand and power the features the system offers, such as pinch to zoom and scale, and moving the hands to control and navigate the Gigafactory 3D model.
To enable these functions and visualizations, the system uses Microsoft's HoloLens 2. Backed by Microsoft's cloud computing service, Azure, the headset's front-facing vertical-cavity surface-emitting laser (VCSEL), time-of-flight (ToF) sensor, and a conventional camera work together to enable hand tracking and its representation in augmented reality (AR) on the HoloLens 2.
A VCSEL is a semiconductor laser diode that emits its optical beam perpendicular to its surface. This gives it advantages over traditional edge-emitting lasers (EELs), which emit from the side of the chip, and light-emitting diodes (LEDs), which emit from the top and sides. Because it emits vertically, a VCSEL can be fabricated and tested at the wafer level, allowing more efficient, controlled, and low-cost production.
The ToF sensor, on the other hand, extracts depth information by emitting infrared light and detecting the reflection of that signal. In this project, the ToF sensor works in combination with the VCSEL and an ordinary camera to provide touchless sensing and gesture recognition through the power of AI.
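The core time-of-flight relation is simple: the emitted light travels to the object and back, so depth is half the round-trip time multiplied by the speed of light. A minimal sketch of that calculation, with illustrative values (not the HoloLens 2's actual firmware math):

```javascript
const SPEED_OF_LIGHT_M_PER_S = 299_792_458;

// depth = c * t / 2, since the light pulse covers the distance twice.
function depthFromRoundTrip(roundTripSeconds) {
  return (SPEED_OF_LIGHT_M_PER_S * roundTripSeconds) / 2;
}

// A ~6.67 ns round trip corresponds to roughly one metre of depth.
console.log(depthFromRoundTrip(6.67e-9).toFixed(2));
```

The nanosecond timescales involved are why ToF sensors measure phase shift or use precise gated timing rather than a simple stopwatch, but the depth they report follows this formula.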
By integrating these different technologies into a single system, FS Studio successfully produced a 3D model of the Tesla Gigafactory in Texas, complete with hand tracking and gesture recognition for manipulating and controlling the model right in the web browser.
The Vision behind the Mission
Tony Rogers, Director of Emerging Technologies at FS Studio, was behind the vision to reconstruct the Tesla Gigafactory in the 3D digital world. The vision came to fruition through his fascination with the drone construction footage of the Gigafactory posted almost daily on YouTube. Along with the digital twin, hand tracking for 3D model manipulation, with functions like zoom, scale, and translation, is also integrated into the system.
Elon Musk’s Tesla makes waves across the electric vehicle (EV) industry, R&D, global supply chains, battery production, and construction through its Gigafactories. Its Gigafactory in Austin, Texas, is an automotive manufacturing facility planned to be Tesla’s leading production site.
Among the many products planned for the Gigafactory, the Tesla Cybertruck and Tesla Semi are the primary ones. Beyond that, the Texas factory will serve as a battery production facility and battery warehouse, alongside production of the Model 3 and Model Y.
It is a massive project that will give Tesla a whopping 250 gigawatt-hours of battery production capacity. The Gigafactory will span roughly 4–5 million square feet, at a cost of about 400 million US dollars.
Among Tesla’s many operations, this Gigafactory’s construction is therefore one of the most important, and it makes an excellent site for FS Studio’s endeavor to build a digital twin of a factory using only drone footage publicly available on YouTube.
Benefits and Advantages of this Technology
With hands-free technology, the future of the 3D web looks very exciting. Technology of this kind opens up endless possibilities and innovations that will let users work with virtual reality and augmented reality seamlessly across applications and platforms. Combined with 3D models and simulations, it enables powerful applications both as embedded or standalone systems and right in the web browser.
When paired with other technologies, such as drones controlled with the help of AI and ML, it lets us design and create 3D digital twins efficiently and rapidly anywhere in the world. This makes the technology especially advantageous for construction companies.
They can view and analyze different sites and construction progress without traveling to every corner of the area since the 3D Model covers every part of the site where shooting aerial footage is possible. Therefore, processes like site monitoring, inspection, supervision, etc., would be very efficient and fast, especially for large construction projects.
Combined with advanced drone technology controlled through AI and ML algorithms, this tech would enable companies to generate digital twins of various sites and objects almost daily for regular surveys, without ever having to visit the site. It would empower rapid assessment and scanning of areas, open numerous design possibilities, and redefine the workflow.
This type of technology is a massive advantage for construction companies and other parties from different industries.
Take, for instance, vendors who provide maintenance and repair services. For large-scale maintenance or repair projects, this tech not only reduces the time required to inspect and assess the damage or maintenance needed, it also makes the process much safer and more efficient.
Rescue missions likewise require scanning large or unsafe areas, with proper planning and knowledge of the damage or destruction, to execute the task safely. This tech would enable rapid scanning of sites with substantial safety margins for humans and would speed up rescue operations, a massive achievement in this field.
Across industries, every project or mission that requires scanning and inspecting a site, together with a digital twin of it, will benefit tremendously from this type of tech.
Tony Rogers, who conceived this project of making a digital twin of the Tesla Gigafactory in Austin, Texas, believes the future belongs to hands-free technology seamlessly integrated with augmented reality (AR) and virtual reality (VR), with artificial intelligence (AI) and machine learning (ML) at its core.
With the advent of these emerging, cutting-edge technologies, FS Studio is working hard to develop and provide similarly innovative solutions that prepare businesses and industries for a future of efficient, rapid product development and R&D.