
Part of the FS Studio team was in NYC this week working with recruiters across multiple industries, showing them how XR technology can help source new talent and keep employees engaged with meaningful training, and how a virtual hub can connect remote teams.

While in NYC, we took a moment to head over to Brooklyn for an ARHouseLA NYC meetup to hang out with AR and VR creatives and check out ZeroSpace, a massive 40,000 sq. ft. XR art space featuring a fixed-install LED XR stage, a Vicon motion capture stage, and rentable warehouse space for film/photo shoots and live event production.

XR Stage

Not only does ZeroSpace have an amazing motion capture stage, it also has a massive XR stage that produces truly impressive XR footage.

Stage Dimensions: 13’ (h) x 38’ (w) x 24’ (d)

Check out our tweet below showing video of it in action and its scale.

Elena Piech, an XR/Web3 Producer at ZeroSpace, gave attendees a tour and talked about the work being done there and how the space is used for TV, film, and corporate events, saying, "the space is designed to spark creativity, and lets TV and film studios unlock their ideas."

Of course, we've seen large productions such as Disney's The Mandalorian and Warner Bros.' The Batman use virtual sets to control the environment and speed up the filming process, switching virtual locations with a simple click of the computer using Unreal Engine.

According to a VRScout article, creatives used Unreal Engine's new production tool to manipulate the entire scene, including all of the special effects, changing it live on set in real time. In one example provided by Unreal Engine from a commercial shoot, a rock in a scene needed to be moved to help with the camera shot. To do that, the filmmakers simply picked up the rock and moved it, virtually, through a device such as an iPad.

Creators also have the power to change things such as lighting with a simple fingertip gesture. Slide your finger up, down, left, or right and the lighting angle changes in a way that affects the CG environment as well as the actors and props; with just a few simple gestures you can instantly change the time of day from sunrise to nighttime.

By Bobby Carlton

The Internet of Things (IoT) is a system of devices and objects that can connect to and communicate with other systems and devices without human intervention. These objects or devices typically carry sensors, cameras, or RFID tags and talk to one another through a communication interface, allowing the system as a whole to perform various tasks and deliver a unified service to the user.

The truth is that IoT is the foundation and backbone of digital twinning.

As we become more digitally connected in almost all aspects of our lives, IoT becomes a vital component of the consumer economy by enabling the creation of new and innovative products and services. The rapid emergence and evolution of this technology has led to the creation of numerous opportunities but also some challenges.

Due to technological convergence across different industries, the scope of IoT is becoming more diverse. It can be used in various fields such as healthcare, home security, and automation through devices like Roombas and smart speakers, supported by embedded systems such as sensors and wireless communication modules that automate your home or business.

With the rapid increase in the number of connected devices and the development of new technologies such as AR, VR, and XR, the adoption of these products and services is expected to keep climbing.

According to Statista, the global IoT market is currently valued at around 389 billion U.S. dollars and is expected to pass one trillion dollars by 2030, reflecting the growing number of connected devices and the technological advances driven by the growth of digital twinning. That growth is also expected to boost the consumer economy by increasing demand for various products and services.

In 2020, the consumer market contributed around 35% of the IoT market's value, a share expected to reach 45% by 2030 as new segments such as automotive, security, and smartphones expand the market.

At its core, the Internet of Things is a device layer that brings connectivity to devices that were previously not connected to the internet. It can also act as a connective link between different devices, such as tablets and smartphones.

These devices can connect over various wireless and physical networking solutions, communicating with one another and with the cloud. Through their sensors, these systems provide users with a variety of services and features, and they can be controlled and customized through a user interface, typically a website or mobile app.

A typical smart bulb IoT system consists of components such as a wireless communication interface, the LED light-generating hardware, and a control system. These components work together seamlessly, with the user accessing the device through a mobile app or website. A familiar example is a Google Nest system that monitors your front door and your home thermostat, available at almost any hardware or lifestyle store.
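To make those moving parts concrete, here is a minimal sketch of how an app-to-bulb control path might look over MQTT, a protocol commonly used for this kind of device messaging. The broker address and topic names are hypothetical, and the snippet assumes the paho-mqtt 1.x client API.

```python
# Minimal sketch of a smart bulb control path over MQTT.
# Assumptions: a local MQTT broker at "broker.local" and a bulb that
# subscribes to the (hypothetical) topic "home/livingroom/bulb/set".
import json

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.local"                 # hypothetical broker address
COMMAND_TOPIC = "home/livingroom/bulb/set"   # hypothetical topic names
STATE_TOPIC = "home/livingroom/bulb/state"

def on_state(client, userdata, message):
    # The bulb reports its current state back on a state topic.
    print("bulb state:", json.loads(message.payload))

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_state
client.connect(BROKER_HOST, 1883)
client.subscribe(STATE_TOPIC)

# The "control system" side: the app publishes a command, the bulb's
# communication interface receives it and drives the LED hardware.
client.publish(COMMAND_TOPIC, json.dumps({"power": "on", "brightness": 80}))

client.loop_forever()  # keep listening for state updates
```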

Image from Target

Aside from these, other IoT systems such as smart televisions, smart refrigerators, and smart speakers are also becoming more popular among consumers. These devices can be combined with a home's existing smart home technology to provide a variety of services and features designed to streamline and automate the home experience.

Of course privacy and data are two things consumers and businesses need to consider when bringing these devices into their environments. How much are you giving up in order to streamline or automate your home or business? We are already in the habit of giving up some of our privacy through smartphone use and other wearables.

One of the most common uses of IoT technology in the consumer economy is improving customer service. Enterprises use it to make their distribution channels more efficient through systems such as inventory management and product tracking, while construction sites and connected cars use IoT to monitor their environments, reduce downtime, and improve overall performance.

Other industries that make heavy use of IoT include government facilities, transportation systems, and healthcare systems. Through IoT, these organizations can run their operations more efficiently and make their systems more effective, which in turn benefits the consumer economy by improving the services they provide.

Connectivity and data technology have also improved, with devices now capable of handling and storing large amounts of data, and the ability to process and analyze that data is becoming more sophisticated. Factors such as the evolution of cloud technologies and the growing capacity of storage systems have made it easier for devices to store and process data.

The number of companies and organizations investing in the development of IoT devices is expected to keep growing, helping them gain a competitive advantage and develop new solutions that will significantly impact the consumer economy.

By Bobby Carlton

When it comes to striking that perfect balance between realism and cartoony avatars usable across multiple VR platforms, Ready Player Me is the leader. A Ready Player Me avatar takes only minutes to make and can be personalized with clothing, fashion accessories, and even outfits from popular movies, and that could be a big deal for Enterprise adoption.

Last week, the company announced that it raised $56M in a round led by a16z to help grow the business and connect people to the metaverse in a more meaningful way. This is a huge leap for avatar technology, but it is also a big step for the metaverse as more people and companies explore the potential of these virtual worlds.

Creating your own 3D avatar is incredibly simple. No coding skills are needed, and you can import the result into platforms such as Spatial, Mozilla Hubs, VRChat, and others with ease by copying and pasting a code the software generates for you.
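Under the hood, the avatar behind that generated code is delivered as a standard 3D model file. As a minimal, hedged sketch (the avatar ID in the URL below is a made-up placeholder, not a real one), fetching the model is as simple as:

```python
# Minimal sketch: a Ready Player Me avatar is delivered as a standard
# 3D model (.glb) behind a generated URL. The avatar ID below is a
# made-up placeholder; the real URL comes from the avatar creator.
import urllib.request

AVATAR_URL = "https://models.readyplayer.me/EXAMPLE_AVATAR_ID.glb"  # placeholder

urllib.request.urlretrieve(AVATAR_URL, "avatar.glb")
print("saved avatar.glb - ready to import into a supported platform")
```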

You may think of avatars as something you would only use in social VR platforms or games, but there is a big push to bring this type of virtual representation into work environments. Ready Player Me has already positioned itself in Enterprise solutions by lining up dozens of partners that use its avatar technology for corporate training, team building, and even avatar creation as part of the onboarding steps for new employees.

As companies establish their digital twins in platforms like Mozilla Hubs, MeetinVR, Glue, Virbela, and others, avatars are how we represent ourselves as employees in VR, and they help create a diverse workforce in both the real world and the metaverse. Employees expect inclusion, culture, and heritage to be represented at work.

Last year, 24 companies adopted Ready Player Me avatars for employee representation in the metaverse, and with this new round of funding, the company looks to push that number even higher.

FS Studio VR Hub
Image from FS Studio

Timmu Tõke, CEO of Ready Player Me, believes that being able to represent your individual heritage in the metaverse, whether you're meeting up with friends for a concert or joining a client meeting, is important for all of us.

The thought is that your skin tone, your hair, the shape of your eyes, and how you dress all make up who you are and are part of the story behind you the person, and you the employee.

In an interview with GamesBeat, Tõke talked about how his company will bring that representation and consistent identity across all experiences, saying, “We’re doing cross-game answers for the metaverse, as we saw that people spend a lot of time in virtual worlds.” Tõke added, “The metaverse is not one app, or one game or one platform. It’s a network of thousands of different virtual worlds. So it makes sense for users to have an avatar to traverse across many different virtual worlds.”

Image from Ready Player Me

“You have to build the network out for diversity as a developer tools company,” said Tõke in an interview with VRScout. “That’s where we spend most of our time.”

The metaverse is expanding every day, with more social experiences and more companies and industries uncovering its potential for everything from connecting consumers through a metaverse portal to marketing, B2B, employee training and recruitment, and improvements in automation, robotics, infrastructure, warehouse management, and so much more.

Earlier in the year, Ready Player Me announced a partnership with the AR company 8th Wall that lets you bring Ready Player Me avatars into any 8th Wall AR experience using A-Frame, which could bring more personalization to AR training initiatives such as on-the-fly training or reskilling. It could also affect how companies approach marketing, recruitment, and onboarding.

Tõke realizes that we're not totally there yet, but the metaverse is gaining a lot of momentum. “Based on our rapid growth rate (40% month on month), I think it is fair to say the VR industry is booming right now, and expanding quicker than many people realize. Like any new technology, however, its success largely depends on how quickly it is adopted by consumers, and in that respect we still have some way to go.”

You can create your own individual custom avatar at readyplayer.me.

By Bobby Carlton

Warehouse automation systems may seem like they're a dime a dozen; however, each approach is different, with some relying on humans to manage them, many others relying on robotics and automation, and of course a blended approach where automation, robotics, and humans work together.

One solution is using AI to drive automation alongside other technologies such as robotics and XR. Data shows that we can improve work environments through automation, but getting everyone around the world to adopt the approach isn't that easy.

However, a new global initiative to create global efficiencies is a hot conversation at the moment. AI and automation are about to drastically change the way businesses (large and small) and even governments operate, through a push that includes cutting-edge technology such as natural language processing, machine learning, and autonomous systems built on robotics and XR solutions.

The objective of the Artificial Intelligence Act is to create a safer and more efficient work process that can help organizations explore "what if" scenarios, be more predictive, weigh recommendations and different paths to success, and even help company leaders make important company-wide decisions.

One thing to keep in mind is that regulation varies in different parts of the world, across China, the European Union, and the U.S., and as businesses invest their resources into AI and automation, they will have to ensure they comply with all of the regulations in place.

For example, the Chinese government is being a bit more forward-thinking by moving AI regulation beyond the proposal stage: it has already passed a rule mandating that companies notify users when an AI algorithm (or avatar) is involved. This means any business in China must adopt AI and automation compliance measures, which will impact both customers and the workforce.

The European Union's approach, meanwhile, has a much broader scope than China's. The focus of the EU's proposed regulation is on the risks created by AI, sorted into four categories: minimal risk, limited risk, high risk, and unacceptable risk. Pairing AI with automation applications would help companies comply through human oversight and ongoing monitoring of facilities using robotics and XR solutions.

Those companies will be required by law to register stand-alone high-risk AI systems such as remote biometric identification systems. 

Once passed, the EU would implement this process by Q2 of 2024, and companies could see hefty fines for noncompliance ranging from 2% to 6% of the company's annual revenue.

Here in the United States, the approach is more fragmented, with each state creating its own version of AI and automation laws, which, as you would guess, could end up being pretty confusing for anyone, especially for companies with warehouses or offices in multiple states. To help create a more unified approach, the Department of Commerce announced the appointment of 27 experts to the National Artificial Intelligence Advisory Committee (NAIAC). The committee will advise the President and the National AI Initiative Office on a range of important issues related to AI and other technologies such as robotics and XR, and their use in automation across all states, helping tighten up the AI and automation goals in the U.S.

It will also provide recommendations on topics such as the current state of U.S. AI competitiveness, the state of the science around AI technology, and AI workforce issues. The committee will also be responsible for providing advice on the management and coordination of the initiative itself, including its balance of activities and funding.

What all of this means is that governments want their businesses to embrace and adopt new technology as part of their workforce solutions. They are well aware of the benefits of AI, XR, robotics, and automation in the workforce, and of how those benefits have a global impact on business, consumerism, and a country's overall economy.

At the heart of all of this is manufacturing and warehouses.

Manufacturing companies could use AI, warehouse automation, and XR to access latency-sensitive information such as anomaly detection and real-time quality monitoring, and then respond ultra-fast. That would let manufacturers act immediately to prevent undesirable consequences, streamline productivity, increase workforce safety, and automate warehouse processes so equipment is maintained in a timely manner, preventing shutdowns and dangerous environments.

AI and automation would provide real-time prediction capabilities that let you deploy predictive models on edge devices such as machines, local gateways, or servers in your factory, and they play a role in accelerating Industry 4.0 adoption.
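To illustrate the kind of lightweight, latency-sensitive check that could run on such an edge device, here is a minimal sketch of rolling z-score anomaly detection; the sensor readings and threshold are made up for illustration:

```python
# Minimal sketch: rolling z-score anomaly detection on a sensor stream,
# the kind of lightweight predictive check that could run on an edge
# device. The readings and threshold below are illustrative only.
from collections import deque
from statistics import mean, stdev

WINDOW = 50        # number of recent readings that model "normal"
THRESHOLD = 3.0    # flag readings more than 3 standard deviations out

window = deque(maxlen=WINDOW)

def check_reading(value: float) -> bool:
    """Return True if the reading looks anomalous vs. recent history."""
    if len(window) >= 2:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(value - mu) / sigma > THRESHOLD:
            return True  # caller can stop a machine, alert a tech, etc.
    window.append(value)
    return False

# Hypothetical usage with a simulated vibration-sensor feed:
readings = [1.0, 1.1, 0.9, 1.05] * 20 + [9.7]  # last value is a fault
for r in readings:
    if check_reading(r):
        print(f"anomaly detected: {r}")
```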

By Bobby Carlton

Advancing spatial computing and building the Enterprise Metaverse requires large-scale collaboration across the industry. Having the proper tools at your disposal is important when it comes to using AR/VR as an Enterprise solution.

Lenovo's ThinkReality team is actively working with a growing ecosystem of enterprise AR app developers to offer ready-to-deploy solutions for companies seeking XR technology as a solution.

The Lenovo ThinkReality platform is designed to provide a scalable and streamlined path from proof of concept to productivity for enterprise AR/VR applications, letting companies focus on problem-solving across diverse hardware and software. Beyond streamlining productivity, the approach lets you build, deploy, and manage enterprise applications and content on a global scale.

To help bolster the adoption of XR for Enterprise solutions and to show its commitment to supporting ThinkReality, the company recently launched the following collaborations with AR app developers:

Lenovo ThinkReality announced that it will be working with TechViz, a leader in 3D visualization software, to offer a solution for visualizing data in augmented reality (AR) from CAD files used in design, engineering, and architecture. The specially developed version of TechViz software, combined with the ThinkReality A3 PC Edition, allows users to switch seamlessly from their desktop CAD application to a 1:1-scale 3D representation of their model in AR.

Image from Lenovo

While wearing the AR smart glasses, engineers can view both their PC screen and the virtual model in their real-world workspace at 1080p resolution, with the ability to make changes in the CAD environment and check them in 3D. The ThinkReality A3 with TechViz software can display content directly from the most commonly used CAD software without data conversion. Before this solution, engineers and designers needed separate workflows to work on the model and then visualize the result in a headset.

In addition, CareAR, an augmented reality (AR) Service Experience Management company, and Lenovo announced a collaboration to deliver an improved, smarter service experience for ServiceNow-empowered field technicians and end users. As part of this cooperation, Lenovo will integrate CareAR's service experience management platform with Lenovo's ThinkReality A3 smart glasses to deliver immersive, AR-powered visual interaction, instruction, and insight.

Image from CareAR

Through the combined solution, a ServiceNow-enabled field technician wearing Lenovo smart glasses can connect with an outside expert who, through CareAR technology, can see exactly what the technician sees and provide easy, step-by-step instructions that the technician can follow from within the smart glasses' field of view.

Along with these partnerships, Micron turned to Lenovo's ThinkReality technology to oversee a fleet of devices and help scale its business into the future. Launching ThinkReality took only a few months and helped Micron reestablish a more efficient workflow after the disruption caused by the COVID-19 pandemic.

Lenovo sees its ThinkReality platform and XR technology playing a very critical role in building the foundation for running a business in today's digitally connected world and through the multiple layers of what is the metaverse.

Lenovo ThinkReality has also recently partnered with Qualcomm’s newly launched Snapdragon Spaces program to support the development of AR applications and help grow the enterprise AR market. 

Computer vision techniques are used to make decisions from pictures and other data.

For example, computers can analyze traffic patterns on streets to estimate how long it will take you to get from one side of town to the other, based on current travel speeds and the distance traveled by car or bus.
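The underlying arithmetic is simple. Here is a tiny sketch with made-up numbers; a real system would derive speeds from live traffic data per road segment:

```python
# Minimal sketch: the arithmetic behind a travel-time estimate.
# Values are made up; a real system would measure current speed
# from live traffic data for each road segment along the route.
distance_km = 12.0          # route length across town
current_speed_kmh = 32.0    # average speed observed on that route

eta_minutes = distance_km / current_speed_kmh * 60
print(f"estimated travel time: {eta_minutes:.1f} minutes")  # 22.5 minutes
```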

The field of computer vision has grown into an integral part of our future, and many valuable applications derived from this study will help shape the world as we know it!

From seeing people walk by us on the street to understanding what is happening in a live video feed from another country, computer vision techniques will help make this possible. 

These five groundbreaking computer vision innovations will change how you see the world:

1) 3D Object Recognition

2) Smart Cameras That Know What To Focus On

3) The Future Of Facial Recognition

4) Seeing Around Corners

5) Seeing Through Objects And Beyond Sightlines

Image Classification

Image classification might be one of the best-known computer vision techniques.

One of the biggest problems we often encounter is classifying visual data. Fortunately, image classification systems come in handy for identifying and categorizing items.

Image classification relies on labeled images, which are used to train machines and to test them on new tasks, letting models adapt quickly without wasting time learning unnecessary information.

For example, you are given a set of labeled images and tasked with predicting the correct category for new, unseen images. There is a lot to overcome, like changes in scale or viewpoint.

Lighting conditions may also change: a photograph taken an hour ago when it was sunny will look very different now that it's dark outside.

How do we create computer vision algorithms that will correctly classify images into their proper categories?

There is an interesting data-driven approach to this problem. Instead of trying to specify in code what each image category should look like, researchers give the machine learning model many examples from each class and let it learn how the classes differ visually by studying them.

Researchers then measure how many classifications were correct, aiming for accuracy high enough that objects or people in one group are not misclassified into another based on visual appearance alone.

Other examples include labeling an x-ray as cancerous or not (binary classification), classifying handwritten digits, sorting handwriting styles into categories like script versus informal inscription, and assigning names to photographs based on identifying features present within the face itself.

Semantic Segmentation

Semantic segmentation is the process of understanding what each pixel in an image means. For example, it's not enough to detect a car or person; you need all boundaries marked so that these objects can be appropriately classified.

In other words, objects are differentiated from one another both by their location on screen and by appearance features that indicate category membership.

Read more: Industrial Applications of Computer Vision and Why it Matters?

To be more specific, semantic segmentation attempts to understand each pixel's part in a given image. 

Thus, it is not enough to detect people or cars; you also need to know where all the boundaries are so your model can make accurate dense predictions for those entities.

Semantic segmentation identifies and labels objects in an image, such as those found in a self-driving car's environment. The process requires understanding how each object relates to the other pieces of its surroundings.

Deep learning has contributed significantly here, delivering incredible breakthroughs when applied across computer vision tasks like image classification, speech recognition, and object detection.

Semantic segmentation is put into practice in many applications, including robotic navigation systems, which rely heavily on identifying potential obstacles before they become serious safety hazards.

Semantic segmentation also helps in mining data for insights, actionable intelligence, and real-time decision-making with automated computer vision systems.

Semantic segmentation also helps pick out individual objects that a human alone may not easily recognize in video or image fragments.
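For a sense of what dense, per-pixel prediction looks like in practice, here is a minimal sketch using a pretrained DeepLabV3 model from torchvision; the image path is a placeholder:

```python
# Minimal sketch: per-pixel class prediction with a pretrained
# DeepLabV3 segmentation model from torchvision. "street.jpg" is a
# placeholder path; the model assigns one of 21 Pascal VOC classes
# (person, car, etc.) to every pixel.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("street.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # shape: [1, 3, H, W]

with torch.no_grad():
    out = model(batch)["out"]         # shape: [1, 21, H, W]

# argmax over the class dimension gives each pixel's label
mask = out.argmax(dim=1).squeeze(0)   # shape: [H, W], integer class ids
print("classes present:", mask.unique().tolist())
```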


Object Detection

Though some people think object detection and image classification are the same, there are significant differences.

Object detection is the next logical step in computer vision after image classification. With object detection, you can tell not only what objects are present but also where they are within a photograph or video clip!

Image recognition only outputs a class label for an identified object, while image segmentation goes to the other extreme, looking at individual pixels to determine what is happening in them; object detection sits in between.

What separates this task from others is that you aren't looking at one particular thing in isolation but instead detecting and counting everything in the scene, tracking objects' relative positions across time and space.

Read more: Incorporating 3D Artificial Intelligence with AR /VR Technology in Industrial Tech

The most common approach here is to find the object's class in the image and localize it with a bounding box.

You can try this yourself with an off-the-shelf object detection model such as YOLOv5 if you want to see what all of these buzzwords mean in practice!
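As a quick, hedged illustration, here is a minimal sketch that loads the open-source YOLOv5 model through torch.hub and prints its bounding-box predictions; the image path is a placeholder:

```python
# Minimal sketch: bounding-box object detection with the open-source
# YOLOv5 model via torch.hub. "street.jpg" is a placeholder path.
import torch

# Downloads the small YOLOv5s model with pretrained COCO weights.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("street.jpg")

# Each row: x1, y1, x2, y2, confidence, class id
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"{model.names[int(cls)]}: {conf:.2f} at {[round(v) for v in box]}")
```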

Object Tracking

Tracking a specific object or multiple objects in a scene is helpful for autonomous driving systems, because it allows a vehicle to follow its targets without getting distracted by everything else going on around it, such as people walking and cars passing through an intersection, all while maintaining control over speed so it doesn't crash into anything!

Object tracking has traditionally been used with video technology, where it lets us observe what we're looking at more clearly.

There are many different ways this same process could come into play in self-driving cars from companies like Uber and Tesla.

Object tracking methods fall mainly into two categories according to their observation model: generative methods and discriminative methods.

The generative method uses a model that describes the apparent characteristics of the object and searches for the image region that minimizes reconstruction error. That may sound complicated, but a classic example simply applies Principal Component Analysis (PCA) to the data set, reducing each image patch to a small number of variables that capture how much variation each pixel contributes across its color map. Such compact appearance models also make it easier for software such as facial recognition to locate landmarks.
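To show the basic frame-to-frame mechanics, here is a minimal sketch of a nearest-centroid tracker that links detections across frames; the coordinates are made up, and real trackers add appearance models and motion prediction on top of this:

```python
# Minimal sketch: nearest-centroid object tracking. Each frame's
# detections (box centers) are matched to the closest existing track.
# The coordinates below are made up; real trackers use appearance
# models and motion prediction, not just distance.
import math

next_id = 0
tracks = {}       # track id -> last known (x, y) center
MAX_DIST = 50.0   # beyond this, treat a detection as a new object

def update(detections):
    global next_id
    for (x, y) in detections:
        # find the nearest existing track
        best = min(
            tracks,
            key=lambda t: math.dist(tracks[t], (x, y)),
            default=None,
        )
        if best is not None and math.dist(tracks[best], (x, y)) < MAX_DIST:
            tracks[best] = (x, y)      # same object, update its position
        else:
            tracks[next_id] = (x, y)   # new object, start a track
            next_id += 1

# Two simulated frames: one object drifting right, plus a newcomer.
update([(100, 100)])
update([(108, 102), (300, 250)])
print(tracks)  # {0: (108, 102), 1: (300, 250)}
```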


Image Reconstruction

Sometimes, photos fade over time and their colors become less vibrant. Training datasets for restoration usually pair current photo databases with corrupted versions of the images, so that models can learn what the damage looks like before attempting to repair it.
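As a small taste of restoration in code, here is a minimal sketch using OpenCV's classic inpainting function to fill in damaged regions; the file names are placeholders, and modern restoration models are learned rather than hand-crafted:

```python
# Minimal sketch: repairing damaged image regions with OpenCV's classic
# inpainting. "old_photo.jpg" is a placeholder; the mask marks damaged
# pixels (white = repair here). Modern restoration uses learned models,
# but the goal is the same: fill in plausible content.
import cv2
import numpy as np

img = cv2.imread("old_photo.jpg")

# Toy mask: pretend a horizontal scratch runs across the photo.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:105, :] = 255

restored = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("restored.jpg", restored)
```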

In the future, we will see an increase in computer vision models for business challenges.

One of IBM's projects, for example, uses AI to help people find their misplaced eyeglasses; it analyzes images from thousands of feet away and identifies universal shapes.

Artificial intelligence is being applied across different industries and fields, and companies leading research in this area, like Pinterest, are investing heavily in its applications.

With computer vision, there are still lingering security concerns, since the field is notorious for its black-box decision-making.

This means users can become wary of machines that use data to predict their every move and determine things like credit risk or health status, all from one point in time!

Nonetheless, with rapidly developing AI protection standards, these privacy issues should be remediated before long.

We've seen how computer vision techniques can help us identify and interpret data in various industries, from medicine to finance.

This means that technology is getting ever more sophisticated at processing information about us, and we're just beginning to scratch the surface of its potential applications!

The future is here, and it's almost too good to be true. We'll all have access to a whole different world of possibilities with these new computer vision techniques.
