By Bobby Carlton
Individuals who are deaf or experience hearing loss can now read speech in real time through closed captioning displayed on AR glasses. XRAI Glass has launched a suite of solutions that allow users to experience conversations through AR, according to an article at Auganix.com.
The software, also called XRAI Glass, converts audio into captions that are displayed on the user's AR glasses. It can also recognize the voice of the speaker and translate conversations in nine different languages.
To use the app, users need a pair of Nreal Air AR glasses tethered to a mobile device. Through its partnership with Nreal, XRAI Glass is able to provide users with a device for viewing conversations. After pairing the app with their Nreal Air glasses, users see the real world enhanced with digital captions.
On top of real-time audio transcription, the XRAI Glass app also includes the following features:
With the command "Hey XRAI," users can access an AI-powered personal assistant, much like Siri or Alexa, and ask questions such as what the weather is like in their area. The answer is then automatically displayed on the glasses, where only the wearer can see it.
You'll also be able to recall a conversation from the previous day by saying "Hey XRAI, what was I asked to pick up yesterday?"
XRAI Glass can also transcribe and subtitle conversations in nine of the world's most spoken languages: English, Mandarin, French, German, Italian, Japanese, Korean, Portuguese, and Spanish. The company plans to roll out more languages in the near future.
In a previous announcement, the company revealed that its software would soon be able to detect variations in pitch, accent, and tone of voice, which play a huge part in how we communicate, and will have an impact on Web3 experiences.
Dan Scarfe, the founder and CEO of XRAI Glass, said that the company was thrilled to announce the availability of its technology worldwide. He noted that the company's goal is to provide a solution that will help people with hearing loss connect with their communities. Through its partnership with various organizations, such as DeafKidz International, the company was able to test the product and learn from its users.
Due to the capabilities of XRAI Glass, the company has been able to help more people than it initially thought possible. For instance, neurodivergent people who have difficulty processing sound and speech have also been able to benefit from the technology.
XRAI Glass and Nreal's partnership could also assist in work environments with loud background noise where verbal communication is still important. Workers could use the glasses as a secondary layer of communication to make sure everyone is getting the correct information, and the same information can be accessed through an app on an Android device.
This is important as more and more industries are turning to technologies such as digital twinning, AI, AR, and VR to be more productive and efficient.
Through its software, XRAI Glass records conversations so that users can easily recall past interactions. The company offers three subscription plans: Essential, Premium, and Ultimate. The Essential plan, which is free, provides a basic screen-duplicate mode and unlimited transcription.
The Premium plan, which is priced at £19.99 a month, includes 30 days of conversation history and unlimited transcription, and it also comes with a variety of additional features such as 3D support and translation into nine additional languages.
The Ultimate plan, which costs £49.99 a month, comes with everything that the Premium plan has. It also includes unlimited conversation history, cloud-enhanced transcription, and a personal AI assistant.
For more information about XRAI Glass, check out the company's website.
By Bobby Carlton
During its annual Snapdragon Summit, Qualcomm unveiled its latest technology, the AR2 Gen 1. The new platform paves the way for a next generation of AR glasses that are lighter, less bulky, and more stylish, and that could be used in our day-to-day lives and at work.
According to Hugo Swart, who leads the company's XR business, the AR2 Gen 1 platform is the first of its kind for developing thinner, lighter AR glasses that look more like the normal glasses we see today. He noted that creating these day-to-day wearables is a very different challenge from building headsets like the Quest or HTC's VR devices.
One of the biggest challenges designers face in creating wearable technology is power consumption. Through a multi-chip design that delivers a 2.5x increase in AI performance, the company was able to reduce power consumption, which could allow manufacturers to build glasses that are both more capable and more lightweight.
The AR2 Gen 1 platform is designed to split the computational load between three co-processors in the frames, letting Qualcomm deliver a more efficient and powerful foundation for AR glasses. It features an AR processor that handles features such as graphics and visual analytics, and it can support up to nine cameras for monitoring your surroundings.
According to the company, the new AR2 Gen 1 chipset won't deliver the same level of performance as the current generation of virtual reality headsets. For instance, while it will give users more accurate scanning and depth sensing, it won't provide the same level of graphical detail.
To make the next generation of AR glasses a success, the company is relying on support from host devices such as smartphones and computers. The AR2 Gen 1 can offload graphics processing to those devices over Wi-Fi 7, which supports connection speeds of up to 5.8Gbps. This helps reduce latency and provides a more natural, responsive experience.
The AR2 Gen 1 platform will also support eye tracking for security features such as iris authentication. This could allow users to unlock their AR glasses with a glance, and the same capability could serve other features depending on how the glasses are used.
Before starting work on the next generation of augmented reality glasses, Qualcomm's chips had already powered products such as the Nreal Light and Lenovo's A3. During a briefing with reporters, the company's marketing director, Chadd Swart, noted that those earlier efforts have not been able to deliver comparable performance when it comes to battery life.
Getting involved in the ecosystem allows tech companies to provide their customers with the best possible experience. This is also beneficial for Microsoft, as it allows the company to develop new products and expand its reach beyond the HoloLens program; earlier in the year, Microsoft partnered with Qualcomm on custom chipsets for future AR products.
Another company with a new AR headset powered by the AR2 platform is San Francisco-based startup Niantic Labs. The company's Outdoor AR headset is designed to be light and portable, featuring a sleek, modern design and weighing only 0.5 pounds.
Qualcomm also unveiled its new S3 and S5 Gen 3 sound chipsets. Their features include spatial audio that tracks the user's head movements, adaptive noise cancellation, and lower-latency audio that games can take advantage of.
Although the company's next generation of AR glasses will offer a wide range of features, it's not yet clear whether the technology can match the performance of today's headsets. Even so, the innovation Qualcomm has brought to the table could lead to a new era of technological change.
You can learn more about Qualcomm's Snapdragon AR2 Gen 1 platform by clicking here.
By Bobby Carlton
The warehouse is a critical part of the supply chain: it is where inventory is stored and orders are fulfilled. In recent years, many industries have been introducing technologies such as VR/AR/MR, digital twins, real-time simulation, 3D AI, automation, and robotics into the warehouse, a trend driven by the need to improve efficiency and productivity while reducing costs and keeping human employees safe.
Robotics and automation can help improve accuracy in picking and packing orders. Amazon, for example, is turning toward robotics to assist employees by taking on more of the tedious and repetitive tasks found in its warehouses. Robots can also help reduce the time it takes to fulfill an order, and in some cases even reduce the amount of inventory that needs to be stored in the warehouse.
There are many different types of robots that can be used in the warehouse. The most common type is the articulated robot. These robots have a series of joints that allow them to move freely around the warehouse and are often used for tasks such as picking and packing orders.
Another type of robot that is often used in warehouses is the gantry robot. These robots are mounted on a fixed frame and move along a set path. These robots are typically used for tasks such as loading and unloading trucks.
You'll also find line follower robots used in many warehouses. Simply put, these robots use a line to guide them through their daily tasks such as delivering product to bins or sending product off for shipping.
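The steering logic behind a differential-drive line follower can be sketched in a few lines. This is an illustrative sketch, not code from any particular warehouse robot; the sensor model and gain value are assumptions:

```python
def line_follower_step(left_sensor: float, right_sensor: float,
                       base_speed: float = 1.0, gain: float = 0.5):
    """Return (left_wheel, right_wheel) speeds for one control step.

    Sensors report reflectance in [0, 1]; a higher value means that
    sensor is over the guide line. Steering toward the stronger
    reading keeps the robot centered on the line.
    """
    error = left_sensor - right_sensor      # > 0 means the line drifted left
    correction = gain * error
    # Slow the wheel on the side the line drifted toward, speed up the other.
    left_wheel = base_speed - correction
    right_wheel = base_speed + correction
    return left_wheel, right_wheel
```

Real line followers add a proper PID controller and sensor filtering on top of this basic proportional correction, but the idea is the same: the difference between the two readings drives the turn.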
Robots are not the only form of automation that is being used in warehouses. There are also a number of automated storage and retrieval systems (AS/RS) that are being deployed. These systems use a variety of technologies such as XR, lasers, sensors, and conveyors to automate the movement of inventory within the warehouse.
What is important to note here is that the warehouse industry is in the midst of a major transformation. Thanks to advances in technology such as AR and VR, robotics, digital twinning, real-time simulation and 3D AI, warehouses are becoming increasingly automated, with robots and other automated systems taking on an ever-increasing share of the workload. This shift is being driven by a number of factors, including the need for greater efficiency, accuracy, and safety.
One of the most significant benefits of automation is increased efficiency. Automated systems can work around the clock, without breaks or vacations, and can complete tasks much faster than human workers. In addition, automated systems are less likely to make mistakes than human workers, which can lead to significant savings in terms of time and money.
Another benefit of automation is improved safety. Automated systems can eliminate or reduce many of the hazards associated with traditional warehouse work, such as lifting heavy objects or working with dangerous chemicals. In addition, automated systems can be designed to meet or exceed all relevant safety standards.
Finally, automation can help improve the overall accuracy of warehouse operations. By eliminating human error, automated systems help ensure that inventory counts are always accurate and that orders are filled correctly, which has a positive impact on Industry 4.0 goals. This can lead to happier customers and fewer returns.
Automated systems and robotics are becoming increasingly common, as they offer a number of benefits over traditional manual labor. These benefits include increased efficiency, improved safety, and enhanced accuracy. As the cost of automation decreases and the benefits continue to increase, it's likely that we'll see even more warehouses turning to technologies such as XR and digital twinning to improve how automation and robotics fit into the warehouse environment in the years to come.
By Bobby Carlton
Caio Viturino works here at FS Studio as a Simulations Developer and is incredibly focused on, and passionate about, how robotics and artificial intelligence will change everything from warehouse automation to our everyday lives. His robotics journey started when he was an undergraduate in Mechatronics Engineering between 2010 and 2015.
Along with his amazing work here at FS Studio, Viturino is also a PhD student at the Electrical Engineering Graduate Program at the Federal University of Bahia (UFBA) in Salvador, Brazil, supervised by Prof. Dr. André Gustavo Scolari Conceição and a researcher at the Laboratory of Robotics at UFBA.
With industries increasingly expecting robots and AI to play a crucial role in how we work and socialize, we thought it would be important to learn more about what he does, and to dig into where he thinks these technologies are heading.
During our interview Viturino first explains how he ended up on this path with robotics saying, "Shortly after my bachelor's degree, I started a master's degree in Mechatronics Engineering in 2017 with an emphasis on path planning for robotic manipulators. I was able to learn about new robotic simulators during my master's, like V-REP and Gazebo, and I also got started using Linux and Robot Operating System."
In 2019 Viturino started a Ph.D. in Electrical Engineering with a focus on robotic grasp. He primarily used ROS (robot operating system) to work with UR5 from Universal Robots and Isaac Sim to simulate the robotic environment. "In my work, I seek to study and develop robotic grasping techniques that are effective with objects with complex geometries in various industrial scenarios, such as bin picking."
At first, Viturino was hired as a consultant here at FS Studio in July of 2022 to work on a project for Universal Robots using Isaac Sim. After the conclusion of this work, he was hired to work on projects involving artificial intelligence and robotics that are related to scenario generation, quadruped robots, and robotic grasping.
He tells me that he primarily uses the following for most of his research:
PyBullet - An easy-to-use Python module for physics simulation, robotics, and deep reinforcement learning, based on the Bullet Physics SDK. With PyBullet you can load articulated bodies from URDF, SDF, and other file formats.
Isaac Sim - A scalable robotics simulation application and synthetic data generation tool that powers photorealistic, physically-accurate virtual environments to develop, test, and manage AI-based robots.
Isaac Gym - Provides a basic API for creating and populating a scene with robots and objects, supporting loading data from URDF and MJCF file formats.
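All three tools can load robots described in URDF, which is plain XML. As a rough illustration of what that format looks like, here is a sketch that parses a minimal hand-written URDF using only Python's standard library; the robot description is made up for the example, and real simulators do far more than list names:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written URDF describing a one-joint arm.
URDF = """
<robot name="mini_arm">
  <link name="base"/>
  <link name="upper_arm"/>
  <joint name="shoulder" type="revolute">
    <parent link="base"/>
    <child link="upper_arm"/>
    <limit lower="-1.57" upper="1.57" effort="10" velocity="1.0"/>
  </joint>
</robot>
"""

def describe_robot(urdf_text: str) -> dict:
    """Extract link and joint names from a URDF document."""
    robot = ET.fromstring(urdf_text)
    return {
        "name": robot.get("name"),
        "links": [l.get("name") for l in robot.findall("link")],
        "joints": [(j.get("name"), j.get("type")) for j in robot.findall("joint")],
    }

print(describe_robot(URDF))
```

A simulator consumes the same structure but also reads the geometry, inertia, and limit tags to build a physically simulated articulated body.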
I asked Viturino about his current work with PyBullet and Isaac Sim, including teaching quadrupeds to walk. Why is this work important to him, and why is robotics important in general?
"Robots will not be a replacement for the human labor force but will aid in difficult or repetitive tasks," said Viturino. Just recently Amazon announced their new AI powered robot called Sparrow, designed to do exactly what Viturino is saying here.
He then tells me that for robots to perform these tasks, it is necessary to develop their kinematic and dynamic models and to test route-planning algorithms so that a robot can go from point A to point B while avoiding static and dynamic obstacles, among other difficult tasks.
These algorithms will require time and significant investment to implement in real-world scenarios. Robotic simulators will lower these costs and risks by enabling all of these algorithms to be tested in simulation before being implemented on actual hardware.
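The route-planning step Viturino describes can be illustrated with a toy planner: a breadth-first search over a small occupancy grid, which finds a shortest path from A to B around static obstacles. The grid and coordinates here are invented for the example; real planners work in continuous space and must also handle dynamic obstacles:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid.

    grid: list of strings, '.' = free cell, '#' = obstacle.
    Returns a list of (row, col) cells from start to goal,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == '.' and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

warehouse = [
    "....#",
    ".##.#",
    ".....",
]
print(plan_path(warehouse, (0, 0), (2, 4)))
```

The appeal of testing this in simulation first is exactly what Viturino points out: a planning bug here costs nothing, while the same bug on real hardware can wreck a robot.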
In a previous post on the FS Studio blog, Viturino and I talked about NeRFs. One question I had for him was how NeRFs and robotics combined will change the world of automation, and whether there is a way to speed up the creation of robotic simulations.
"Robotic simulations are being used more frequently as a means of training and testing mobile robots before deploying them in the real world. This is known as sim2real. For instance, we could create a 3D model of a warehouse and then train various robots in that environment to plan routes, recognize objects, and avoid collisions with dynamic obstacles."
One thing to mention is that the process isn't that simple. NeRFs can help a lot here, since they make it easy to quickly obtain a 3D model of the surrounding area, but modeling an environment from scratch can take a lot of time and money.
When asked about his passion for robotic grasping, trajectory planning, and deep learning, Viturino tells me that deep learning enables the practical use of several algorithms that would otherwise only be effective in specific environments or situations. For instance, a classic robotic grasping algorithm needs the physical properties of objects, such as mass and dynamic and static attributes, to work. These properties are impossible to obtain for unknown objects.
Artificial Intelligence allows robots to perform grasping tasks without worrying about physical properties that are difficult to obtain. These algorithms are getting better at working with any object and in every environment.
However, there is a lot left to explore before we find a complete solution for all robotic problems, or to put it another way, a single algorithm that plans routes, executes grasps, and identifies obstacles, among other things, in the style of DeepMind. In addition, the computational performance and reliability of these algorithms still limit their practical use. That said, Viturino explains that the gap between industry and academia has been closing significantly over the past few years.
When we think of robots we can use in everyday life, we think of things like an iRobot Roomba vacuum that keeps our floors clean, or Piaggio's Gita robot that follows you around carrying your groceries or your computer. But truthfully, we would all love the day when we can have our own astromech droid like R2-D2 as an on-the-fly problem solver and companion. I asked Viturino: how far are we from this?
"I think we have a gap where the pieces still don't fit together completely. Imagine that each component of this puzzle is a unique algorithm, such as an algorithm for understanding emotions, another for identifying people, controlling the robot's movements, calculating each joint command, and determining how to respond appropriately in each circumstance, among others."
According to Viturino, the pieces still need to be very carefully built and developed so that they can be assembled and fit together perfectly. "I think we won't be too far from seeing something like the sci-fi movies, given the exponential growth of AI in the last decade." Granted, we won't get anything like R2-D2 anytime soon, but you could paint a Gita robot to look like R2!
But it does take me to my next question. I personally own an Anki Vector AI robot. He's been in my family since the beginning and we've all come to really love Vector's presence in the house. I wanted to know Viturino's thoughts on more robotics like Vector, Roomba and My Gita becoming more popular as a consumer product.
He explains that this greatly depends on how well this type of technology is accepted by the general public; the younger generation is more receptive to it. Price and necessity are also important considerations when purchasing these robots.
Viturino then says that the robotics community will need to demonstrate that these robots are necessary, much like our cellphones, and are not just a novelty item for robotics enthusiasts like us. This technology should be democratized and easily accessible to all.
A company in Brazil by the name of Human Robotics is heavily focused on building robots for commercial use in hotels and events as well as domestic use, such as caring for and monitoring elderly people. However, Viturino doesn't think the population is completely open to this technology yet.
He's right, there's still some hesitation on using robots for daily tasks, but there is some traction.
Viturino thinks that as artificial intelligence advances, we will get by with simpler sensors. Today it is already possible to create a depth image with a stereo RGB camera, or to synthesize new views from sparse RGB images (NeRF). He believes the day will come when we will need only a single camera to get all data modalities.
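The stereo-depth idea he mentions boils down to triangulation: a point's depth is the camera's focal length times the baseline between the two lenses, divided by the point's disparity between the left and right images. A minimal sketch, with made-up camera parameters:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth via stereo triangulation: Z = f * B / d.

    focal_px: focal length in pixels.
    baseline_m: distance between the two cameras in meters.
    disparity_px: horizontal pixel shift of the same point
    between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 12 cm baseline,
# 20 px disparity puts the point 4.2 m away.
print(depth_from_disparity(700, 0.12, 20))
```

The hard part in practice is not this formula but finding the matching point in both images, which is where learned stereo methods have made the biggest difference.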
"There are other technologies, particularly in autonomous vehicles, such as passive thermal cameras. Despite it, the technology is restricted by armies and governments, and the cost is high. However, it may be a promise for the future."
As we come to the end of our conversation, Viturino brings up his belief that simulation lets us develop, test, and go beyond imagination without fear of damaging robots and equipment, which could cost a lot of money, or even someone's job, depending on the damage. Once ideas have been tested in simulation, the software is ready to be deployed on the hardware.
As for his work in robotics and AI, and closing the gap of what's possible now and the future of what we hope for, he believes that NVIDIA is working to develop ever-more accurate simulations through the use of their PhysX library, which is now available as an open-source version 5.1. As a result, the gap between simulation and reality will close more and more, increasing the reliability of robotic applications.
"We are in an era where we must be bold and creative to overcome the limits already reached, with agility and teamwork."
You can learn more about Caio and his work by checking out his Github page.
By Bobby Carlton
BMW has created a way for drivers to sit behind the steering wheel of a moving vehicle while wearing a VR headset designed to enhance the driving experience. It puts the automaker on a new path as it explores XR technology and the rise of passenger experiences and self-driving vehicles.
The German automaker recently unveiled a new way for people to experience its M2 through a VR headset while actually driving the car. The M2 Project, part of the automaker's "M Mixed Reality" initiative, lets people drive a real car, with your foot on the gas pedal, braking, steering, using the turn signals, even the radio! Except instead of seeing the real world, you're driving through a futuristic city. Check out the video below!
It sounds totally sketchy to be behind the wheel of any car while wearing something over your face that blocks out your real-world environment, but this wasn't designed for normal streets; BMW designed the VR experience to be used on the company's test track. The software can adapt and re-create virtual courses for locations around the world.
With that said, you could absolutely see something like this for your passengers!
Thanks to computer vision and simultaneous localization and mapping (SLAM), the car has some safety triggers built in to make sure you don't get into an accident. As an added layer of safety, a BMW employee also rides in the passenger seat to watch the road and press an extra brake pedal in front of them.
One company that sees the potential of VR and the passenger economy is Holoride. It first announced its work back in 2019, showing how passengers could access VR experiences as part of in-car entertainment. Since then, the company has improved the experience using HTC's Vive Flow VR headset and just recently launched its in-vehicle VR entertainment system in Germany. Owners of select Audi vehicles can purchase the Holoride Pioneers' Pack, which includes everything needed to transform the car into an "always-in-motion virtual space" where you can play games, browse the web, and more.
The inspiration for the BMW project came from the company's digital city, known as M Town. Alex Kuttner, the engineer who developed the VR experience, said BMW fans wished they could visit the city if it were a real place.
“M Town is a mindset,” Kuttner said in an official BMW press release. “It’s a town where everything is possible, and that was the moment I realized we aren’t only here for selling products. We’re here for selling emotions and experiences. These two things combined in mixed reality are only the start of something really great in the future.”
Almost two years ago, the company started working on the mixed-reality project, which was initially intended for the M5 model but was ultimately built around the M2. According to the company's executives, the project could also be used to help drivers in racing competitions and training courses.
Frank van Meel, the company's CEO, said that the goal of the project was to give employees a chance to explore new ideas without having to think about the business case for each new innovation.
“I think the interesting thing is now we have an answer, and the question is, what is the question to this answer?” van Meel said. “There are so many ideas. We haven’t found the final answers, but we’re working on all of these kinds of ideas.”
Although the experience isn't yet available to the general public, the company invited several prominent gamers and content creators to participate in its development. One of them was Cailee, a popular Twitch streamer and member of the G2 esports team. She said that she had played video games in VR before, but this was the first time she had used the technology behind the wheel of a real car. She believes that other games could also benefit from it.
“It’s just the most insane experience I’ve ever, ever had,” she said. “I play Rocket League, I’ve sunk so many hours into it and everything, but I really cannot describe the experience that I had in Munich.”
BMW hosted a demo of the mixed-reality experience in Lisbon. Those who tried it came away impressed, with everyone noting how closely the virtual course matched the actual driving experience, from speeding up and slowing down to taking turns.
The first lap of the course featured a variety of obstacles that people had to avoid. To help "gamify" the experience, drivers had the task of collecting coins along the way. On the second and third laps, a timed element demanded that drivers accelerate, which provided them with the real feeling of racing. The suspension of reality allowed drivers to feel more comfortable with the way they drove.
According to David Hartono, the creative tech director of Monogrid, BMW's interest in gaming is evidenced by the company's decision to turn the vehicle's internal display into a gaming screen: last month, BMW partnered with AirConsole to let players play games on the in-car display. He noted that BMW's use of VR technology could help reinforce its image as an innovative, pioneering company.
Sean MacPhedran, a senior director at SCS, a digital agency based in California, praised BMW's mixed-reality experience, saying it was a step up from traditional ways of showcasing luxury cars, and that it highlighted the company's capabilities in a more consumer-friendly manner.
“With BMW doing so much work with mixed reality and industrial 4.0, it’s hard to telegraph that to a consumer,” MacPhedran said. “A consumer doesn’t care about all the stuff you’re doing to make a car that much better. It kind of reminds me of how they show the car in the wind tunnel, but now they’re doing this to show how advanced a car is.”
Several car brands have started experimenting with VR and AR in the past couple of years. In 2017, Lucid, a luxury electric vehicle startup, opened its New York City showroom to allow people to explore its virtual models. In 2022, Porsche and Audi announced that they would be partnering with the startup Holoride to develop in-car VR systems that would be used to give passengers an incredible immersive experience.
Nissan, meanwhile, turned to AR to show potential car buyers how safe its cars are, and used VR as a fun, tongue-in-cheek way to start a conversation about recruiting mechanics.
Along with the auto industry, we are also seeing aviation look at VR technology to change the way their customers experience air travel.
According to Mike Ramsey, an analyst at Gartner, car companies are constantly looking for ways to keep up with the technological advancements that are happening in the industry. However, he noted that BMW was one of the first companies to invest in both virtual and augmented reality systems. Despite the company's early involvement in the technology, he believes that the company's use of VR is more about brand building and setting their targets on the rise of XR and passenger experiences.
“It’s one of those things that every single car company is investing in but nobody has figured out what the business value is,” he said. “Augmented reality, virtual reality, all of these technologies. For a company that has a performance-oriented orientation, they’re going to look at that as a way to expand their brand beyond the physical to wherever you travel, so to speak.”
According to Heiko Wenczel, the director of the UE business at Epic Games, the BMW experience was built using real-time sensors and interactions with the car's surroundings. He said that being able to test and experience something in virtual reality is very beneficial for developing real-world products.
“You can translate that into any part of the manufacturing and automotive world,” Wenczel said. “Like when you design you get real-time feedback automatically, like what that is, and the human scale of designing cars and like understanding what mobility will be in the future needs that kind of interaction in real-time.”
Rather than rushing to develop something, companies often need a rationale for a project and a sense of how it can make money. According to van Meel, it's important to start with a low-budget project to avoid over-investing. Although BMW wouldn't say exactly how much it spent, van Meel noted that the budget was relatively lean.
“If you take a step back and you say, well, it’s not finished yet, but I can see a lot of creativity and a lot of potential that is still a little bit unclear,” said van Meel. “You just should let it happen if it’s not insanely expensive, of course, because then you need to make decisions right away.”
With more automakers looking at XR technology as part of the in-ride experience and focusing on the passenger economy, and companies like Einride advancing driverless technology, the automotive industry is entering a brand new phase.
By Bobby Carlton
During the Delivering the Future Conference in Boston, Amazon showed off Sparrow, a new robot that will one day play a crucial role in helping Amazon workers by handling some of the more tedious and mentally draining tasks found in a warehouse environment.
According to Amazon, Sparrow uses artificial intelligence and computer vision to move products before they're put into a crate. In a video shown at the event, the robotic arm picks up a board game, a bottle of vitamins, a set of sheets, and other typical items you'd find in a company warehouse, then places them in crates.
Sparrow's arm has been designed to pick up items that are generally uniform in shape. However, according to Jason Messinger, the company's robotic manipulation manager, Sparrow can also handle items of varying sizes and curvature.
To grab items, the robot uses strategically placed suction cups on its arm to firmly pick up products, much the way an octopus uses the suction cups on its tentacles to grab a fish or an object.
“This is not just picking the same things up and moving it with high precision, which we’ve seen in previous robots,” said Messinger.
In 2012, Amazon acquired robotics company Kiva Systems for $775 million, and it has been adding more robotics to its warehouse infrastructure ever since. Over time, Kiva evolved into Amazon Robotics, the company's in-house incubator for robotic fulfillment systems.
With global initiatives like the Artificial Intelligence Act pushing warehouses to create a safer and more efficient work process through AI, robotics, automation and XR technology, it only makes sense that Amazon, the second largest employer in the U.S. behind Walmart, embraces and adopts more robotics technology for their workflow.
Along with Sparrow, Amazon showed off a fleet of other new robot models, as well as a variety of other innovations that it believes will help improve the efficiency and effectiveness of its operations.
Using robots like Sparrow isn't about replacing the human workforce. As a matter of fact, Amazon is still very much invested in their human workers. In a recent post on the Amazon blog, the company talked about their employees saying:
The design and deployment of robotics and technology across our operations have created over 700 new categories of jobs that now exist within the company—all because of the technology we've introduced into our operations. These new types of roles, which employ tens of thousands of people across Amazon, help tangibly demonstrate the positive impact technology and robotics can have for our employees and for our workplace. Supporting our employees and helping them transition and advance their career into roles working with our technology is an important part of how we will continue to innovate. (Amazon blog)
The company even offers the Amazon Mechatronic and Robotics Apprenticeship, a 12-week classroom program paid for by Amazon and followed by 2,000 hours of on-the-job training and industry-recognized certifications, helping employees learn new skills and pursue in-demand technical maintenance roles in robotics.
The company's vision is to use robots like Sparrow to implement more automation in its fulfillment centers, improving the efficiency of its operations and reducing its dependency on front-line labor. According to a recent Recode report, the company is worried it may run out of workers to hire by 2024.
In June, Amazon unveiled its first fully autonomous robot, which can work alongside warehouse workers. It also introduced other systems that can move packages. The company acquired Cloostermans, which develops warehouse machinery and robotics.
According to Amazon, about 75% of the items that the company's customers receive through its delivery process are handled by robots like Sparrow. So next time you're up late at night scrolling through Amazon and you decide to finally buy those VR accessories, there's a good chance a robot picked the item for you.