


In this article, the team at FS Studio shares ideas on increasing the speed and efficacy of training and deploying robots that improve workplace efficiency and productivity. It explores possible solutions, including an AR application using the Magic Leap, to demonstrate the potential of AR as a platform for setting up, training, and working with workplace robots.



Workplace robots are undoubtedly the future of increasing organizational efficiency and productivity; however, there are several problems that developers and managers face when it comes to training these robots at scale and acclimating robots to their workplace environment.

Most of the problems arise because robots are inherently spatial. They move, their arms change shape and position, and they are generally designed to manipulate or move things in different ways. Current computing methods are not inherently spatial, and we struggle to adapt them to the unique problem of giving instructions to machines whose work is inherently spatial.



3D environments are a commonplace way of training robots, and one of the significant cost and labor factors is modeling the 3D environment to match the real environment the robot will be used in.



Often, this stage is left until after delivery of a robot and accompanies a lengthy setup period where the robot is unproductive, and the workspace is also tied up as the robot is trained in place.



Even after the robot is programmed and trained, acclimating it to its workplace environment takes a long time. In-place training requires the robot to be set up in the work environment, which means neither the robot nor the workplace can be productive until training is done.




Possible Solutions & Our Full Vision

Our main premise is that sufficiently sophisticated AR technology is well suited to interacting with and controlling robots because both are inherently spatial: spatial computing is a natural fit for spatial machinery.


Imagine ordering an industrial robot and receiving an AR headset with a pre-installed app designed for your robot. You put the headset on, and it guides you through evaluating the robot's environment, identifying safety issues, laying out where the robot will work and where it can't go, and even training a virtual version of the robot on the specific jobs it will do, all before the real robot is even uncrated.


Even once the real robot is in use, the operator would still use the AR headset to give instructions to the robot. Instructions given to robots are usually complex and are once again spatial in nature. Advanced robots also use visual SLAM techniques for machine vision, so there is a good match with how AR works.


Being able to simply look at things, point at them, pick them up and put them down and have the AR headset see and interpret what you are doing relative to the environment goes a long way toward allowing for a system that can then instruct the robot to do the same kind of thing.


Trying to describe this kind of interaction using flat screens, mice, and keyboards is frustrating, error prone, and unintuitive. AR headsets bridge the world between robotics and humans like no other technology can.




We postulate that AR technologies such as the Magic Leap or HoloLens have the potential to streamline the entire process of training and implementing workplace robots. These technologies let an organization do much of the prep work well before the robot even arrives on site. The business imperative is that this decreases the amount of time a workplace area must be closed off for robot training; for some organizations, even a few days of downtime can be make or break.


This technology would allow the operator to walk around the work environment wearing an advanced AR headset while a 3D mesh corresponding to the environment is created in real time and visualized in the headset; the operator can then set up robot workflows and other tasks directly in the headset.


For example, the operator can point at various locations on the floor or walls to identify path markers or workstation locations, and a virtual version of the robot can then be deployed in AR to follow the path markers or positioned to set up manipulation tasks at the workstations.
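The core of placing such a marker is a simple ray-to-surface intersection. As a rough sketch (the real app would raycast against the live Magic Leap mesh; here a flat floor plane stands in for it, and all names are illustrative):

```python
import numpy as np

def place_marker(ray_origin, ray_dir, floor_y=0.0):
    """Intersect a controller ray with a horizontal floor plane at height
    floor_y; return the hit point, or None if the ray never reaches it."""
    o = np.asarray(ray_origin, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    if abs(d[1]) < 1e-9:              # ray parallel to the floor
        return None
    t = (floor_y - o[1]) / d[1]       # distance parameter along the ray
    if t < 0:                         # floor is behind the controller
        return None
    return o + t * d

# Operator holds the controller 1.5 m up and points down-and-forward.
marker = place_marker([0.0, 1.5, 0.0], [0.0, -1.0, 2.0])
```

In the app, each returned point would become a path marker or workstation anchor for the virtual robot.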


This method eliminates the need for a 3D artist to model the environment, and it would improve the accuracy of the 3D model, since the model is updated in real time on location. This first step alone would decrease workplace downtime and increase the efficacy of training the robot.




Robots and workers increasingly work side by side, so it's important to have tools that keep them from coming into conflict. It is already common practice to identify areas within a robotic workspace where robots should never go, but the tools for marking out such areas work on an abstract 3D model of the workplace and don't allow good visualization of the result.


This technology would allow workers to highlight such areas simply by looking directly at the area in the AR headset and drawing the no-go zone on the floor (or on the ceiling).


With the hand- and finger-tracking features of Magic Leap, it is even possible to manipulate the robot arm directly with the operator's hand motions, using inverse kinematics (IK) to position the robot, much as robots like Baxter allow training by physically guiding the real arm. Direct positioning of the robot arm is one of the most intuitive ways for humans to train robots.




It should also be possible to transfer the mesh and other information from an AR app into a VR app for further refinement and training by workers who are not on site. Using a cloud-based approach to storing the environment model, a mix of on-site AR users and remote VR users can be supported.




Other features like image recognition, gaze tracking and voice recognition could also make setting up a robotic workstation much simpler and more intuitive by letting an operator work with a fully functional virtual version of the robot, and then transferring the training to the real robot later once it arrives.


The result would be much less disruption of the workplace the robot is being introduced to, and a much quicker time to the robot being productive in its new location.


Deconstructing Current Implementation


The current proof of concept is implemented using the Magic Leap One (ML1) headset and the Unity game engine. Conceptually, it consists of three main objects: the world, the robot, and the user.

The World


The app is based on one of the Magic Leap examples showing how ‘meshing’ works. Meshing is the process of using the front-facing cameras on the ML1 and stereo photogrammetry to create a 3D representation, or ‘mesh’, of the user's environment that virtual objects can then interact with.


This is ideal for the purposes of a robotic simulator since the robots we want to simulate have their own sensors for detecting features in the real world environment and we can simulate these sensors using the mesh we’re getting from the Magic Leap AR platform.


The Robot


A Fetch Robotics robot is created in the AR environment from its URDF definition, using textures and models from the Fetch Robotics website.


The textures were modified slightly to avoid very dark colors, since these tend to become transparent in AR: the headset's additive display lets the real world show through dark virtual surfaces.


The robot's physics model uses the parameters for mass, damping coefficients, maximum speeds, etc. from the URDF definition to create PID control loops and simulations for each of the robot's limbs, so they move at realistic rates in the simulation. The robot also includes a state machine that allows it to idle, move, be manipulated using IK, and perform other functions as needed.
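To illustrate the control approach (a toy sketch, not the Unity implementation; the first-order plant model and the gains are invented for the example), a discrete PID loop driving one joint toward a target angle looks like this:

```python
def pid_step(setpoint, measured, state, kp, ki, kd, dt):
    """One discrete PID update; state carries the integral and last error."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Drive a single joint toward 1.0 rad; the crude integrator "plant" below
# stands in for the mass/damping parameters read from the URDF.
state = {"integral": 0.0, "last_error": 0.0}
angle, dt = 0.0, 0.01
for _ in range(5000):
    command = pid_step(1.0, angle, state, kp=4.0, ki=0.5, kd=0.2, dt=dt)
    angle += command * dt
```

In the real simulation, each limb would run its own loop of this shape against the URDF-derived dynamics.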


The robot includes a pair of machine vision cameras on its ‘head’, a laser range finder aimed with the head, and a LIDAR-based range finder in the base that scans a 120-degree arc in front of the robot. In our AR simulations, these range finders actually ‘work’, allowing the virtual robot to detect real-world geometry and making it possible to teach navigation-based tasks using 3D data for the actual environment.
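The simulated range finder boils down to casting rays over an arc and keeping the nearest hit per ray. A 2D sketch, with circular obstacles standing in for the world mesh (all numbers illustrative):

```python
import math

def lidar_scan(pose, heading, obstacles, fov_deg=120, rays=25, max_range=5.0):
    """Cast rays over a horizontal arc; return the range to the nearest
    circular obstacle per ray (max_range when nothing is hit)."""
    px, py = pose
    ranges = []
    for i in range(rays):
        a = heading + math.radians(-fov_deg / 2 + fov_deg * i / (rays - 1))
        dx, dy = math.cos(a), math.sin(a)
        best = max_range
        for (cx, cy, r) in obstacles:    # circles stand in for mesh triangles
            # Solve |p + t*d - c|^2 = r^2 for the nearest positive t.
            fx, fy = px - cx, py - cy
            b = fx * dx + fy * dy
            disc = b * b - (fx * fx + fy * fy - r * r)
            if disc >= 0:
                t = -b - math.sqrt(disc)
                if 0 <= t < best:
                    best = t
        ranges.append(best)
    return ranges

# One obstacle 2 m straight ahead of a robot at the origin facing +x.
scan = lidar_scan((0.0, 0.0), 0.0, [(2.0, 0.0, 0.5)])
```

The real version raycasts against the Magic Leap mesh in 3D, but the per-ray nearest-hit logic is the same.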


Using a third-party Unity SDK called Final IK, it is possible to ‘grab’ the end effector with the Magic Leap controller and position it arbitrarily in space, with the arm following via inverse kinematics. The IK is still very coarse, and the model doesn't check for self-intersection or collisions with the environment.


Our robot also includes a UX panel floating above it that is used to view the status of the robot, and to initiate or play back sequences of motions it has been trained for.


The User


The user has two main components. There is the headset and associated children in the hierarchy, and the controller they use to point with or grab things. There are also a few other objects in the AR scene used to display help.


The Headset


The headset tracks the movements of the ML1 as the user moves around the environment. The ML1 is a 6DOF device, so position as well as rotation is tracked. The headset contains the cameras used for stereo photogrammetry, so environment meshing takes place in a cone in front of the user, and they need to move around and look in different directions to fill in the AR mesh.


The headset also contains speakers, and we use positional audio so that sounds come from the objects that emit them, such as the controller or the robot.


The Controller


The controller is used like a standard laser pointer to select and manipulate things that are far away, and as a spatial reference for determining proximity to the end effector to initiate IK positioning.


When used as a laser pointer, the user can point at and interact with the robot UX control that contains familiar 2D UI elements like buttons.


They can also point at any of the joints on the robot and use the thumbpad to rotate the joint directly using forward kinematics.


We use the haptics features of the controller to provide feedback to users.


Next Steps



The most important next step would be to rework the robot's inverse kinematics so that the arm can never self-intersect with other parts of the robot.



Another important feature would be to use the simulated LIDAR data to implement pathfinding and waypoint navigation that takes the real world mesh into account and avoids collisions.
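A common way to build this (a sketch under assumptions, not our code) is to rasterize the LIDAR hits into an occupancy grid and run A* over it:

```python
from heapq import heappush, heappop

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid: grid[r][c] == 1 marks a cell
    blocked by LIDAR/mesh data, 0 is free.  Returns the path as cells."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, None)]
    came, best_g = {}, {start: 0}
    while frontier:
        _, g, cur, parent = heappop(frontier)
        if cur in came:               # already expanded via a shorter route
            continue
        came[cur] = parent
        if cur == goal:               # rebuild the path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                       # goal unreachable

# A tiny map: row 1 is a wall with a gap on the right.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The returned cells would then be smoothed into waypoints for the virtual robot to drive between.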



Allowing the user to do IK on the robot arm using their hands only without the controller would be very desirable.



A lot of work will need to be done to turn our proof of concept into a viable robot training app; however, we unequivocally believe that this methodology will increase efficacy, safety and productivity and will also reduce the time it takes to train and set up a workplace robot.


Recently, the leading B2B ratings and reviews company Clutch announced the most highly-rated software development services companies in 5 major cities in the U.S., including San Francisco, CA.

We’re excited to announce that our team at FS Studio was highlighted as a leading software developer based in SF in 4 major categories!

Emerging technology and advanced software design is our specialty. If there’s a tough technical nut to crack, that’s what we do best. If you’re looking to create an extra-special AR/VR experience, we’ve done it before.

Clutch’s analysts speak to each company’s client references about the business challenges, solutions and results during their time working together. The research firm uses a scoring methodology with factors including clients, market presence, and industry recognitions, strengthening their evaluation. The platform then forms a Leaders Matrix for a particular industry segment and identifies firms with the ability to deliver.

To assess our software development ability, Clutch rounded up feedback from our clients. Here are some comments from the reviews on our Clutch profile:

Custom Development For Boutique Marketing Agency

“FS Studio has deeper technical chops than other development shops, so we trust them when it comes to front- and back-end development. We think their ability to scale resources according to project requirements is remarkable for a development agency that works with creative agencies. A lot of the other development agencies have far less bandwidth.”


Platform Development for LeapFrog Enterprises

“The fact that LeapTV made it into the marketplace last year, with all of the functionality that we had wanted it to have, is a testament to not only their raw development ability but also the teamwork and collaboration that they bring to their projects, the level of communication that they bring to their projects.”


We’re really proud of our team for this recognition by Clutch and the fact that we were featured in 4 segments. We look forward to continuing our success and being recognized again next year!

VR and AR development across multiple platforms has become a big part of our professional portfolio.  We've done VR game development with support development work for integrating specialized peripheral hardware, we've done educational VR titles, and we've done real-time computer vision marker detection and tomography for AR applications.

Being a part of the VR/AR Association is opening up a huge network of collaborators and markets for us, and we're proud to be new members!

Our Space Battle opens a new frontier in the search for the most exciting interactive toys

What’s the future of play? How are advances in engineering changing the world of game design and toys? In an age of touch screens, 3D graphics and augmented reality, it goes without saying that our children are already assimilated to the connected world. I recently spoke to a curator for a children’s museum who reiterated the importance of promoting interactivity in design for educational curriculum to keep up with the pace of entertainment and other media that increasingly make fractal claims on our children’s attention.

“If they can’t touch it and have the object respond,” she told me, “as far as they’re concerned, it’s a poster.”

Every day at FS Studio, we try to tackle the challenges of the next-generation toy. Every day we play Geppetto to our clients’ Pinocchio, trying to conjure something unique and alive out of a magical block of talking wood (in this case, plastic and silicon chips).

Introducing SPACE BATTLE, a smart toy we’re excited to debut on the Artik platform. When we prototyped SPACE BATTLE, we wanted to enhance a toy that by all appearances looked like a regular spaceship toy but with the latest capabilities in AI. Equipped with an optical sensor, SPACE BATTLE uses computer vision to respond to its environment. We also wanted to introduce an engaging storyline that facilitates participation and wonder. In other words, the toy plays with you.

We wanted SPACE BATTLE to have all the autonomy and portability of a regular toy that a child can carry anywhere, regardless of connectivity. The results of our experiment are a window into the ever-expanding world of “enchanted” or smart devices: an evolution from static “dumb” toys to interactive toys fully embedded with sophisticated deep learning capabilities and our own proprietary mix of technology and fun, able to perform complicated tasks while running completely offline (no Internet required!).

How did we do it?

The challenge, of course, was optimization. For the toy to be feasible, it had to run a real-time image recognition process on low-powered devices. That’s where the Artik comes in: its quad-core processors were more than capable. We were able to equip SPACE BATTLE with a convolutional neural net, or “deep net,” that allowed the rocket ship to come alive in your hands.

If tech talk makes your eyes glaze over, feel free to jump to the end; if you want a peek into the magic that made this happen, read on!

It Starts with the Dataset

So what’s happening under the proverbial hood? Let’s start with dataset creation. We undertook the task of “training” our neural network with tens of thousands of images, using a blend of real images and a synthetically generated dataset. Real images are absolutely the best source of training data: you get natural lighting, backlighting, shadows, and all the subtleties that a synthetically generated dataset can miss. However, the synthetic dataset allowed us to create a vast amount of data to augment the real images, training the neural net with offset images, rotated images, varied image sizes, and many, many more backgrounds. These synthetic images were created in a 2D image environment; we are looking at building tools that use OpenGL and 3D environments for synthetic image creation, which would get us closer to real images, especially for things like lighting, camera placement, and foreground occlusion.
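A minimal sketch of the augmentation idea, here using random flips and shifts on a tiny nested-list “image” (the real pipeline worked on full-size images with many more transforms):

```python
import random

def augment(image):
    """Produce one synthetic variant of a nested-list grayscale image via a
    random horizontal flip and a small random horizontal shift (zero-padded)."""
    out = [row[:] for row in image]
    if random.random() < 0.5:          # horizontal flip
        out = [row[::-1] for row in out]
    dx = random.randint(-2, 2)         # shift by up to 2 pixels either way
    shifted = []
    for row in out:
        if dx >= 0:
            shifted.append([0] * dx + row[:len(row) - dx])
        else:
            shifted.append(row[-dx:] + [0] * (-dx))
    return shifted

random.seed(0)
base = [[1, 2, 3, 4], [5, 6, 7, 8]]
variants = [augment(base) for _ in range(100)]
```

Each base image fans out into many training variants this way, which is what lets a modest set of real photos anchor a much larger synthetic dataset.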

The key to modern deep learning techniques is the use of Stochastic Gradient Descent (SGD). Then we tweak, tweak, tweak the training hyperparameters: mini-batch size, number of training iterations, step sizes, alpha, on and on (and we tweak the network architecture similarly).
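Stripped to its essentials, mini-batch SGD is the loop below, shown fitting a toy linear model rather than the real network; the structure (shuffle, batch, gradient step) and the learning-rate hyperparameter are what carry over:

```python
import random

def sgd_train(data, lr=0.1, batch_size=4, epochs=200):
    """Fit y = w*x + b by mini-batch SGD on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)                       # stochastic: reshuffle each epoch
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # Average gradient of (w*x + b - y)^2 over the mini-batch.
            gw = sum(2 * (w * x + b - y) * x for x, y in batch) / len(batch)
            gb = sum(2 * (w * x + b - y) for x, y in batch) / len(batch)
            w -= lr * gw                           # lr is one tuned hyperparameter
            b -= lr * gb
    return w, b

random.seed(1)
points = [(x / 10.0, 3.0 * (x / 10.0) + 1.0) for x in range(20)]
w, b = sgd_train(points)
```

Swapping the linear model for a deep net changes the gradient computation, not the training loop.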

How do we verify the results of all this tweaking? Two ways. First, we set aside a portion of our training data for verification, which gives us a measure of confidence in our accuracy; but good results against held-out data are no guarantee of similar results in the real world. So our final litmus test is getting out there and doing brute-force manual testing in the real world. We get the toy into the hands of as many folks as we can and see what our results are; this is decidedly qualitative, but it’s the best overall measure of our success.
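The first kind of verification can be as simple as this sketch (the fraction and seed are arbitrary choices, not values from our pipeline):

```python
import random

def holdout_split(samples, holdout_frac=0.2, seed=42):
    """Shuffle once and reserve a fraction of the data for verification,
    never shown to the training loop."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

samples = list(range(1000))
train, val = holdout_split(samples)
```

Accuracy measured on the held-out portion is the proxy; the real-world hand testing described above is the ground truth.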

DeepNet Design

So the secret sauce to this whole endeavor is the DeepNet network design. Since we’re running image recognition in real time on a constrained system, the Artik 10 (Cortex A-series quad-core), we had to constrain the DeepNet’s size in both depth and width. What we’ve found is that there’s an element of alchemy and heuristics in network design as well as in dataset creation. But in the end you can prune an awful lot of the network and still get great accuracy while hitting the performance requirements needed to run in real time on an embedded system.

We played with the number of convolutional kernels (the width of the network); fewer kernels means fewer “features” the network can discover, but we’ve found that, counterintuitively, you can actually get better results. The number of layers, or the depth of the CNN, is also drastically reduced compared to what you’d find in larger server-based or unconstrained solutions. The size of the convolution kernels is surprisingly small as well.

On top of all of this, we have an artificial neural net (ANN) for the final classification. For the activation functions, we use rectification for the CNN; for the ANN’s fully connected layers we often experiment with various activation functions, as the best choice is highly dependent on the application itself.
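To make the pruning trade-off concrete, a quick parameter count compares a hypothetical pruned embedded network against a wider server-class variant; both architectures are invented for illustration, not our actual network:

```python
def cnn_params(conv_layers, fc_layers):
    """Count trainable parameters for a small CNN: conv_layers is a list of
    (in_channels, out_channels, kernel_size) tuples, fc_layers of (in, out)."""
    total = 0
    for cin, cout, k in conv_layers:
        total += cout * (cin * k * k + 1)   # k*k weights per input channel + bias
    for fin, fout in fc_layers:
        total += fout * (fin + 1)           # dense weights + bias
    return total

# Pruned embedded net: 2 narrow 3x3 conv layers feeding a tiny classifier.
small = cnn_params([(3, 8, 3), (8, 16, 3)], [(16 * 8 * 8, 32), (32, 4)])
# Wider 5x5 server-class variant of the same shape.
large = cnn_params([(3, 64, 5), (64, 128, 5)], [(128 * 8 * 8, 512), (512, 4)])
```

Even in this toy comparison, the wide network carries over a hundred times the parameters, which is the difference between fitting in an embedded real-time budget and not.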

The Result

The result? A highly accurate real-time image recognition system that works offline on highly constrained devices. Want to add hand-signal control to a camera? Event triggers to low-cost toys? You name it!



Yup, the HoloLens finally arrived, and it was worth the wait. It doesn't have a ton of content yet; of the platforms we have so far (HTC Vive, Oculus, and HoloLens), the Oculus has the most mature content, and the Vive seems to have the most overall by virtue of Steam.

We played around with the HoloLens, setting up holograms around our conference room. Fun! The first thing we noticed is that projected items get a little cut off at the edges of the field of view, but the overall AR is impressive.

Oculus has definitely taken a page out of the Apple playbook for packaging.  The box and the way the product is presented to the new buyer are impressive.  It's a great way to welcome a new user and establish Oculus as a premium brand.  It helps you feel you got your money's worth and are respected as a customer even before you turn the device on.

The materials are premium, the box is held closed with a magnet, and the arrangement inside is superb; it's just a sweet, sweet experience.


By Ollie Barder, originally posted on Forbes.com

There tends to be a perennial meme that emanates from games publishing management, specifically that games are too hard to learn. They still mistakenly think games are a form of passive entertainment and that you can make a game for everyone. This is indeed a strange assessment of the medium and almost entirely wrong.

Gaming is ancient. While videogames are a new branch of the medium, gaming itself dates back to the outset of human civilization.

This is to do with what gaming itself offers. Unlike other media, gaming doesn’t passively affect us. It’s far more direct in terms of what it does.

Specifically, gaming taps into the low-level cognitive processes that facilitate our ability to parse the world around us and that allow us to learn new things. This is often why popular games are quite abstract and even illogical. We crave the ability to understand ambiguous and undefined concepts, and that is a powerful subconscious driving force.

This is also something that various studies have investigated over the years, with the bulk producing results that indicate gaming fulfills an important role in our cognitive development as well as improving our ability to learn new things.

It also helps to explain the difficult dichotomy in modern videogame development: the technical (logical) side of building the game versus the instinctive (subconscious) side of implementing the actual design.

The important thing to take away here though is that games need to be learned and that this act of learning is integral to the appeal of gaming itself.

Now I am not advocating that games be functionally impenetrable. There are design best practices that should be built upon when it comes to implementing and communicating new game systems and mechanics to the player.

My problem is that I don’t think people in publishing management have any idea what those best practices should be.

To give a more personal anecdote, when I started out in the industry over a decade ago I used to get called into exec meetings to “drive” various games. This would mean I would play them so that a room of execs could sit back and watch. During these meetings various execs would ask for sweeping changes to these games based purely on what they saw, rather than from actually playing the games in question.

Now this was over ten years ago and things might have changed. However, hearing the same tired narrative from people in publishing about games being too difficult makes me think it’s sadly still business as usual.

When Gabe Newell said that most gamers have more of a clue than those in games publishing, he was depressingly correct, and he still is.

Games are cognitive firmware; trying to “solve” that through either functional standardization or violent simplicity will result in another unsustainable graphical arms race.

We already know how that panned out at the end of the last console generation.

What’s more, the diminishing returns of recent AAA releases, with The Order: 1886 being a notable example, make it clear that functional standardization has had its day. Pretty visuals are no longer enough to justify a game to consumers; they require more functional variety again.

Much of this misunderstanding directly stems from Newell’s correct appraisal: games need to be played before they can be correctly understood. The unique sense of satisfaction from surmounting a difficult gaming challenge is integral to the medium’s appeal.

Unfortunately it seems people in publishing would rather be working in another medium. Or at the very least change gaming to be something they find to be more palatable, even if it flies in the face of their customers.

There are a few companies that still get that games are meant to be a form of functional abstraction, with Nintendo being a notable example, but it still seems that gaming is badly misunderstood by many tasked with investing in new properties.

This is a huge problem, not only from a creative standpoint but from a business one too. If publishers are unable to deliver the kind of games that people require then that’s a huge loss for all involved.

Games aren’t too hard; in fact, they are easier than they have ever been. The problem we have is that the decision makers in publishing need a better understanding of what games are and what they fundamentally offer to people. Simplifying games will reduce their appeal, as people want new cognitive firmware, not just rehashes.

Interested in creating a game? Contact us for custom software or application design.

iOS developers have been receiving emails from Apple for weeks warning them of the impending forced migration to Xcode 5 and the iOS 7 SDK.  Large development houses have most likely been building against the iOS 7 SDK for months as they push the feature set and update the look and feel of their apps.  What, however, does this forced migration mean for independent iOS developers or small to mid-size businesses with apps in the marketplace?  Is your app going to disappear?  Is there a grace period?  Let’s look at the specifics of what Apple will do.

Apple’s exact words were:

“Make sure your apps work seamlessly with the innovative technologies in iOS 7. Starting February 1, new apps and app updates submitted to the App Store must be built with Xcode 5 and iOS 7 SDK.”

Apple wants as many of its users on iOS 7 and its latest hardware as possible, which means having as many apps built for iOS 7 and the iPhone 5 as possible.   The route Apple is taking is to restrict what it accepts in the App Store.  For comparison, Google does not review apps submitted to Google Play; this simplifies things for developers while creating a different app store experience than that of the Apple App Store.

Apple will only accept new applications that were built using Xcode 5 and the iOS 7 SDK.  In addition, it will only accept app updates built using Xcode 5 and the iOS 7 SDK.  There is no grace period: after February 1, these are the new rules.  That being said, Apple will not remove existing apps from the App Store, and it has been very forthcoming with information on migrating an app from iOS 6 to iOS 7.  For example, it offers a design guide that includes a transition guide.

The short answer is that your app will survive.  The long answer is that if you want your app to succeed, you will need to transition to iOS 7 sooner rather than later.  Most apps that have not transitioned to iOS 7 offer a lower-quality user experience.  Sometimes this comes in the form of a reduced view frame (black boxes on the sides of your app), but most of the time it’s more subtle: the app looks outdated or not as clean as other apps.  The subtle differences are not always readily perceptible to the end user, but they affect the experience and, in the end, how likely that user is to return to your app.

 For businesses, a reduced user experience can lead to a decrease in that user’s activity or engagement with the business and therefore result in reduced revenue.  All of this can be avoided with some fairly straightforward maintenance of your app(s) as updates to iOS come out.  Most iOS applications do not need a complete overhaul to comply with Apple’s impending requirements and to use the iOS 7 SDK.  From a cost perspective, this update is fractional compared to the overall cost of designing and developing the app in the first place and can usually be completed fairly quickly, a matter of days not weeks or months.

A mobile application should be thought of as a dynamic extension of your business, not a one-time investment that sits on “the shelf” and magically draws customers in.

Today’s mobile users are expecting updated content and an updated look and feel.  If you view updates to iOS as an opportunity to improve the user experience you deliver and not as a hassle to just comply, you will retain more customers and create more engagement from your existing customer base.

 Giving your application an aesthetic update is also a fractional cost compared to the original development of the application.

This is part of the maintenance of your app: just as you need to fix bugs, you need to update the outward appearance and usability.  If you do not have an in-house development team to handle these fixes and updates, we offer a variety of services, from one-time migrations to graphical redesigns to long-term service and maintenance contracts covering bug fixes, OS migrations, and general app maintenance. Contact us for more information.

The bottom line is that you should give your application the same attention you want your users to give it.  Your customers will follow your lead; if you ignore it, they will too.

Well, the day I've been anticipating for literally (and I use that word literally) years may FINALLY be coming.  There's some scuttlebutt that the next-gen Apple TV and Apple TV OS will allow for games and game controllers.  This is the kind of move I think Apple needs to make to re-energize not just consumers but the developer community.  Not that the Apple iOS developer community isn't energized; it's just that we always want more.  More interesting platforms, more interesting frameworks, more interesting form factors, more more more please!



The question of what technology stack to use for a new project can quickly become a religious debate.  It is not easy to find objective reviews of the various options available to developers.  Here is one good, objective (from what I can see) overview of the development technology options available to web developers today: https://matt.aimonetti.net/posts/2013/08/27/what-technology-should-my-startup-use/

The article will make the most sense to you IF you have actually developed with, or at least seriously studied, the technologies mentioned.  FS Studio's takeaway from the article (and from my own 25 years of experience) is that there is no holy grail that is best for every project and every developer.  Obviously, some technologies are more popular at any given time than others, and some are derided as old fashioned and not hip.  Is Ruby on Rails better than ASP.NET?  If you love Ruby and don't care about performance or multi-threading, it is a great choice.  Is ASP.NET better than RoR?  If you don't hate working with Microsoft technologies and want better performance than RoR or Django (both of which are based on slower languages), then yes.

In the real world, the technology stack you choose for your application may not be determined by technological merits alone.

Some non-technology factors to consider as you ponder technology options are:

Those are important questions that need to be considered alongside any discussion of the technological merits of different platforms, languages, or frameworks.  For example, in the San Francisco Bay Area, where FS Studio is located, Microsoft and its technologies are widely derided, and most clients and investors tend to support open source alternatives like Ruby on Rails, PHP, Node.js, Go, or pretty much anything not associated with Microsoft.  This cultural attitude has an impact on the types of skills that are easily available in the area, and clients will often state specifically which technologies they prefer.  Those are big factors in what technology I may want to build my next web application in.

What technology are you using to build your next application, and why? Do you need expert developers with experience across a variety of programming languages? Contact us for your application development needs.