Augmented reality (AR) is a computer graphics and machine vision technology that spatially registers and overlays computer-generated graphics onto scenes of the real world. Boeing recently investigated AR as a tool for conveying design intent to the builder so the product can be built right the first time, every time. By spatially registering and overlaying 3-D models, text instructions, callouts, and similar content, products can be built faster and with fewer errors.
Virtual reality (VR) is a technology that allows a user to interact with a completely computer-simulated environment. A virtual environment is an effective way to simulate many possible scenarios and multiple configurations, and to transfer information to personnel, without the need to procure and fabricate physical artifacts.
Together, AR and VR are disruptive technologies that offer a compelling display medium for enhancing manufacturing technicians' tasks. User interfaces such as 3-D head-mounted displays or 2-D flat screens provide an effective man-machine interface that merges virtual content with the real world.
There is a need for a reliable process with the capacity to analyze and quantify the relationships and cost/benefit trade-offs among the various infrastructure components of manufacturing and service delivery. Part of assessing an AR/VR approach is ensuring a high degree of continuity of operations, especially on unique jobs where a lapse of time between mission-critical tasks imposes a steep learning curve on technicians. Such an approach also provides a rich digital thread from requirements analysis through disposal.
The combination of VR and AR allows processes to be digitized and exercised in a virtual world, with AR enhancing the real environment by integrating approved virtual prototypes into manufacturing. It also supports surge production requirements by rapidly organizing, developing, and delivering quality products to meet customer needs, allowing the builder to build identical or unique parts right every time.
On the shop floor of Boeing’s satellite production facility in El Segundo, CA, areas of persistent rework were identified, and various methods of focusing human cognition on the desired task instructions were investigated as technologies to reduce the risk of rework. AR provided synthetic vision to shop floor technicians, improving their situational awareness and giving them a hands-free, point-of-use information display that makes visible hazards that would otherwise go unobserved.
The use of virtual reality technologies such as stereoscopic visualization and intuitive 3-D geometric data interaction has been limited to formal product reviews, due to the cost and limited availability of these advanced capabilities and resources among customers and suppliers, as well as biological and ergonomic issues.
There is a need for all participants in the product definition and review process to have ready access to these capabilities in their daily desktop product and manufacturing process definition activities while working with and maintaining configuration-controlled authority data. This access gives all decision makers the ability to maintain product configuration control while performing interactive, collaborative product development.
AR systems assist manufacturing teams responsible for intervening in and correcting problems as they occur on the production line. This can be applied to peripheral infrastructure such as cabling, jigs/tools, and other utilities. It can also extend to an information technology backbone that delivers 3-D models and metrics in a feedback loop for quality assurance and part inspection.
On the Intelsat Program, AR is being piloted to show the position of waveguides while nearby wire harnesses are installed. When harnesses are installed before the waveguides, a harness may infringe on the space reserved for a waveguide. Once a wire harness is tied up, it can become rigid and difficult to move. This creates rework in the next cell forward, where the technician is constrained for space and must reschedule the rework in the production line.
A component common to any AR system is tracking the viewer's position so that software algorithms can calculate the correct position of the augmenting graphics, making them appear connected to the real world. In the work Boeing researchers have done so far, the real world was viewed through a high-definition camera whose live image was displayed on a monitor, meaning the position of the camera had to be tracked relative to the viewed scene. The researchers investigated the feasibility of two camera-tracking methods.
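The core of such overlay rendering can be illustrated with a minimal sketch. The snippet below, which assumes a simple pinhole camera model and a hypothetical camera-to-world pose convention (none of which are specified in the article), shows how a known camera pose lets software compute where a 3-D model point should be drawn on screen:

```python
import numpy as np

def project_point(point_world, cam_pose, fx, fy, cx, cy):
    """Project a 3-D world point into pixel coordinates.

    cam_pose is a 4x4 camera-to-world transform; inverting it maps
    world coordinates into the camera frame, after which a pinhole
    model (focal lengths fx, fy; principal point cx, cy) gives pixels.
    """
    world_to_cam = np.linalg.inv(cam_pose)
    p = world_to_cam @ np.append(point_world, 1.0)  # homogeneous coords
    x, y, z = p[:3]
    return np.array([fx * x / z + cx, fy * y / z + cy])

# Identity pose: camera at the origin looking down +Z (assumed convention).
pose = np.eye(4)
pixel = project_point(np.array([0.1, 0.0, 2.0]), pose,
                      fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
# A point 0.1 m right of the optical axis at 2 m depth projects
# 50 px right of the image center: pixel == [1010, 540].
```

In a live system this projection runs every frame, with `cam_pose` updated by whichever tracking method is in use; if the pose is wrong, the graphics visibly drift off the real hardware.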
First, they used a computer vision, marker-less tracking approach in which software tracked interest points in every frame of video and calculated the camera's position from that frame with respect to the world scene. This method had the advantage of not requiring any installed infrastructure. However, it relied on the work piece having a physical appearance close to that at the time the system was trained. As hardware is assembled (or disassembled), the physical appearance may change, and tracking performance degrades.
The second method used was an optical motion capture system consisting of multiple IR light sources and cameras mounted around the perimeter of the work cell and reflective markers attached to the AR camera. The motion capture system observed the markers from multiple viewpoints and calculated the position of the AR camera in 3-D space to a high degree of accuracy.
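The geometric principle behind locating a marker from multiple viewpoints is triangulation: each camera that sees a reflective marker defines a ray, and the marker lies where the rays (nearly) intersect. A minimal two-ray sketch, using the standard closest-point-of-approach construction (the article does not describe the commercial system's internal algorithm), looks like this:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Estimate a 3-D point from two rays (origin o, direction d).

    Solves for the point on each ray nearest to the other ray and
    returns the midpoint of those two points. With more than two
    cameras, a least-squares version of the same idea is used.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    b = d1 @ d2                      # cosine of angle between rays
    d, e = d1 @ w, d2 @ w
    denom = 1.0 - b * b              # rays must not be parallel
    s = (b * e - d) / denom
    t = (e - b * d) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Two cameras 2 m apart both sight a marker at (0, 0, 2).
marker = triangulate_midpoint(
    np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 2.0]),
    np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 2.0]))
# marker is approximately [0, 0, 2]
```

With noisy real-world measurements the two rays never intersect exactly, which is why the midpoint (or a least-squares fit over all cameras) is used rather than a true intersection.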
This method had the advantage of not relying on the camera image to determine position; it could therefore withstand changes in the hardware's appearance and people temporarily blocking the camera's view. The camera can be positioned or moved anywhere within the tracking volume without regard for its field of view, and it can be thought of as a window into the designer's models, providing views of the digital data that are registered to the real world.
While using a motion capture system to perform camera tracking has its advantages, it also presents new challenges that must be addressed if the overlaid graphics are to be registered accurately. In computer vision tracking, the camera pose is usually calculated in the same coordinate frame as the interest points being tracked (which is almost always that of the work piece). If the work piece moves, so does its coordinate frame, which automatically carries through to the camera pose in that frame.
In this situation, the location of the work piece and its coordinate frame in the real world (production footprint) is not relevant. This allows for a fairly straightforward overlaying of graphics and models that also reside in that coordinate frame. When using a motion capture system to track camera pose, the resultant transformation matrix is given in the motion capture system frame of reference. In addition, the coordinate frame of the work piece is no longer known and its location in the real world becomes highly relevant, meaning the location of the work piece must also be tracked.
Once the pose of the camera and the work piece is known, all the information needed to calculate the coordinate transform from the camera to the work piece is available.
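That final calculation is a composition of homogeneous transforms. A minimal sketch, assuming both poses are reported as 4x4 transforms into the motion capture frame (the matrix conventions and numbers here are illustrative, not from the article):

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses reported by the motion capture system:
# camera 1 m along mocap X, work piece 2 m along mocap X, no rotation.
T_mocap_cam = make_pose(np.eye(3), [1.0, 0.0, 0.0])
T_mocap_work = make_pose(np.eye(3), [2.0, 0.0, 0.0])

# Camera-to-work-piece transform: camera frame -> mocap frame -> work frame.
T_work_cam = np.linalg.inv(T_mocap_work) @ T_mocap_cam
# The camera sits at x = -1 in the work piece's frame, as expected.
```

Once `T_work_cam` is known, graphics authored in the work piece's coordinate frame can be rendered from the camera's viewpoint, and both objects are free to move independently so long as the motion capture system keeps tracking them.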
This article is based on SAE technical paper 2011-01-2656 by Paul Davies and Lorrie Sivich, Boeing.