Tango: Reality Computing Technology by Google

Tango is an augmented reality computing platform developed by Google. It uses computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals.

At the time of writing, the following devices support Tango:

  • Development Kit tablet
  • Lenovo Phab2 Pro smartphone
  • Asus ZenFone AR

In this blog, we’ll review the platform’s components, key concepts, and use cases.

Tango Components

All Tango-enabled Android devices have the following components:

Motion tracking camera: Tango uses the wide-angle motion tracking camera (sometimes referred to as the “fisheye” lens) to add visual information, which helps to estimate rotation and linear acceleration more accurately.

3D depth sensing: to implement Depth Perception, Tango devices use common depth technologies, including Structured Light, Time of Flight, and Stereo. Structured Light and Time of Flight require the use of an infrared (IR) projector and IR sensor.

Accelerometer, barometer, and gyroscope: the accelerometer measures linear acceleration, the barometer measures altitude, and the gyroscope measures rotation; together, these readings feed motion tracking.

Ambient light sensor (ALS): the ALS approximates human eye response to light intensity under a variety of lighting conditions and through a variety of attenuation materials.

Key Concepts of Tango

Motion Tracking
Motion Tracking allows a device to understand its motion as it moves through an area. The Tango APIs provide the position and orientation of the user’s device in full six degrees of freedom (6DoF).

Tango implements Motion Tracking using visual-inertial odometry, or VIO, to estimate where a device is relative to where it started.

Tango’s visual-inertial odometry supplements visual odometry with inertial motion sensors capable of tracking a device’s rotation and acceleration. This allows a Tango device to estimate both its orientation and movement within a 3D space with even greater accuracy. Unlike GPS, Motion Tracking using VIO works indoors.
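As a concrete illustration, the 6DoF pose that VIO produces can be modeled as a 3D translation plus a unit quaternion orientation. The sketch below (illustrative only; the names are not the actual Tango API) uses such a pose to map a point from the device's frame into the frame where tracking started:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def transform_point(translation, quaternion, point):
    """Map a point from the device frame into the frame where tracking
    started, using a 6DoF pose: rotate by the unit quaternion (x, y, z, w),
    then translate.  Rotation uses v' = v + w*t + q_vec x t, t = 2*(q_vec x v)."""
    qx, qy, qz, qw = quaternion
    qv = (qx, qy, qz)
    t = tuple(2.0 * c for c in cross(qv, point))
    rotated = tuple(p + qw * tc + cc
                    for p, tc, cc in zip(point, t, cross(qv, t)))
    return tuple(r + d for r, d in zip(rotated, translation))

# Example: the device has moved 0.5 m and turned 90 degrees about the
# vertical (z) axis since tracking started.
s = math.sin(math.pi / 4)
pose_translation = (0.0, 0.5, 0.0)
pose_orientation = (0.0, 0.0, s, math.cos(math.pi / 4))

p = transform_point(pose_translation, pose_orientation, (1.0, 0.0, 0.0))
# p is approximately (0.0, 1.5, 0.0)
```

The actual Tango APIs return this same kind of data (a translation and an orientation quaternion between a pair of coordinate frames) through their pose structures.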

Area Learning
Area Learning gives the device the ability to see and remember the key visual features of a physical space, such as edges, corners, and other distinctive details, so that it can recognize the area again later.

To do this, it stores a mathematical description of the visual features it has identified inside a searchable index on the device. This allows the device to quickly match what it currently sees against what it has seen before without any cloud services.
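The idea of an on-device searchable index of feature descriptions can be sketched as follows. This toy example stores binary descriptors and matches a query by Hamming distance; Tango's real descriptors and index are proprietary, and all names here are illustrative:

```python
def hamming(a, b):
    """Number of differing bits between two integer descriptors."""
    return bin(a ^ b).count("1")

class FeatureIndex:
    """Toy on-device index: store binary descriptors of visual features,
    then match a query descriptor against the stored set."""
    def __init__(self):
        self.descriptors = []  # list of (label, 32-bit descriptor)

    def add(self, label, descriptor):
        self.descriptors.append((label, descriptor))

    def best_match(self, query, max_distance=8):
        """Return the label of the closest stored descriptor,
        or None if nothing is within max_distance bits."""
        best = min(self.descriptors,
                   key=lambda d: hamming(d[1], query),
                   default=None)
        if best is not None and hamming(best[1], query) <= max_distance:
            return best[0]
        return None

index = FeatureIndex()
index.add("kitchen corner", 0b10110010101011101111000010101100)
index.add("doorway edge",   0b01001101010100010000111101010011)

# A query descriptor differing from "kitchen corner" in only two bits:
match = index.best_match(0b10110010101011101111000010101111)
# match == "kitchen corner"
```

Because the whole index lives on the device, matching what the camera currently sees against past observations needs no round trip to a server.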

Depth Perception
Depth Perception gives an application the ability to understand the distance to objects in the real world.

Current devices are designed to work best indoors at moderate distances (0.5 to 4 meters). This configuration gives good depth at a distance while balancing power requirements for IR illumination and depth processing.

First, the system uses a 3D camera that casts an infrared dot pattern onto the contours of your environment. As these dots of light travel further from their source (the phone), they become larger. An algorithm measures the size of each dot, and the varying sizes indicate each dot's relative distance from the user, which is then interpreted as a depth measurement. The resulting set of 3D measurements, known as a point cloud, allows Tango to understand the 3D geometry of your space.

The Tango APIs provide a function to get depth data in the form of a point cloud. This format gives (x, y, z) coordinates for as many points in the scene as are possible to calculate. Each dimension is a floating point value recording the position of each point in meters in the coordinate frame of the depth-sensing camera.
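A point cloud in this format is easy to work with directly. The sketch below (plain Python, not the Tango API) builds a miniature cloud of (x, y, z) points in meters and filters it to the 0.5 to 4 meter working range mentioned above:

```python
import math

# A miniature point cloud in the depth camera's coordinate frame:
# each point is (x, y, z) in meters.
point_cloud = [
    (0.1, -0.2, 1.5),   # a point about 1.5 m in front of the camera
    (0.0,  0.0, 0.3),   # too close: inside the ~0.5 m near limit
    (1.0,  0.5, 4.5),   # too far: beyond the ~4 m working range
    (-0.4, 0.1, 2.8),
]

def distance(p):
    """Euclidean distance of a point from the depth camera's origin."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)

# Keep only points within the sensor's reliable working range.
usable = [p for p in point_cloud if 0.5 <= distance(p) <= 4.0]
# usable keeps the first and last points
```

Real Tango point clouds contain tens of thousands of such points per frame, but each one is just a triple of floating-point coordinates like these.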

Tango API Overview

This is the current Tango application development stack:


Tango Service is an Android service that runs in a standalone process. It uses standard Android interprocess communication (IPC) to support apps written in Java, Unity, and C. Tango Service performs all of the core Tango functions, such as motion tracking, area learning, and depth perception, and applications connect to it through the APIs.
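The APIs follow a callback style: an app registers listeners, and the service pushes pose and point-cloud updates to them as they arrive. This toy Python sketch mimics that shape only; the class and method names are illustrative, not the real Tango API:

```python
class TangoLikeClient:
    """Toy illustration of the listener pattern used by the Tango APIs:
    the app registers callbacks, and the service pushes updates to them."""
    def __init__(self):
        self._pose_listeners = []

    def connect_listener(self, on_pose_available):
        """Register a callback invoked whenever a new pose is available."""
        self._pose_listeners.append(on_pose_available)

    def simulate_service_update(self, pose):
        """Stand-in for the service process delivering a pose over IPC."""
        for callback in self._pose_listeners:
            callback(pose)

poses = []
client = TangoLikeClient()
client.connect_listener(poses.append)
client.simulate_service_update({"translation": (0.0, 0.0, 0.0),
                                "orientation": (0.0, 0.0, 0.0, 1.0)})
# poses now holds one pose update
```

In the real stack, the callback crosses a process boundary: the app's listener runs in its own process while the tracking work happens inside Tango Service.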

Use Cases of Tango

Indoor Navigation
A Tango device can be used to navigate precisely through a shopping mall, or even to find a specific item in a store when that information is available.

VR and AR gaming with multiple users
Using Tango’s motion tracking capabilities, game developers can experiment with 6DoF to create immersive 3D AR gaming experiences, transform the home into a game level, or create magic windows into virtual and augmented environments.

Physical space measurement and 3D mapping
Using their built-in sensors, Tango-enabled devices are engineered to sense and capture the 3D measurements of a room, which support exciting new use cases, like real-time modeling of interior spaces and 3D visualization for shopping and interior design.

Marker detection with AR
A Tango device can search for a marker, usually a black-and-white barcode or a user-defined marker. Once the marker is found, a 3D object is superimposed on it. Using the phone's camera to track the relative position of the device and the marker, the user can walk around the marker and view the 3D object from all angles.
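Superimposing the 3D object comes down to transforming its vertices from the marker's local frame into the camera frame using the detected marker pose. The sketch below simplifies the pose to a translation plus a rotation about the camera's z axis (a real system would use a full 6DoF pose; all names are illustrative):

```python
import math

def marker_to_camera(vertex, marker_translation, marker_yaw):
    """Place a virtual object's vertex (defined in the marker's local
    frame) into the camera frame: rotate by the marker's yaw about z,
    then translate by the marker's detected position."""
    c, s = math.cos(marker_yaw), math.sin(marker_yaw)
    x, y, z = vertex
    rx, ry = c * x - s * y, s * x + c * y   # rotation about the z axis
    tx, ty, tz = marker_translation
    return (rx + tx, ry + ty, z + tz)

# A cube corner 0.1 m above the marker's center, with the marker detected
# 1 m in front of the camera and rotated 90 degrees:
corner = marker_to_camera((0.05, 0.05, 0.1),
                          (0.0, 0.0, 1.0),
                          math.pi / 2)
# corner is approximately (-0.05, 0.05, 1.1)
```

As the user walks around, the marker pose changes each frame, so re-running this transform per frame keeps the virtual object anchored to the marker from every viewing angle.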




