The anatomy of a home robot
It always seems impossible until it is done (N. Mandela)
The second question everyone asks me about temi is why a product like this doesn’t already exist. My answer is simple: temi has existed for years.
Personal robots have existed in our dreams, movies and imagination for generations. Since the day the PC appeared, the only things separating them from reality have been technology and affordability.
Building a robot for the home is probably one of the most complex and difficult challenges in consumer technology. Creating an intelligent, autonomous robotic system like temi means mastering nearly every technical field – electrical engineering, software, algorithms, operating systems (Linux/Android/iOS), optics, AI, mechanics, language processing, machine vision, UX… you name it.
I want to share with you how temi works, hoping that more and more engineers and startups will get into the field of personal robotics which we truly believe will become the next wave following the smartphone.
So, if you want to understand personal robotics, here are a few of my insights that will welcome you into the field.
Where am I?
The first thing any of us do when we enter an unknown area is try to understand where we are. Basically, we are creating a sort of map in our minds and positioning ourselves in it. As we move forward into that unknown area, we are constantly updating this map and the objects around us.
temi is doing the same thing. In professional jargon this is called SLAM – Simultaneous Localization and Mapping – and it is one of the most fundamental yet hardest problems a roboticist must solve. In a living room, for example, there isn’t a lot of free space; temi needs to know where it can pass, including through very narrow areas. Most robots use a single sensor for the SLAM process, but that is not sufficient, because each sensor has its limitations. temi intelligently fuses data from a variety of sensors – LiDAR, 3D cameras, 2D cameras, encoders measuring the wheels’ movements, an IMU – hundreds of times per second, weighing the uncertainty of each sensor and its data, to build an accurate, sub-centimeter map and localize itself in real time.
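To illustrate the principle behind uncertainty-weighted fusion – with a deliberately simplified one-dimensional model and made-up sensor values, not temi's actual filter – here is how several noisy position estimates can be combined so that the result is more certain than any single reading:

```python
# Toy sketch of uncertainty-weighted sensor fusion: each sensor reports
# a position estimate plus a variance, and the fused estimate weights
# each reading by the inverse of its variance. This is the same basic
# principle a SLAM filter applies hundreds of times per second.

def fuse_estimates(readings):
    """readings: list of (estimate, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    fused = sum(est * w for (est, _), w in zip(readings, weights)) / total
    fused_variance = 1.0 / total  # fused result is more certain than any input
    return fused, fused_variance

# LiDAR is precise (low variance); wheel odometry drifts (high variance).
readings = [(2.00, 0.01),   # LiDAR:    x = 2.00 m
            (2.10, 0.04),   # 3D camera
            (2.30, 0.25)]   # wheel odometry
position, variance = fuse_estimates(readings)
```

The noisy odometry barely moves the answer, while the precise LiDAR dominates – exactly the behavior you want when one sensor is temporarily unreliable.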
The second part, after building a map, is to understand where we are semantically.
We, humans, instinctively understand what is happening in front of our eyes – a big oval table with 8 chairs around it, big windows, not much other furniture – we have entered a meeting room. On the other hand – a big oval table with 8 chairs around it, pictures on the walls, open doorways into other rooms – we have entered a dining room. temi understands what it sees by running a set of sophisticated, state-of-the-art deep learning algorithms to recognize objects, areas, situations and the relations between them. temi saves this information in a structured knowledge graph, building an understanding of what it sees and continually improving its perception of the environment over time.
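As a rough illustration, such a knowledge graph can be modeled as a set of subject–relation–object triples, with simple rules reading the room type off the stored objects. The class, rules and object names below are all invented for the sketch; temi's actual semantics are learned by deep networks, not hand-written rules:

```python
# Minimal sketch of a semantic knowledge graph: detected objects and
# their relations are stored as triples, and toy rules over the graph
# label the area the robot is in.

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()  # (subject, relation, object)

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def objects_in(self, area):
        return {s for (s, r, o) in self.triples
                if r == "located_in" and o == area}

def classify_room(kg, area):
    """Toy rule set: the same table reads differently depending on context."""
    objects = kg.objects_in(area)
    if "oval_table" in objects and "whiteboard" in objects:
        return "meeting_room"
    if "oval_table" in objects and "wall_pictures" in objects:
        return "dining_room"
    return "unknown"

kg = KnowledgeGraph()
kg.add("oval_table", "located_in", "area_1")
kg.add("wall_pictures", "located_in", "area_1")
room = classify_room(kg, "area_1")  # "dining_room"
```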
How do I get there?
In order to get from one side of the room to the other, we, humans, instinctively choose the easiest path, stepping over small obstacles and around bigger ones. For robots this is a complex task: the number of possible paths from point A to point B grows exponentially with the size of the environment, so naively enumerating them is infeasible. temi not only chooses an efficient path in a fraction of a second, it also chooses the most natural one. A robot can technically squeeze through very strange places, and the shortest way is not necessarily the fastest. For example, when moving from the kitchen to the living room, you could slalom between the sofas and all the kitchen chairs, or you could take the longer route around them – which is usually both faster and more natural. temi learns the natural paths humans take in order to move naturally in such situations, and improves itself every time.
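A classic way to plan such a path efficiently is grid-based A* search, sketched below. temi's real planner is proprietary, but the idea of encoding "unnatural" areas as higher-cost cells carries over; the grid and costs here are invented for the example:

```python
import heapq

# Sketch of grid-based A* search. Grid cells hold a traversal cost:
# 1 for open floor, higher for "unnatural" areas like a tight slalom
# between chairs, None for obstacles. The cheapest path found is then
# the natural one, not merely the shortest.

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]                    # (priority, cell)
    came_from, cost = {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] is None:   # wall / sofa
                continue
            new_cost = cost[cur] + grid[nxt[0]][nxt[1]]
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                heuristic = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(frontier, (new_cost + heuristic, nxt))
                came_from[nxt] = cur
    path, node = [], goal                      # walk back from the goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

grid = [[1, 1,    1],
        [1, None, 5],   # None = sofa, 5 = tight gap between chairs
        [1, 1,    1]]
path = astar(grid, (0, 0), (2, 2))
```

With these costs the planner goes the long way around the bottom of the grid rather than through the expensive gap – the "natural" route.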
Another aspect is low-energy movement. Driving on a thick carpet, for example, is harder than driving on a bare floor. temi dynamically analyzes the surface it is moving on in order to adapt its control system and choose the lowest-energy, fastest path possible.
Everything described above needs to be done continuously while moving. What happens if Fluffy the dog suddenly runs in front of temi? temi needs to act fast. This is why temi constantly monitors the chosen path for new obstacles, updates its knowledge of the world, and adapts its path to avoid them.
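A toy sketch of that monitor-and-replan loop; the grid cells and the stand-in planner here are invented for the example:

```python
# Sketch of dynamic replanning: while executing a path, check each
# remaining waypoint against newly detected obstacles and replan the
# moment one blocks the route.

def path_is_clear(remaining_path, dynamic_obstacles):
    return not any(cell in dynamic_obstacles for cell in remaining_path)

def step(path, position_index, dynamic_obstacles, replan):
    """Advance one waypoint, or replan if the path ahead is blocked."""
    remaining = path[position_index + 1:]
    if not path_is_clear(remaining, dynamic_obstacles):
        return replan(path[position_index]), 0  # new path from current cell
    return path, position_index + 1

# Fluffy the dog appears at (1, 2) while temi follows its plan:
planned = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
detour = lambda start: [start, (2, 1), (2, 2)]  # stand-in planner
path, i = step(planned, 2, {(1, 2)}, detour)
```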
Simply entering a room and crossing it from one side to the other requires years of expertise, millions of lines of code, dozens of sensors and the best engineers to put it all together. The computing performance and cost revolution of the last few years is what has made this possible.
How can I serve you?
In order for a robot to help you it has to understand what you want, where you are, and who you are.
temi has dozens of features and capabilities, yet it is as simple to operate as turning on a light. temi has only two modes – “with you” or not. Tap its head or say “Hey temi”, and temi enters interaction mode, ready to serve you. To use all the features described above and more, you simply say “Hey temi, call mom”. temi will lock onto your position, plan an efficient and natural path to get closer, follow you to keep you in frame, choose the best communication channel and quality, and initiate the video call.
Oh yes, temi will also recognize who you are and call your mom, not your spouse’s mom. temi uses a multi-modal person recognition algorithm – combining speaker recognition from voice data with face recognition from the camera – to establish your identity with high confidence.
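One simple way to combine two modalities – not necessarily temi's actual method – is to multiply the per-person scores from each and require the winner to clear a confidence threshold before acting. All names and scores below are made up:

```python
# Toy sketch of multi-modal identity fusion: each modality produces a
# per-person confidence, and the combined score must dominate clearly
# before the robot acts on the identity.

def fuse_identity(face_scores, voice_scores, threshold=0.9):
    """Combine per-person scores from two modalities (product rule,
    renormalized); return the best identity only if confident enough."""
    people = set(face_scores) & set(voice_scores)
    combined = {p: face_scores[p] * voice_scores[p] for p in people}
    total = sum(combined.values())
    if total == 0:
        return None
    best = max(combined, key=combined.get)
    return best if combined[best] / total >= threshold else None

face = {"alice": 0.85, "bob": 0.10}
voice = {"alice": 0.90, "bob": 0.05}
who = fuse_identity(face, voice)  # "alice"
```

The point of fusing is that either modality alone can be fooled (similar voices, a face seen from behind), but an agreement between both is much harder to get wrong.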
temi provides a simple and intuitive method for us to interact with it – voice. temi understands when we call it, what we told it, and the intent behind what we actually want. temi is always running in low-power mode, ready for us to call it. This is called “wake-up-word detection” and is implemented directly in hardware to keep power consumption low. Once a detection occurs, temi launches the ASR (Automatic Speech Recognition) and NLU (Natural Language Understanding) layers to understand what the person said and what they meant. temi also uses RGBD algorithms to detect what the user is actually doing – sitting, standing, talking with someone else, etc. – in order to fully understand the user’s intention and respond to their real needs. This is called “Activity Recognition”, and it cannot be done with voice sensors alone.
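After the wake word fires, the transcribed text must be mapped to an intent and its arguments. The sketch below stands in for the real NLU layer with trivial keyword matching; the intents and keywords are invented for the example:

```python
# Minimal sketch of the command pipeline after wake-word detection:
# transcribed text -> (intent, slot). Real NLU uses learned models,
# not keyword prefixes, but the interface is similar.

INTENTS = {
    "call":   ["call", "phone", "dial"],
    "follow": ["follow", "come with"],
    "goto":   ["go to", "navigate"],
}

def parse_intent(utterance):
    """Return (intent, remainder-of-utterance) for a transcribed command."""
    text = utterance.lower().strip()
    for intent, keywords in INTENTS.items():
        for kw in keywords:
            if text.startswith(kw):
                slot = text[len(kw):].strip() or None
                return intent, slot
    return "unknown", None

intent, slot = parse_intent("Call mom")  # ("call", "mom")
```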
temi can also serve you when you are away from it. We developed iOS and Android applications to control temi from a distance. Physical presence is required for the one-to-one pairing, to ensure a high level of security. The mobile app allows a temi owner to sync their contact list with temi and to connect to temi from anywhere in the world. Connecting from afar lets the owner navigate freely around the home and “be there”. Unlike earlier telepresence devices, temi offers navigation without a joystick. You don’t need to be a hard-core gamer to make temi go to the kitchen: simply tap the kitchen icon in the mobile app, and temi will navigate there autonomously. You can also tap on the camera feed, and temi will move in that direction, using obstacle-avoidance algorithms to make sure it doesn’t tip over that vase you got for your wedding. temi implements real-time MQTT (Message Queuing Telemetry Transport) messaging to ensure connectivity with the mobile app over low-bandwidth, high-latency or unreliable networks.
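The useful property of MQTT here is its “at least once” (QoS 1) delivery: an unacknowledged message is simply resent until the broker confirms it. The sketch below illustrates that retry semantics with a stub broker rather than a real MQTT client; the topic name and drop counts are invented:

```python
# Sketch of MQTT-style "at least once" (QoS 1) delivery over a lossy
# link -- the property that keeps app commands reliable on flaky Wi-Fi.
# A real deployment would use an actual MQTT client library.

class LossyBroker:
    def __init__(self, drop_first_n=2):
        self.drops_left = drop_first_n
        self.delivered = []

    def publish(self, topic, payload):
        """Return True (an ack) unless the network 'drops' the message."""
        if self.drops_left > 0:
            self.drops_left -= 1
            return False          # no PUBACK received
        self.delivered.append((topic, payload))
        return True

def publish_qos1(broker, topic, payload, max_retries=5):
    """Resend until acknowledged, like MQTT QoS 1."""
    for attempt in range(1, max_retries + 1):
        if broker.publish(topic, payload):
            return attempt
    raise ConnectionError("broker unreachable")

broker = LossyBroker(drop_first_n=2)
attempts = publish_qos1(broker, "temi/commands", "goto:kitchen")  # 3
```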
The new way to connect
The best way to connect to our loved ones is to physically be there. The next best thing is temi.
Current video chat applications and devices just don’t give us what we want. We want to be hands-free and move around naturally, without worrying about staying centered in the camera frame so the other side can see us. temi offers an exceptional remote presence experience.
First, temi is able to detect where a person is by hearing his or her voice. It uses an array of omnidirectional microphones to capture the sound. The sound waves arrive at each microphone with slightly different delays, allowing temi to calculate an accurate heading to the person and an estimated distance. This also lets temi deliver a great sound experience, with echo cancellation, background-noise removal, and the ability to focus on the person’s voice.
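For a single microphone pair, the geometry reduces to one line of trigonometry: the arrival-time difference and the microphone spacing give the bearing. The spacing and delay below are illustrative; real arrays use more microphones and more robust estimators:

```python
import math

# Sketch of direction-of-arrival from a two-microphone pair: the sound
# reaches the far microphone slightly later, and that delay gives the
# bearing via basic geometry (far-field plane-wave assumption).

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def bearing_from_delay(delay_s, mic_spacing_m):
    """Angle (degrees) of the speaker relative to the array's broadside.
    delay_s: arrival-time difference between the two microphones."""
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical noise
    return math.degrees(math.asin(ratio))

# 0.10 m spacing; sound arrives ~0.146 ms earlier at one microphone:
angle = bearing_from_delay(146e-6, 0.10)  # roughly 30 degrees
```

A full array repeats this across several pairs and intersects the results, which is what turns a rough heading into an accurate one plus a distance estimate.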
Second, temi adapts its position to capture the best image of that person by intelligently fusing RGB cameras, 3D cameras and LiDAR. It then runs our proprietary person detection and tracking algorithms on the fused data to track the person accurately as he or she moves freely around the room. Moreover, temi uses machine learning to recognize the clothes the user is wearing, so it keeps tracking them efficiently without confusing them with other family members or colleagues nearby.
To ensure a great video experience, the temi team implemented P2P (peer-to-peer) communication with an adaptive bit-rate mechanism that encodes the captured video and audio at a variety of quality levels. temi then selects the optimal encoding depending on the quality, bandwidth and load of the communication line – all in real time during a video call.
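A minimal sketch of such an adaptive bit-rate choice, with an invented encoding ladder and headroom factor:

```python
# Sketch of adaptive bit-rate selection: the sender keeps several
# encodings ready and picks the highest quality that fits the measured
# bandwidth, with headroom left for jitter. Ladder values illustrative.

# (resolution, required kbps), sorted best-first
ENCODING_LADDER = [("1080p", 4000), ("720p", 2000),
                   ("480p", 900), ("240p", 300)]

def pick_encoding(measured_kbps, headroom=0.8):
    """Choose the best rung whose bit-rate fits within the safe budget."""
    budget = measured_kbps * headroom
    for name, kbps in ENCODING_LADDER:
        if kbps <= budget:
            return name
    return ENCODING_LADDER[-1][0]  # always send something

quality = pick_encoding(3000)  # 2400 kbps budget -> "720p"
```

Re-running this selection every second or so as bandwidth estimates change is what makes the call degrade gracefully instead of freezing.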
When we use temi, we move from one room to another, and we’ve all suffered drastic changes in internet quality between rooms. In order to work great throughout your home, temi includes 802.11ac Wi-Fi connectivity (the newest generation, with dual-band wireless technology) combined with LTE (4G) cellular connectivity, so it never loses its connection and always provides a great experience. The beauty of the integrated 4G LTE is that no setup is needed – you can enjoy temi the moment you open its box.
Open to endless possibilities
We, humans, have an extraordinary ability to learn from observation and apply that learning immediately. This ability is called intelligence. Formally, intelligence is one’s ability to acquire new knowledge and skills and apply them in real-world situations. temi is constantly learning about our day-to-day life in order to provide better service. temi autonomously learns how to move better, who the family members are, where people usually sit in the house, and so on. All of this lets temi improve the way it serves and interacts with us: better voice recognition, better understanding, more knowledge about the world, and manners tuned to the family temi lives with.
Improving also means bringing in new experiences. temi’s operating system is open to third-party developers (Android) and provides a whole new platform for apps and games that can move, talk and interact with you. Autonomous navigation, mobility, video and audio capture, voice interaction and advanced AI form a uniquely creative platform, letting developers reach and influence people with experiences like never before: gaming apps that move with you, like hide and seek; educational apps that get you on your feet and make learning fun; tele-medicine; security; and much more. The possibilities are endless.
The community of innovators and developers will make temi an indispensable product. temi’s developer tools therefore include a simple Android-based SDK and API that is easy to use, exposes a lot of functionality, and keeps the interaction safe no matter what the developer is doing. temi skills are developed in Android Studio, minimizing the technical ramp-up.
No one wants to see a robot fall over or run over a foot. For this purpose, temi includes several layers of safety features. temi uses a depth camera and laser distance sensors that monitor the space in front of it, detecting objects above the floor in order to maneuver around them. It detects surface edges (like a carpet’s) and slows down or stops in the case of a step or a hole. temi also implements a strict safety control mechanism that connects directly to the safety sensors and permits the robot to advance only if the space is free, so even a faulty command won’t cause temi to drive over your foot or fall down the stairs.
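Conceptually, that last safety layer is a gate sitting between commands and motors; the sketch below uses made-up distance thresholds, not temi's actual parameters:

```python
# Sketch of a last-line safety gate: every velocity command is checked
# against the raw safety sensors, and the gate scales or zeroes it
# before it ever reaches the motors.

STOP_DISTANCE_M = 0.25   # closer than this: full stop
SLOW_DISTANCE_M = 0.80   # closer than this: scale speed down

def gate_velocity(requested_mps, min_obstacle_m, cliff_detected):
    """Return the velocity actually sent to the motors."""
    if cliff_detected or min_obstacle_m < STOP_DISTANCE_M:
        return 0.0
    if min_obstacle_m < SLOW_DISTANCE_M:
        scale = ((min_obstacle_m - STOP_DISTANCE_M)
                 / (SLOW_DISTANCE_M - STOP_DISTANCE_M))
        return requested_mps * scale
    return requested_mps

safe = gate_velocity(1.0, min_obstacle_m=0.5, cliff_detected=False)
```

Because the gate reads the safety sensors directly, even a buggy high-level command (or a misbehaving third-party skill) cannot push the robot into an obstacle or over a step.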
You wouldn’t want temi to fall when you, or your 3-year-old, push it. This robustness is achieved in the design process by keeping the COM (center of mass) very low: all heavy components sit at the bottom of temi, while all top components are chosen to be very light.
Safety also means that hackers can’t tamper with temi. temi meets top regulatory and quality standards, with smart technology components that protect the device from hacking threats. We encrypt your data when it is sent over the internet and while it is stored on our cloud platform, and we use Amazon Web Services to help create a secure cloud infrastructure.
For some it may look like magic. For us it is just another day at the office, living the dream.
Want to be the first to meet temi? Join our Early Adopter Program