UTRC visit, 12 Oct 2017
Several researchers from United Technologies Research Center (UTRC) will be visiting Caltech on 12 Oct (Thu).
- 7:45 am: arrive at LAX; drive to Pasadena
- 9:00 am: Sofie Haesaert, Annenberg Lounge
- 9:30 am: Petter Nilsson, Annenberg Lounge
- 10-11 am: seminar
- 11-11:30 am: Open (seminar follow-up discussions)
- 11:30 am: Aaron Ames and Soon-Jo Chung, 266 Gates Thomas
- Noon: Lunch at the Athenaeum
- UTRC: Claudio, Amit
- Caltech: Gabor Orosz, Sofie, Jin
- 1 pm: CAST lab tour (Kyunam Kim, <firstname.lastname@example.org>)
- 1:30 pm: Robotics lab tour (Aaron Ames)
- 2 pm: done for the day
Autonomy and Robotics Research at United Technologies Research Center
Thursday, October 12, 10-11am
Dr. Andrzej Banaszuk – Senior Director, Systems Department
Dr. Sunil Kukreja – Associate Director, Control Systems
Dr. Claudio Pinello – Associate Director, Cyber Physical Systems
Dr. Julian Ryde – Staff Research Scientist, Cyber Physical Systems
United Technologies Research Center
East Hartford, CT and Berkeley, CA
Abstract: This presentation will give a broad overview of research in UTRC’s Systems Department, with particular focus on autonomous and intelligent systems and robotics. The research is conducted by a diverse team of researchers in dynamical systems, advanced control, applied mathematics, human factors, and robotics. Autonomous and intelligent systems research for aerial and ground robotics includes intelligent system architecture, human-machine systems, perception, collaborative motion planning with dynamic collision avoidance, manipulation, and formal verification. The presentation will conclude with a discussion of existing and future career and internship opportunities in robotics. The Cyber-Physical Systems group, based in Berkeley, CA, will be highlighted, including a more detailed discussion of the RenderMap technology, "Exploiting the Link Between Perception and Rendering for Dense Mapping".
We introduce an approach for real-time (2 Hz) creation of a dense map and alignment of a moving robotic agent within that map, using rendering on a Graphics Processing Unit (GPU). This is done by recasting the scan-alignment part of the dense mapping process as a rendering task: alignment errors are computed by rendering the scene, comparing the rendering with range data from the sensors, and minimizing the error with an optimizer.
The proposed approach takes advantage of advances in computer-graphics rendering techniques and GPU hardware to accelerate the algorithm. Moreover, it exploits information not used by classic dense-mapping algorithms such as Iterative Closest Point (ICP) by rendering the interfaces between free, occupied, and unknown space. The approach leverages the GPU's rendering pipeline directly, in contrast to other GPU-based approaches that use the GPU as a general-purpose parallel computation platform.
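The render-compare-minimize loop described above can be sketched in miniature. The following is not the RenderMap implementation; it is a toy illustration where a set of 2D landmarks and a simple range model (`render_ranges`) stand in for GPU scene rendering, and a generic optimizer recovers the pose that best aligns the "rendering" with the observed range data:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scene: 2D landmark positions stand in for rendered geometry.
landmarks = np.array([[4.0, 1.0], [1.0, 5.0], [6.0, 6.0], [-2.0, 3.0]])

def render_ranges(pose):
    """'Render' the expected sensor ranges from a candidate (x, y) pose."""
    return np.linalg.norm(landmarks - pose, axis=1)

# Range data from the sensor (here, synthesized from the true pose).
true_pose = np.array([2.0, 2.0])
observed = render_ranges(true_pose)

def alignment_error(pose):
    # Compare rendering against sensor data: sum of squared residuals.
    return np.sum((render_ranges(pose) - observed) ** 2)

# Minimize the alignment error over the pose, as the abstract describes.
result = minimize(alignment_error, x0=np.array([0.0, 0.0]))
print(np.round(result.x, 3))
```

In the real system the range model would be a full GPU renderer of the dense map and the pose would include orientation; the structure of the loop (render, compare, optimize) is the same.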
We argue that the proposed concept is a general consequence of treating perception problems as inverse problems of rendering. Many perception problems can be recast into a form where much of the computation is replaced by render operations. This is not only efficient since rendering is fast, but also simpler to implement and will naturally benefit from future advancements in GPU speed and rendering techniques. Furthermore, this general concept can go beyond addressing perception problems and can be used for other problem domains such as pa