Dr. Damian Lyons
Department of Computer & Information Sciences
320 John Mulcahy Hall
441 East Fordham Road, Bronx NY 10458
Dr. Lyons's Homepage
Dr. Lyons's Short CV
Dr. Damian M. Lyons is an Associate Professor of Computer Science at
Fordham University. Prior to this, he worked for 15 years as a researcher
and research program manager in the area of Computer Vision and
Robotics. He completed his undergraduate education in
Math, Engineering and Computer
Science at Trinity College,
University of Dublin in Ireland. He earned his Ph.D. in
Computer Science from the
University of Massachusetts at Amherst
for research on a formal model of computation for sensory-based
robotics. He worked for many years as
a researcher in the US branch of the
corporate research laboratories
of Philips Electronics, the European
Semiconductor and Consumer Electronics giant. His work there included
research in representing and analyzing robot action plans,
integrating reactive/behavior-based & deliberative approaches to action
planning, multimodal user interfaces, and automated video surveillance. Dr.
Lyons served as project leader for Philips' research activities in Automated
Video Surveillance, and later as Department head for the Video and Display
Processing research department, responsible for technical leadership and
funding for this diverse group. Dr. Lyons is currently the Director of
Fordham’s Computer Vision and Robotics Lab, and Associate Chair for
Graduate Studies in the Computer & Information Science Department.
From 1990 through 1995, Dr. Lyons served as chair
of the IEEE Robotics & Automation technical committee on
assembly and task planning. He has served on numerous conference program
committees, has published over 80 technical papers in conferences, journals
and books, and holds 8 patents. Dr. Lyons is a member of ACM and IEEE.
My research interests are in Computer Vision and Robotics, in particular
for systems that operate in the same kind of dynamic and unstructured
environments as humans. Previously I have worked in:
- the integration of planning and reaction in robot systems
- automated video surveillance, and
- vision-based human-machine interfaces.
I am involved in two research efforts in my role as Director of the
Computer Vision &
Robotics Laboratory at Fordham:
- Performance Guarantees for Emergent Behavior in Mobile Robotics
The objective of this research program is to develop the necessary advances
in theory and software to build robot systems that can be deployed in
critical environments in a safe, effective and reliable fashion. The approach
is based on understanding the computational
characteristics of the behavior-based approach to robotics. Unique
characteristics include: the necessity for some form of
sensory-motor structure such as schemas, the necessity for
asynchronous methods of composing concurrent behaviors, and the
observation that even simple behavior-based systems exhibit complex
behavior when acting in a complex environment.
We are developing a succinct formalism that captures these issues, and
especially addresses the issue of modeling the complex environment. A
process-based environment model is being developed, allowing a shared
vocabulary (processes) between robot controller and environment model.
It employs the Port Automata model as an operational semantics, and a
CSP-like selection of process composition operators for process description.
The emergent behaviors of a robot interacting with an active and dynamic
environment can be modelled and explored with this approach.
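As a loose illustration of this process-based view (a simplified sketch, not the lab's actual formalism), both robot behaviors and the environment can be written as processes that read and write shared ports, with a CSP-style operator composing them concurrently. All names below (`conc`, `environment`, `track`, the port names) are hypothetical:

```python
# Hypothetical sketch: behaviors and the environment as port-reading,
# port-writing processes, composed with a CSP-like concurrency operator.

def conc(*procs):
    """Concurrent composition: interleave one step of each process."""
    def run(ports):
        gens = [p(ports) for p in procs]
        while gens:
            alive = []
            for g in gens:
                try:
                    next(g)          # advance this process by one step
                    alive.append(g)
                except StopIteration:
                    pass             # this process has terminated
            gens = alive
            yield
    return run

def environment(ports):
    """The environment modeled as a process: the target drifts each step."""
    for _ in range(5):
        ports["target"] += 1
        yield

def track(ports):
    """A robot behavior: move toward the currently sensed target."""
    for _ in range(10):
        if ports["robot"] < ports["target"]:
            ports["robot"] += 2
        yield

ports = {"target": 3, "robot": 0}
for _ in conc(environment, track)(ports):
    pass
print(ports)  # → {'target': 8, 'robot': 8}
```

Because the environment is itself a process over the same port vocabulary, the emergent interaction (here, the robot catching a drifting target) falls out of the composition rather than being programmed explicitly; sequential and conditional composition operators would follow the same pattern.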
- Multi-Sensor Fusion and Target-Tracking for Automated Video Surveillance
Target tracking in CCTV (Closed-Circuit TV) surveillance involves
determining which portions of an image in a CCTV video sequence correspond
to which surveillance target, where targets are typically people. The
standard approaches developed for point target tracking, e.g., Multiple
Hypothesis Tracking (MHT) and the Joint Probabilistic Data Association
Filter (JPDAF), have been applied to visual target tracking with some success.
However, the information from a video sequence, even from a single camera,
is much richer than from the point tracking applications with which
multi-target methods originated. For this reason a crucial problem becomes
the integration of multiple sensory cues, especially in cases where some
cues can be misleading some of the time.
We are studying sensory fusion modules based on the
"Rank and Fuse" (RAF) method,
which is a general, efficient and easily scalable approach to sensory
fusion. The RAF method considers both the spatial and temporal results from
the sensory system, and offers an elegant method to provide top-down
feedback from the application system to the fusion process, so that task
context can be used to select a sensory fusion method appropriate for the
task at hand.
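As a rough illustration of the rank-based idea (a simplified sketch, not the published RAF algorithm), each cue ranks the candidate targets independently and the per-cue ranks are then fused, which makes the combination insensitive to differing score scales and to a single occasionally misleading cue. The cue names and scores below are invented for the example:

```python
# Illustrative sketch of rank-based fusion across sensory cues.
# Cues, candidates, and scores are hypothetical.

def ranks(scores):
    """Map each candidate to its rank (1 = best) under one cue."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {cand: r + 1 for r, cand in enumerate(order)}

def rank_and_fuse(cue_scores):
    """Fuse several cues by averaging their rank assignments.

    cue_scores: dict cue_name -> dict candidate -> score.
    Returns candidates sorted by fused (mean) rank, best first.
    """
    per_cue = [ranks(s) for s in cue_scores.values()]
    cands = per_cue[0].keys()
    fused = {c: sum(r[c] for r in per_cue) / len(per_cue) for c in cands}
    return sorted(cands, key=fused.get)

# Three cues scoring three candidate image regions; "motion" disagrees
# with the other cues, but rank fusion still picks the consensus target.
cues = {
    "color":  {"A": 0.9, "B": 0.4, "C": 0.1},
    "motion": {"A": 0.5, "B": 0.8, "C": 0.3},
    "shape":  {"A": 0.7, "B": 0.6, "C": 0.2},
}
print(rank_and_fuse(cues))  # → ['A', 'B', 'C']
```

In the full method, top-down feedback would adjust which cues participate or how their ranks are combined based on task context; here the combination is a fixed mean for brevity.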
D. Lyons, R. Arkin, T-L. Liu, S. Jiang, P. Nirmal (2013). "Verifying Performance for Autonomous Robot Missions with Uncertainty", IFAC Intelligent Autonomous Vehicles Symposium IAV'13, Gold Coast, Australia, June.
D.M. Lyons, T-L. Liu, K. Shresta (2013). "Fusion of Ranging Data From Robot Teams Operating in Confined Areas", Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, SPIE Defense Security and Sensing Symposium, Baltimore MD, April.
P. Nirmal and D. M. Lyons (2013). "Visual homing with a pan-tilt based stereo camera", Conference on Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques, San Francisco, CA, February.
D.M. Lyons, R.C. Arkin, P. Nirmal, S. Jiang, T-M. Liu, J. Deeb (2013). "Getting it Right the First Time: Robot Mission Guarantees in the Presence of Uncertainty", Intelligent Robots and Systems (IROS) 2013, Tokyo, Japan, November.
D.M. Lyons, P. Nirmal (2012). "Navigation of uncertain terrain by fusion of information from real and synthetic imagery.", Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, SPIE Defense Security and Sensing Symposium, Baltimore MD, April.
D.M. Lyons, R.C. Arkin, S. D. Fox, P. Nirmal, J. Shu (2012). "Designing Autonomous Robot Missions with Performance Guarantees", Intelligent Robots and Systems (IROS) 2012, Vila Moura, Algarve Portugal, Oct. 7-12th.
Stephen D. Fox, Damian M. Lyons (2012). "An approach to stereo-point cloud registration using image homographies", SPIE Conference on Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques, San Francisco, CA, January.
D.M. Lyons, P. Benjamin (2011). "A relaxed fusion of information from real and synthetic images to predict complex behavior", Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications at the SPIE Defense and Security Symposium, Orlando (Kissimmee), FL, April.
Lyons, D.M. (2011). "Cluster Computing for Robotics and Computer Vision". World Scientific.
Lyons, D. (2010). "Selection and Recognition of Landmarks Using Terrain Spatiograms", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, October.
Lyons, D. (2010). "Detection and Filtering of Landmark Occlusions using Terrain Spatiograms.", IEEE Int. Conference on Robotics and Automation, Anchorage, Alaska, May.
D.M. Lyons, S. Chaudhry, P. Benjamin (2010). "A Visual Imagination Approach to Cognitive Robotics.", Symposium on Understanding the Mind and Brain, Tucson, Arizona, May.
Lyons, D.M. (2009). "Sharing Landmark Information Using MOG Terrain Spatiograms", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St Louis, MO, October.
D.M. Lyons and D.F. Hsu (2009). "Method of Combining Multiple Scoring Systems for Target Tracking using Rank-Score Characteristics", Information Fusion 10(2).
Lyons, D. (2007). "A Novel Approach to Efficient Legged Locomotion", 10th Int. Conference on Climbing and Walking Robots, 16-18 July, Singapore.