Understanding people's demographics, preferences, and affective states can benefit commerce, urban safety, security, and many other areas. Our multimodal profiling analytics (MMPA) project aims to recognize individuals' attributes. We consider multiple long-term attributes (age, gender, ethnicity, personality type) together with short-term or transient ones (gait/posture, affect, attention, fatigue, engagement), using both explicit and subtle cues detected via multiple sensor modalities (visual, cognitive, and physiological).
Ubiquitous and affordable digital cameras enable users to take pictures and videos anywhere, anytime. Photowork, i.e., assessing, selecting, editing, organizing, and annotating this large amount of visual data, is tedious and time-consuming: it involves a lot of manual labor with only basic computational support available to users. This project aims to address major gaps and challenges in automating photowork, with a particular focus on large content collections.
Lossy compression of digital visual information introduces distortions whose perceptibility depends strongly on scene content. Measuring the subjective visibility of these artifacts accurately and reliably is difficult. Our research focuses on metrics for video quality assessment.
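As a point of reference, classical full-reference metrics such as PSNR are purely signal-driven and ignore scene content, which is precisely why they can disagree with perceived quality and why content-aware metrics are needed. A minimal, illustrative sketch (the 8×8 synthetic frames are stand-ins for real video frames):

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio between two frames (full-reference metric)."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic 8x8 "frame" and a copy with small additive noise.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8)).astype(np.uint8)
noisy = np.clip(frame.astype(int) + rng.integers(-5, 6, size=(8, 8)),
                0, 255).astype(np.uint8)
print(psnr(frame, noisy))  # a single dB score, blind to where the error sits
```

The same numeric MSE spread over a flat sky region or a textured crowd yields the same PSNR, even though the former is far more visible, illustrating the gap that perceptual metrics aim to close.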
Cloud and rain attenuation affect satellite communication, especially at high frequencies. We use images from low-cost ground-based camera systems to analyze cloud cover and its properties, with the aim of developing a location-specific and time-sensitive cloud attenuation model.
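A common baseline for estimating cloud cover from ground-based sky images is red/blue ratio thresholding: clear sky scatters blue light strongly (low R/B), while clouds scatter all wavelengths roughly equally (R/B near 1). The sketch below is illustrative only; the threshold value and the tiny synthetic image are assumptions, not parameters from our system:

```python
import numpy as np

def cloud_mask(rgb, threshold=0.95):
    """Mark pixels as cloud where the red/blue channel ratio exceeds a threshold.

    `threshold` is an illustrative value; real systems tune or learn it.
    """
    r = rgb[..., 0].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    ratio = r / np.maximum(b, 1.0)  # guard against division by zero
    return ratio > threshold

# Tiny synthetic sky image: left half blue sky, right half grey cloud.
img = np.zeros((2, 4, 3), dtype=np.uint8)
img[:, :2] = [60, 120, 220]   # blue sky:  R/B ~ 0.27
img[:, 2:] = [200, 200, 205]  # grey cloud: R/B ~ 0.98
mask = cloud_mask(img)
print(mask.mean())  # fraction of the sky classified as cloud
```

The resulting cloud fraction over time is the kind of per-site observable that a location-specific attenuation model can be fitted against.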
We are developing a system for real-time automated analysis of soccer video using novel computer vision and machine learning techniques. The analysis includes tracking the players and the ball, as well as detecting and recognizing important soccer events and activities.