9th European Workshop on Visual Information Processing

23-25 June 2021, Paris, France, Fully Virtual

Due to recent developments in the health situation related to COVID-19, the committee has decided to organize the event in a FULLY VIRTUAL mode.

The organizing committee has decided this year to waive the registration fee for all participants who wish to attend the virtual event.

Note that a registration fee is still required to cover the publication of an accepted paper. The committee invites young authors of accepted papers, from low-income countries or countries experiencing economic and political crises, to apply for a registration fee waiver.

Welcome to EUVIP 2021

The 9th European Workshop on Visual Information Processing (EUVIP) will take place 23-25 June 2021 as a fully virtual event hosted from Paris, France. EUVIP 2021 continues the series of workshops on visual information processing, modeling, and analysis methods inspired by the human and biological visual systems, with applications to image and video processing and communication. EUVIP 2021 offers experienced researchers an intellectually stimulating environment for discussion and interaction with their peers, encourages early career researchers to widen their experience and horizons, and guides Ph.D. students towards new research directions. The flagship theme of this edition is "Image Quality Assessment and Enhancement (IQAE) in the context of medical diagnostic imaging".

The conference program will feature oral and poster presentations, tutorials, and a panel discussion session. A Best Regular Paper award will also be presented.

As with previous EUVIP workshops, all accepted regular papers will be published in IEEE Xplore. Furthermore, authors of selected articles will be invited to submit an extended and substantially improved version of their EUVIP 2021 paper to a journal special issue on IQAE in the context of medical imaging and diagnosis. Topics of particular interest to EUVIP 2021 include, but are not limited to:

  • Computational vision models and perceptual-based processing
  • Image restoration, enhancement and super-resolution
  • Video processing and analytics 
  • Biometrics, forensics, and image & video content protection 
  • Depth map, 3D, multi-view encoding 
  • Visual quality and quality of experience assessment 
  • Color image understanding & processing 
  • Visual information processing for AR/VR Systems 
  • Vision assisted navigation
  • Image & video compression
  • Medical imaging
  • Sparse & redundant visual data representation
  • Image & video data fusion  
  • Image & video communication in the cloud 
  • Visual substitution for the blind and visually impaired
  • Display and quantization of visual signals  
  • Deep learning for visual information processing 
  • Image processing for autonomous vehicles 
  • Image & video forensics
  • 360/omnidirectional image/video processing 
  • Vision assisted surgery

Call for papers

Paper submission deadline: extended to April 8, 2021

  • The booklet is available here.
  • The technical program is now available.
  • The program at a glance is now available.
  • The registration link for tutorials is now open on the tutorial page.
  • In response to several requests, the EUVIP 2021 organizing committee has decided to extend the deadline for camera-ready paper submission and registration to 19 May 2021.

Regular Paper Submission

Regular paper: Prospective authors are invited to submit full-length papers of 4-6 pages, including technical content, figures, and references, through the submission system. Submitted papers will undergo a double-blind review process. All regular submissions will be fully refereed, and accepted papers will be presented in a technical session (oral or poster). Regular papers presented at the conference will be included in the conference proceedings and submitted for inclusion in IEEE Xplore.

Learn more

Wolfgang Heidrich

Professor of Computer Science; Director, KAUST Visual Computing Center

Deep Optics — Joint Design of Imaging Hardware and Reconstruction Methods

Classical imaging systems are characterized by the independent design of optics, sensors, and image processing algorithms. In contrast, computational imaging systems are based on a joint design of two or more of these components, which allows for greater flexibility in the type of information captured beyond classical 2D photos, as well as for new form factors and domain-specific imaging systems. In this talk, I will describe how numerical optimization and learning-based methods can be used to achieve truly end-to-end optimized imaging systems that outperform classical solutions.
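The joint-design idea can be illustrated with a toy example (a generic sketch, not the speaker's actual method): a 1D imaging model in which a Gaussian blur stands in for the optics and Wiener deconvolution for the reconstruction, with the blur width and the regularization weight chosen jointly to minimize the final reconstruction error. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def psf(width, n=64):
    """Gaussian point-spread function: a stand-in for the optics design."""
    t = np.arange(n) - n // 2
    k = np.exp(-t**2 / (2 * width**2))
    return np.fft.ifftshift(k / k.sum())  # center the kernel at index 0

def forward(x, width, noise=0.01):
    """Image formation: blur by the optics, then add sensor noise."""
    y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf(width, x.size))))
    return y + noise * rng.standard_normal(x.size)

def wiener(y, width, lam):
    """Reconstruction: Wiener deconvolution with regularization weight lam."""
    H = np.fft.fft(psf(width, y.size))
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H)**2 + lam)))

x = (np.arange(64) % 16 < 8).astype(float)  # toy 1D scene (square wave)

# End-to-end search: evaluate the optics parameter (width) and the
# reconstruction parameter (lam) JOINTLY on the final image error,
# instead of designing each component in isolation.
grid_w, grid_l = [0.5, 1.0, 2.0, 4.0], [1e-4, 1e-3, 1e-2, 1e-1]
best = min((np.mean((wiener(forward(x, w), w, l) - x)**2), w, l)
           for w in grid_w for l in grid_l)
mse, w_opt, l_opt = best
```

A real deep-optics pipeline replaces the grid search with gradient-based optimization through a differentiable optics model, but the objective is the same: score the hardware and the algorithm together on the final output.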

Abdesselam Bouzerdoum

Professor and Head of ICT Division, College of Science and Engineering

Semantic Segmentation for Assistive Navigation using Deep Bayesian Gabor Networks

Semantic scene segmentation is a challenging problem of great importance in assistive and autonomous navigation systems. Such vision systems must cope well with image distortions, changing surfaces, and varying illumination conditions. For a vision-impaired person, the task of navigating in an unstructured environment presents major challenges and constant danger. It is reported that, on average, one in 12 pedestrians living with blindness is hit by a cyclist, a motorbike, or a car. Safe navigation involves multiple cognitive tasks at both macro and micro levels. At the macro level, a blind person needs to know the general route to take, his or her location along the route at any time, and the relevant landmarks and intersections. At the micro level, the blind person needs to walk within the pedestrian lane on a safe surface, maintain his or her balance, detect obstacles in the scene, and avoid hazardous situations. To help the vision impaired navigate safely in unconstrained outdoor environments, an assistive vision system should perform several vital tasks, such as finding pedestrian lanes, detecting and recognizing traffic obstacles, and sensing dangerous traffic situations.

In this talk, we will present vision-based assistive navigation systems that can segment objects in the scene, measure their distances, identify pedestrians, and detect a walking path. Using range and intensity images enables fast and accurate object segmentation and provides useful navigation cues, such as distances to nearby objects and object types. Furthermore, the talk will present a new hybrid deep learning approach for semantic segmentation. The new architecture combines Bayesian learning with deep Gabor convolutional neural networks (GCNNs) to perform semantic segmentation of unstructured scenes. In this approach, the Gabor filter parameters are modeled as normal distributions whose mean and variance are learned using variational Bayesian inference. The resulting network has a smaller number of trainable parameters, which helps mitigate overfitting while maintaining modeling power. In addition to the output segmentation map, the system provides two maps of aleatoric and epistemic uncertainty, measures that are negatively correlated with the confidence with which we can trust the segmentation results. This uncertainty estimate is important for assistive navigation applications, since its accuracy affects the safety of the user. Compared to state-of-the-art semantic segmentation methods, the hybrid Bayesian GCNN yields competitive segmentation performance with a very compact architecture (a size reduction of between 25.4 and 231.2 times), a fast prediction time (1.6 to 67.4 times faster), and a well-calibrated uncertainty measure.
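The abstract does not give implementation details, but the core idea of treating Gabor filter parameters as normal distributions can be sketched in a few lines of NumPy: sample the filter parameters per forward pass (here only the orientation, via the reparameterization trick) and estimate epistemic uncertainty as the variance of the response over repeated samples. All parameter values are toy assumptions, not the authors' settings.

```python
import numpy as np

def gabor_kernel(theta, sigma, lam, psi=0.0, size=7):
    """Build a 2D Gabor filter from orientation, envelope width, and wavelength."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / lam + psi)
    return envelope * carrier

rng = np.random.default_rng(0)

# Variational posterior over the orientation parameter: theta ~ N(mu, s^2).
# In the full network every Gabor parameter gets such a distribution, and
# (mu, s) are learned by variational inference; here they are fixed toy values.
mu_theta, s_theta = np.pi / 4, 0.1

def sample_kernel():
    # Reparameterization trick: theta = mu + s * eps, with eps ~ N(0, 1).
    theta = mu_theta + s_theta * rng.standard_normal()
    return gabor_kernel(theta, sigma=2.0, lam=4.0)

# Epistemic uncertainty: variance of the filter response over repeated
# Monte Carlo samples of the filter parameters.
image_patch = rng.standard_normal((7, 7))
responses = np.array([np.sum(sample_kernel() * image_patch) for _ in range(100)])
mean_response, epistemic_var = responses.mean(), responses.var()
```

In the actual architecture this sampling happens inside convolutional layers and the variance map is computed per pixel of the segmentation output; the sketch only shows the mechanism for a single filter and patch.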

Stéphane Cotin

Research Director at Inria and leader of the MIMESIS team

Augmented Surgery: How numerical simulation, computer vision and machine learning can help reduce risks in the operating room

Developments of imaging devices, numerical methods, and medical robotics are profoundly changing how modern medicine is practiced. This talk will highlight the increasing role of real-time numerical simulation in the fields of surgery and interventional radiology, with an impact in two major application areas: surgical training and computer-assisted interventions.

Numerical simulations have been used for several decades to analyze complex biomedical phenomena, with varying levels of success. A specificity of our application context is the need for real-time simulations adapted to each patient. To this end, we have developed specific numerical methods that allow for real-time computation of finite element simulations. While information about the organ shape, for instance, can be obtained pre-operatively, other patient-specific parameters can only be determined intra-operatively. This is achieved by exploiting the context of our application domain, where images of different natures are acquired during the intervention. Machine learning methods are then used to extract information from these images and, in some cases, to replace the numerical simulation itself. These topics will be illustrated through the modeling of liver biomechanics and its parametrization to achieve patient-specific augmented reality during surgery.

Pascal Mamassian

Research Director at CNRS, Director of the Laboratoire des Systèmes Perceptifs (LSP)

Visual Confidence

Visual confidence refers to our ability to predict the correctness of our perceptual decisions. Knowing the limits of this ability, both in terms of biases (e.g. overconfidence) and sensitivity (e.g. blindsight), is clearly important for obtaining a full picture of perceptual decision-making. In recent years, we have explored visual confidence using a paradigm called confidence forced-choice. In this paradigm, observers have to choose which of two perceptual decisions is more likely to be correct. I will review some behavioral results obtained with the confidence forced-choice paradigm, together with a theoretical model based on signal detection theory.
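A minimal signal-detection-theory simulation of the confidence forced-choice paradigm can make the setup concrete. The assumptions below are illustrative (unit-variance Gaussian evidence, confidence read out as distance from the decision criterion), not the speaker's specific model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, d_prime = 10_000, 1.0

# Two independent perceptual decisions per trial (signal detection model):
# evidence x ~ N(stim * d'/2, 1), decision = sign(x).
stim = rng.choice([-1, 1], size=(n_trials, 2))
evidence = stim * d_prime / 2 + rng.standard_normal((n_trials, 2))
decision = np.sign(evidence)
correct = decision == stim

# Confidence forced-choice: the observer picks the decision whose evidence
# lies farther from the criterion (larger |x|) as more likely to be correct.
chosen = np.abs(evidence).argmax(axis=1)
chosen_correct = correct[np.arange(n_trials), chosen]

# If |x| tracks the probability of being correct, the chosen decision
# should be correct more often than the average single decision.
single_accuracy = correct.mean()
chosen_accuracy = chosen_correct.mean()
```

The gap between `chosen_accuracy` and `single_accuracy` is what makes the paradigm useful: it measures confidence sensitivity without asking observers for numerical confidence ratings, sidestepping rating-scale biases.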