ICIP 2022 Challenge Session


International Conference on Image Processing (ICIP), 16-19 October 2022 Bordeaux, France


Challenge Title: Video distortion detection and classification in the context of video surveillance

The perceptual quality of images and videos in the context of video surveillance has a very significant impact on high-level tasks such as object detection, identification of abnormal events, and visual tracking, to name a few. Despite the development of advanced video sensors with higher resolution, the quality of the acquired video is often affected by distortions due to the environment, encoding and storage technologies, which can only be mitigated by employing intelligent post-processing solutions. It is therefore necessary to develop methods to detect and identify the different degradations so that the appropriate quality enhancement processing can be applied. Another major difficulty, not often considered in video quality enhancement studies, is the case of multiple distortions affecting the signal simultaneously. The database presented here includes this real problem: it is essential to identify the elements of the distortion mixture in order to apply the most appropriate quality enhancement solutions. For this challenge, we will provide our dataset of short-duration surveillance videos, the Video Surveillance Quality Assessment Dataset (VSQuAD). The database consists of original videos recorded by the team members and other sequences selected from public databases. These videos are degraded artificially with various types of distortions (single or simultaneous) at different levels.

Important: The selected teams will be invited to be part of a joint paper, summarizing the top proposed solutions, to be submitted for publication in an IEEE Transactions journal.

One of the essential characteristics of an intelligent video surveillance system is the ability to interpret the content of the observed scene in order to support decision making. As such, the quality of the visual information captured by the sensors plays a key role in the reliability of the scene interpretation process. The interest of this challenge lies not only in the importance of image quality in the context of video surveillance, but also in serving as a basis for identifying the best solutions for detecting and classifying distortions that have been little or not at all studied in the field of image quality assessment. It is also important to note that, unlike the classical case, the database proposed in this challenge presents additional difficulties due to the absence of visual semantics, making the classification problem very challenging. Indeed, in our database the differences between distortions are sometimes subtle and difficult to identify. Furthermore, we added another level of complexity by requiring the distortion recognition and classification tools to identify both the distortion and its severity level.

We will make the new database available to competitors to test their solutions. The solutions proposed by each participant must be able to solve the classification problem in real time and for each scenario considered in the database. Participants are required to submit easily readable, commented code of their algorithm (preferably in Matlab or Python), along with a document giving a brief summary and the steps of their method. Moreover, each solution should contain a demo code that can be used to run the submitted solution on a test video. The classification results should be displayed in real time on the tested video (or on a console window/terminal alongside) while the video is being played. In case of multiple distortions, all classes should be displayed. Participants must also report the execution time of their code and the specifications of the system on which it was run.

The proposed solution must meet the following criteria:

  • distortion type detection
  • detection of distortion mixture cases and the nature of each of its elements
  • identification of the severity level of each detected distortion (4 possible levels, 1 being the lowest level and 4 the highest one)
  • optimization of the computation time (by specifying the characteristics of the equipment used) and algorithmic complexity.
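
For illustration, a per-video result covering the criteria above could be reported as a mapping from each detected distortion to its severity level. The structure, names, and validator below are assumptions for illustration only, not a prescribed submission format:

```python
# Hypothetical result for one test video affected by a distortion mixture.
# Keys are detected distortion types, values are severity levels in 1..4.
result = {
    "Defocus Blur": 2,
    "AWGN": 4,
}

KNOWN_TYPES = {
    "AWGN", "Defocus Blur", "Motion Blur", "Uneven-illumination",
    "Smoke", "Haze", "Rain", "Low illumination", "Compression",
}

def validate_result(result, known_types):
    """Check that each detected distortion is a known type with a valid level."""
    for distortion, level in result.items():
        if distortion not in known_types:
            raise ValueError(f"unknown distortion type: {distortion}")
        if level not in (1, 2, 3, 4):
            raise ValueError(f"severity level out of range: {level}")
    return True
```

A single-distortion video would simply yield a one-entry mapping.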

  • The submissions will be judged on the following two main criteria:
    1. Speed of the algorithm: The submissions will be run on a standard Windows system equipped with an NVIDIA GPU. A smaller running time will be given a higher score, provided that the algorithm scores well on the second criterion.
    2. Classification performance: The submissions will be judged using a classification score based on a weighted combination of classification accuracy and F1 score.
  • Testing protocol and ranking rules:
    1. The algorithms will also be tested on a different set of surveillance video data than the one provided.
    2. Moreover, the performance of each algorithm will be judged separately for videos with a single distortion and for those with multiple distortions at various severity levels.
    3. More weight will be given to methods that perform well on the multi-distorted videos.
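
The call does not specify the exact weights of the accuracy/F1 combination or the timing protocol; a minimal sketch of both judging criteria, assuming equal weights and per-class binary labels, might look like:

```python
import time

def f1_score(y_true, y_pred):
    """Binary F1 over parallel 0/1 label lists (one pair per distortion class)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def challenge_score(y_true, y_pred, w_acc=0.5, w_f1=0.5):
    """Weighted combination of accuracy and F1; the 0.5/0.5 weights are assumptions."""
    return w_acc * accuracy(y_true, y_pred) + w_f1 * f1_score(y_true, y_pred)

def timed(classify, frames):
    """Run a classifier over all frames and return (predictions, seconds elapsed)."""
    t0 = time.perf_counter()
    preds = [classify(f) for f in frames]
    return preds, time.perf_counter() - t0
```

At 30 fps, real-time operation leaves roughly 33 ms of processing budget per frame, which puts the speed criterion in concrete terms.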

The VSQuAD database provided for this challenge consists of 36 reference videos of 10 seconds each in HD, together with their distorted versions (9 single distortions and 9 distortion mixtures, each at 4 severity levels). The reference videos were captured by the team members or collected from the internet, and have been artificially degraded by means of distortion generation algorithms. The details of the database are given in the table below. Important: The link to access the dataset will only be provided to registered participants.

Number of reference videos: 36
Categories (vision): 2 (daylight RGB videos, night-light RGB videos)
Acquisition modes: 2 (fixed camera, moving camera)
Types of scene: 2 (indoor, outdoor)
Number of scenarios: 28
Scenarios: Airport, Car moving on bridge, Bus by night, Monument place, Harbor, Landing plane, Highway, Bus stop, Traffic road, Manifestation, Mountain village, Outside bank, Outside shop, Parking, Pedestrian, Railway platform, River view, Roma by night, Saida night, Square place, Stadium, Tour de France, Tram station, Vatican square night, Boat on river, Car street by night, Mall, School
Frame rate: 30 fps
Resolution of each video: 1920 x 1080
Duration of each video: 10 seconds
Number of single distortions: 9
Distortion types: AWGN, Defocus Blur, Motion Blur (due to camera instability), Uneven-illumination, Smoke, Haze, Rain, Low illumination, Compression
Number of mixture types: 9
Number of distortion levels: 4 (1 – just noticeable/hardly visible, 2 – visible but not annoying, 3 – annoying, 4 – very annoying)
Number of videos with a single distortion: 960
Number of videos affected by more than one distortion: 640

The challenge session will be run by the following teams:

University Sorbonne Paris Nord, France

Azeddine Beghdadi (Professor), Mounir Kaaniche (Associate Professor), Borhene Eddine Dakkar (Postdoc Fellow)

The Islamia University of Bahawalpur, Pakistan

Muhammad Ali Qureshi (Associate Professor), Hammad Hassan Gillani (PhD Scholar)

L2S, CentraleSupélec, University Paris Saclay, France

Zohaib Amjad Khan (Postdoc Fellow)

Norwegian University of Science and Technology (NTNU), Norway

Faouzi Alaya Cheikh (Professor), Mohib Ullah (Postdoc Fellow)

Registration opening: January 23, 2022
Training data available: January 27, 2022
Testing data available: March 1, 2022 (extended from February 13, 2022)
Challenge paper submission (optional): April 22, 2022 (extended from February 25, 2022)
Validation data available: May 1, 2022
Solutions/codes submission: May 31, 2022 (extended from May 15, 2022)
Camera-ready submission of accepted challenge papers (optional): July 22, 2022 (extended from July 11, 2022)
Announcement of the winners: @ IEEE ICIP 2022

The coordinators will contact potential sponsors for supporting 1-3 awards for the competition winners.

    1. I. Bezzine, Z. A. Khan, A. Beghdadi, N. Almaadeed, M. Kaaniche, S. Almaadeed, A. Bouridane, F. Alaya Cheikh, "Video quality assessment dataset for smart public security systems," in Proceedings of the 23rd IEEE INMIC, Bahawalpur, Pakistan, 5-7 November 2020.
    2. F. Götz-Hahn, V. Hosu, H. Lin, D. Saupe, "KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild," IEEE Access 9 (2021): 72139-72160.
    3. M. Aqqa, P. Mantini, and S. K. Shah, "Understanding How Video Quality Affects Object Detection Algorithms," VISIGRAPP (5: VISAPP), 2019.
    4. M. Leszczuk, P. Romaniak, and L. Janowski, "Quality assessment in video surveillance," Recent Developments in Video Surveillance (2012): 57.
    5. S. Müller-Schneiders, T. Jäger, H. S. Loos, W. Niem, "Performance evaluation of a real time video surveillance system," in IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, October 2005, pp. 137-143.

