Hyperspectral Image Processing

Hyperspectral images are now available for a wide range of applications: monitoring, mapping, and disaster management.

A major challenge lies in the interpretation of these images, a task referred to in computer science as image classification. For each pixel, we are provided with a set of intensities, one for each spectral band. These intensities are somehow related to the surface reflectance and hence to the type of land cover or land use. Since this extremely complex relationship between intensities and interpretation cannot be modeled precisely, the scientific literature provides an abundance of techniques that capture the information from the data themselves, with the help of the ground truth.
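
To make this pixel-wise setting concrete, here is a minimal Octave/Matlab sketch with entirely synthetic data standing in for an image and its ground truth; it classifies each pixel by comparing its spectrum to the two class means (a simple nearest-mean rule, not one of the techniques covered in the lectures):

% Minimal sketch (synthetic data): pixel-wise binary classification
% of a hyperspectral cube by a nearest-class-mean rule.
H = 4; W = 5; B = 10;            % image height, width, number of spectral bands
cube = rand(H, W, B);            % stand-in for a hyperspectral image
labels = randi([0 1], H, W);     % stand-in for the ground truth

X = reshape(cube, H*W, B);       % one row of B intensities per pixel
y = labels(:);                   % one label per pixel

mu0 = mean(X(y == 0, :), 1);     % mean spectrum of class 0
mu1 = mean(X(y == 1, :), 1);     % mean spectrum of class 1

% classify each pixel by the closer class mean (squared Euclidean distance)
d0 = sum((X - mu0).^2, 2);       % implicit broadcasting (Octave, MATLAB >= R2016b)
d1 = sum((X - mu1).^2, 2);
pred = reshape(d1 < d0, H, W);   % predicted binary map, same size as the image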

These lectures aim at describing some of these techniques: what their objectives are, what kind of information they use, and how reliable their predictions are. To study these techniques, we will consider toy examples, sometimes delve into the mathematical technicalities, and sometimes consider simple algorithms. Some ideas developed in these lectures come from textbooks for university students; many others stem from research papers and related questions.

I expect these lectures to help you become more familiar with how proposed techniques are described in research papers. Throughout these lectures, in the context of binary classification of hyperspectral images, we will consider the following issues: learning regarded as an optimization problem, whether we can be confident about machine learning predictions, and why some seemingly strange concepts are needed. We will also have a look at some segmentation issues stemming from the computer vision community.

Assignment

assignment.pdf

The evaluation will mainly take into account compliance with the assignment, or at least exact compliance with part of what is requested. The formulas I am expecting are those that someone implementing your proposal would have to use. They should be explained not in terms of how one would use them in general, but in terms of how they should be used when implementing your proposal. I will take into account the following expectations, listed in order of decreasing importance.

0. The proposal should be clear enough for someone who has not taken the course to be able to implement it. It should be described using pseudocode routines that interact with one another. The computations involved in these pseudocodes should be described with formulas. The notations need not be the same as in the lecture; however, they must be precisely defined.

1. Even if formulas happen to be wrong or contradictory, they should at least make sense (for instance, a matrix should not be added to a scalar, and an index used as a counter in a summation should not appear outside that summation).

2. Since, in the literature, a single technique may be used in different ways in different proposals, the report should be unambiguous as to what exactly is being proposed.

3. Claims made in the report about compliance with the constraints should be correct.

4. The computations included in the report should be correct (running some simple numerical tests is advised).

5. The technique used to fuse the different sources of information (spatial context, different spectral bands...) should be beneficial, in the sense that performance should not increase when one source of information is removed.

Notebook (only started), with an appendix containing all exercises and all the Octave/Matlab code used to produce the figures displayed in the slides

lecture_notes2.pdf

Slides used during the lectures

mlCl_slides2.pdf

First lecture (March 13th 2024)

HIP2_lesson_1.mp4

Second lecture (March 14th 2024)

HIP2_lesson_2.mp4

Third lecture (March 15th 2024)

HIP2_lesson_3.mp4

Fourth lecture (March 16th 2024)

HIP2_lesson_4.mp4

This is my email address: Gabriel.dauphin@univ-paris13.fr (please mention HIP2 in the subject line of your emails).

GABRIEL DAUPHIN