Cross-Domain Identification for Thermal-to-Visible Face Recognition

The repository contains the software implementation for the 2020 International Joint Conference on Biometrics (IJCB) paper, "Cross-Domain Identification for Thermal-to-Visible Face Recognition," which proposes a novel domain adaptation framework that combines a new feature mapping sub-network with existing deep feature models based on modified network architectures (e.g., VGG16 or ResNet50). The framework is optimized with new cross-domain identity and domain invariance loss functions for thermal-to-visible face recognition, alleviating the requirement for precisely co-registered and synchronized imagery. The paper provides extensive analysis of both the features and the loss functions used, and compares the proposed domain adaptation framework with state-of-the-art feature-based domain adaptation models on a difficult dataset containing facial imagery collected at varying ranges, poses, and expressions. Moreover, the paper analyzes the viability of the proposed framework for more challenging tasks, such as non-frontal thermal-to-visible face recognition.

When using and referencing this repository, including derivative works, please cite the paper as:

@INPROCEEDINGS{8272680,
  author={C. {Nimpa Fondje} and S. {Hu} and N. J. {Short} and B. S. {Riggan}},
  booktitle={2020 IEEE International Joint Conference on Biometrics (IJCB)}, 
  title={Cross-Domain Identification for Thermal-to-Visible Face Recognition}, 
  year={2020},
  volume={},
  number={},
  pages={}}

The model

(Figure: overview of the proposed framework)

Prerequisites

This project requires Python 3 with the following Python libraries installed:

  • TensorFlow >= 1.14 (TensorFlow 2 is not currently supported)
  • NumPy >= 1.16
  • keras_applications >= 1.1.0
  • keras_preprocessing >= 1.0.8
  • scikit-image >= 0.15.0
  • scikit-learn >= 0.21.3
  • cyvlfeat >= 0.5.1
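
One possible setup uses pip with pins mirroring the list above; note that cyvlfeat is usually easier to install from conda-forge, since it depends on the VLFeat native library. Both commands below are suggestions, not the project's official install instructions:

pip install "tensorflow>=1.14,<2" "numpy>=1.16" "keras_applications>=1.1.0" "keras_preprocessing>=1.0.8" "scikit-image>=0.15.0" "scikit-learn>=0.21.3"
conda install -c conda-forge cyvlfeat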

You will also need "regression_models.py" and "nntoolbox.py" in order to train using the DPM (Deep Perceptual Mapping).

Dataset

Our model was trained and tested using a multi-modal face dataset from the U.S. Army CCDC Army Research Laboratory. This dataset contains frontal imagery (visible and polarimetric thermal) with neutral and varying expressions. Dataset requests can be sent to:

Project Directory Contents

The project directory contains source code including:

  • main.py: the main file that runs experiments.
  • regression_models.py: supporting functions to run the (1) "Deep Perceptual Mapping" and (2) "Coupled Neural Network" regression models.
  • nntoolbox.py: supporting functions for training neural networks.
  • protocol directories: protocol1_ijcb, protocol2_ijcb, and protocol3_ijcb. These directories contain ".txt" files listing the image filenames used for training and evaluation under the different protocols. Five training/testing splits are provided for each protocol; note that main.py reports the average over the five splits. A minimal loading sketch follows this list.
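
As a rough illustration, a split file might be read as follows. This is a sketch only: the exact file names and line format inside the protocol directories are assumptions, not taken from the repository.

import os

def load_split(protocol_dir, split_file):
    """Read a protocol .txt file and return the listed image filenames."""
    path = os.path.join(protocol_dir, split_file)
    with open(path) as f:
        # Assumes one image filename per line; blank lines are skipped.
        return [line.strip() for line in f if line.strip()]

# "train_split1.txt" is a hypothetical file name for illustration.
train_files = load_split("protocol1_ijcb", "train_split1.txt")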

How to run the code

Usage:

python main.py -r <root> -m <list of models> -l <list of model layers> --protocol <protocol> [-c <crop region>] [--dog] [--pca] [--dpm] [--proposed] [--train] [--loss]

There are several command-line arguments:

'-r' or '--root': specifies the root directory containing images (e.g., -r /home/username/images)

'-m' or '--models': space-delimited list of model types. Valid inputs are: dsift, vgg16, or resnet50. For example, use "-m dsift vgg16 resnet50" to run all three models, or "-m dsift" to run only the DSIFT model.

'-l' or '--layers': space-delimited list of layer names from which to extract features. There should be the same number of layers as models. Valid layer names for each model are:

  • dsift: None
  • vgg16: block1_pool, block2_pool, block3_pool, block4_pool, block5_pool
  • resnet50: activation_39, activation_21, activation_9

For example, "-m dsift vgg16 resnet50 -l None block3_pool activation_21"

'--protocol': specifies the protocol to use. Valid protocol arguments are: protocol1_r1_b, protocol1_r1_e, protocol1_r2_b, protocol1_r3_b, protocol2_r1_b, protocol2_r1_e, protocol2_r2_b, protocol2_r3_b, protocol3_e, protocol_p

'-c' or '--crop': optional crop region specified as four integers x0 y0 x1 y1, where (x0, y0) and (x1, y1) are the upper-left and lower-right corners of the cropping region. For example, "-c 39 123 239 323" crops a 360x280 image down to 200x200 pixels; see the sketch below. Note that if no cropping is used, training and evaluation will take longer.
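
To illustrate the coordinate convention, a NumPy-style crop could look like the following sketch (this is not code from main.py; the 360x280 zero array merely stands in for a source image):

import numpy as np

def crop(image, x0, y0, x1, y1):
    """Crop with (x0, y0) the upper-left and (x1, y1) the lower-right corner."""
    # Rows are indexed by y, columns by x.
    return image[y0:y1, x0:x1]

# "-c 39 123 239 323" yields a 200x200 region.
patch = crop(np.zeros((360, 280)), 39, 123, 239, 323)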

'--dog': optional flag to enable Difference of Gaussians (DoG) filtering; a sketch follows.
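
A minimal Difference of Gaussians sketch using scikit-image is shown below; the sigma values are placeholders, not the values used by this project:

import numpy as np
from skimage.filters import gaussian

def dog_filter(image, sigma_low=1.0, sigma_high=2.0):
    """Difference of Gaussians: subtract a coarser blur from a finer one."""
    return gaussian(image, sigma=sigma_low) - gaussian(image, sigma=sigma_high)

filtered = dog_filter(np.random.rand(200, 200))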

'--pca': optional flag to enable Principal Component Analysis (PCA), which reduces the dimensionality of the feature embeddings; a sketch follows.
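
With scikit-learn, this kind of dimensionality reduction typically looks like the sketch below; the component count and dummy embeddings are assumptions for illustration:

import numpy as np
from sklearn.decomposition import PCA

# Dummy arrays stand in for the deep feature embeddings extracted by main.py.
train_embeddings = np.random.rand(500, 4096)
test_embeddings = np.random.rand(100, 4096)

pca = PCA(n_components=128)  # 128 is a placeholder, not the project's value
train_reduced = pca.fit_transform(train_embeddings)
test_reduced = pca.transform(test_embeddings)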

'--dpm': optional flag to enable the Deep Perceptual Mapping (DPM) regression model

'--proposed': optional flag to enable our proposed domain adaptation model

'--train': optional flag to train and save the best proposed domain adaptation model

'--loss': optional flag to enable the domain invariance loss
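
For reference, the arguments above could be parsed with argparse roughly as follows. This is a sketch derived from the list above; the help strings, types, and defaults are assumptions rather than the actual contents of main.py:

import argparse

parser = argparse.ArgumentParser(description="Thermal-to-visible face recognition experiments")
parser.add_argument("-r", "--root", required=True, help="root directory containing images")
parser.add_argument("-m", "--models", nargs="+", required=True, help="e.g., dsift vgg16 resnet50")
parser.add_argument("-l", "--layers", nargs="+", required=True, help="one layer name per model")
parser.add_argument("--protocol", required=True, help="e.g., protocol1_r1_b")
parser.add_argument("-c", "--crop", nargs=4, type=int, metavar=("x0", "y0", "x1", "y1"))
parser.add_argument("--dog", action="store_true", help="enable DoG filtering")
parser.add_argument("--pca", action="store_true", help="enable PCA")
parser.add_argument("--dpm", action="store_true", help="enable the DPM regression model")
parser.add_argument("--proposed", action="store_true", help="enable the proposed model")
parser.add_argument("--train", action="store_true", help="train and save the best model")
parser.add_argument("--loss", action="store_true", help="enable the domain invariance loss")
args = parser.parse_args()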

Note: the DPM can take some time to run.

Example 1: The following command reports protocol1_r1_b (protocol 1, range 1, baseline) results using the DSIFT, VGG16, and ResNet50 models on tightly cropped faces. Additionally, DoG filtering is used for preprocessing, and PCA and DPM are used for post-processing.

python main.py -r /home/username/images -m dsift vgg16 resnet50 -l None block3_pool activation_21 --protocol protocol1_r1_b -c 39 123 239 323 --dog --pca --dpm

Example 2: The following command reports protocol3_e (protocol 3, expression) results using the VGG16 and ResNet50 models on tightly cropped faces. Additionally, DoG filtering is used for preprocessing, and PCA, the proposed model, the domain invariance loss, and training are enabled.

python main.py -r /home/username/images -m resnet50 vgg16 -l activation_21 block3_pool -c 39 123 239 323 --protocol protocol3_e --dog --pca --proposed --loss --train

Note: the root directory above may not exist; replace this image path with a valid root directory.

Results

(Figure: identification results)

Authors

  • Cedric Nimpa Fondje - University of Nebraska-Lincoln
  • Shuowen Hu - CCDC Army Research Laboratory
  • Nathaniel J. Short - Booz Allen Hamilton
  • Benjamin S. Riggan - University of Nebraska-Lincoln

Corresponding authors: cedricnimpa@huskers.unl.edu, briggan2@unl.edu

License

This project is licensed under the BSD 3-Clause License; see the LICENSE file for details.

Acknowledgments

This research project was partially supported by Booz Allen Hamilton (BAH) and the U.S. Army Combat Capabilities Development Command (CCDC) Army Research Laboratory.