WHO?

Matlab    Computer Vision     3 weeks      2020

ABOUT

WHO? is a face recognition application that identifies the 48 individuals in my Computer Vision course. It even includes a filter that censors people’s faces by sprouting flowers in their eyes. The following face-recognition models are implemented in Matlab: 

  • SURF-SVM feature classifier
  • HOG-SVM feature classifier
  • CNN classifier

PROCESS

The diagram on the left shows the procedure for building a face recognition classifier, using the HOG-SVM pipeline as the example. 

[1] Face Database

The face database is a series of photos of my classmates, each holding a piece of paper with an identification number. To match the photos with their corresponding identities, I developed a helper function that uses ocr to automatically read the identification number and sort each image into a folder for that identity. The Viola-Jones object detector is used to detect and crop the face in each photo. 

Categorize Face (Abridged)
for n = 1:count
    I = imread(imgSet.Files{n});
    fbox = faceDetector(I);                        % Viola-Jones face detection
    ocrResults = ocr(I, 'CharacterSet', '0':'9');  % read the held-up ID number
    recognizedText = strtrim(ocrResults.Text);

    if ~isfolder(fullfile(dirFolder, recognizedText))
        mkdir(dirFolder, recognizedText);
    end

    if ~isempty(fbox)
        face = imcrop(I, fbox(1,:));               % crop the first detected face
        [~, name, ext] = fileparts(imgSet.Files{n});
        imwrite(face, fullfile(dirFolder, recognizedText, [name ext]));
    end
end

[2] Feature Extraction

After the face database is generated, features need to be extracted from each detected face. For the HOG-SVM classifier, for instance, a feature vector of HOG features is extracted using extractHOGFeatures. 
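The idea behind a HOG descriptor can be sketched in plain Python: per image cell, gradient orientations vote into a histogram, weighted by gradient magnitude. This is a simplified, single-cell version of what extractHOGFeatures computes (function name and patch values are illustrative, not the Matlab implementation):

```python
import math

def cell_hog(patch, n_bins=9):
    """Orientation histogram for one cell of a grayscale image.

    Each interior pixel votes for an unsigned-orientation bin [0, 180),
    weighted by its gradient magnitude; the histogram is L2-normalised.
    """
    h, w = len(patch), len(patch[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]          # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]          # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

# A vertical edge (intensity changes left-to-right) puts all of its
# gradient energy into the 0-degree bin.
patch = [[0, 0, 255, 255] for _ in range(4)]
print(cell_hog(patch))
```

The real descriptor concatenates many such cell histograms with block-level normalisation, which is what makes the final feature vector so long.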

[3] Modelling

After feature extraction, the SVM classifier is trained using the Matlab function fitcecoc, which combines K(K-1)/2 binary SVM learners (one per pair of the K classes) into an error-correcting output codes (ECOC) model. 
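As a quick sanity check on model size, the one-vs-one learner count can be computed directly (a Python sketch; the function name is mine):

```python
def ecoc_onevsone_learners(k: int) -> int:
    """Number of binary SVMs fitcecoc trains under its default
    one-vs-one coding design: one learner per unordered class pair."""
    return k * (k - 1) // 2

# 48 classmates -> 48 * 47 / 2 = 1128 pairwise SVM learners
print(ecoc_onevsone_learners(48))
```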

HOG-SVM training (Abridged)
imgSets = imageSet(rootFolder, 'recursive');
[training, test] = partition(imgSets, 0.8, 'randomize');

featureCount = 1;
for i = 1:numel(training)
    for j = 1:training(i).Count
        trainingFeatures(featureCount,:) = extractHOGFeatures( ...
            imresize(read(training(i), j), [300 300]));
        trainingLabel{featureCount} = training(i).Description;
        featureCount = featureCount + 1;
    end
    personIndex{i} = training(i).Description;
end
faceClassifier = fitcecoc(trainingFeatures, trainingLabel);
 

[4] Testing

The classifier is then tested on an unknown image. The image is first processed by locating the face with the Viola-Jones object detector and then extracting HOG features. The face is then classified using the trained HOG-SVM classifier and annotated accordingly. In addition to the HOG-SVM classifier, SURF-SVM and CNN models have also been trained. These models are incorporated into a single function called RecognizeFace, as seen on the right. 

[P] = RecognizeFace(InputImage, FeatureType, ClassifierType, CreativeMode)

  • P = (N x 3) matrix, where N is the number of people detected and the three columns are: student ID, centre of face x, and centre of face y.
  • InputImage = input image
  • FeatureType = e.g. HOG, SURF
  • ClassifierType = e.g. SVM, CNN
  • CreativeMode = toggles the face filter on/off
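The control flow of such a wrapper can be sketched in Python. Everything here is illustrative (the valid feature/classifier pairings, the stub detector and classifier, and the function names are my assumptions, not the Matlab implementation); only the N x 3 output convention comes from the description above:

```python
# Assumed valid pairings; the project trains HOG-SVM, SURF-SVM and a CNN.
VALID = {("HOG", "SVM"), ("SURF", "SVM"), ("HOG", "CNN")}

def recognize_face(image, feature_type, classifier_type, creative_mode=False,
                   detect=None, classify=None):
    """Return an N x 3 list of [student_id, centre_x, centre_y] rows.

    detect(image) -> list of (x, y, w, h) face boxes
    classify(image, box, feature_type) -> predicted student ID
    """
    if (feature_type, classifier_type) not in VALID:
        raise ValueError("unsupported feature/classifier combination")
    rows = []
    for (x, y, w, h) in detect(image):
        student_id = classify(image, (x, y, w, h), feature_type)
        rows.append([student_id, x + w // 2, y + h // 2])  # box centre
    if creative_mode:
        pass  # the flower filter would be applied to `image` here
    return rows
```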

[5] Evaluation

The classifier is evaluated by generating a confusion matrix of the predicted labels against the known labels. The confusion chart on the right shows the evaluation on a set of previously unseen images of classmates. 
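The bookkeeping behind the evaluation can be sketched in Python (illustrative names; it mirrors confusionmat's convention of row = true class, column = predicted class, with an explicit label order):

```python
def confusion_matrix(y_true, y_pred, order):
    """Confusion matrix with row = true class, column = predicted class,
    indexed by the label order given (like confusionmat's 'Order' argument)."""
    idx = {label: i for i, label in enumerate(order)}
    k = len(order)
    conf = [[0] * k for _ in range(k)]
    for t, p in zip(y_true, y_pred):
        conf[idx[t]][idx[p]] += 1
    return conf

def accuracy(conf):
    """Fraction of samples on the diagonal (correctly classified)."""
    correct = sum(conf[i][i] for i in range(len(conf)))
    total = sum(sum(row) for row in conf)
    return correct / total

conf = confusion_matrix(["a", "a", "b", "b"], ["a", "b", "b", "b"], ["a", "b"])
print(conf)            # [[1, 1], [0, 2]]
print(accuracy(conf))  # 0.75
```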

Evaluation code (Abridged)
YPred = predict(faceClassifier, testFeatures);
conf = confusionmat(testLabel, YPred, 'Order', personIndex);
accuracy = mean(diag(conf)) / mean([test(:).Count]);
confchart = confusionchart(testLabel, YPred, ...
    'RowSummary', 'row-normalized', 'ColumnSummary', 'column-normalized');
 

FACE FILTER

[6] Image Augmentation

An eye detector is built from the Viola-Jones left-eye and right-eye detectors. To ensure that the same eye is not detected twice on a face, I applied bboxOverlapRatio to discard overlapping detections. To superimpose the flowers onto the detected eyes, I replace the corresponding pixels of the input image with the resized flower image. 

Filter code (Abridged)
for i = 1:size(fbox,1)
    % face: cropped face image for box i (cropping omitted in this listing)
    leftEye  = leftEyeDetector(face);
    rightEye = rightEyeDetector(face);

    % Drop a right-eye box that overlaps a left-eye box, so the same eye
    % is not detected twice.
    [overlapMax, overlapIndex] = max(bboxOverlapRatio(leftEye, rightEye));
    if (overlapMax > 0)
        rightEye(overlapIndex,:) = [];
    elseif (~isempty(rightEye))
        rightEye = rightEye(1,:);
    end

    indivEyes = [leftEye; rightEye];
    pts = [floor(indivEyes(:,1) + indivEyes(:,3)/2), ...
           floor(indivEyes(:,2) + indivEyes(:,4)/2)];       % eye centres
    Fscale = floor((indivEyes(1,3) + indivEyes(2,3)) * .8); % flower height

    for n = 1:size(pts,1)
        F = read(flowerSets, randperm(flowerSets.Count, 1)); % random flower
        F = imresize(F, [Fscale NaN]);
        Fsize = size(F);

        % Centre the flower on each eye, then offset into image coordinates.
        newPts = [pts(:,1) - floor(Fsize(2)/2), pts(:,2) - floor(Fsize(1)/2)];
        newPts(:,1) = newPts(:,1) + fbox(i,1);
        newPts(:,2) = newPts(:,2) + fbox(i,2);

        xx = 0;
        for x = newPts(n,1):newPts(n,1) + Fsize(2) - 1
            xx = xx + 1; yy = 0;
            for y = newPts(n,2):newPts(n,2) + Fsize(1) - 1
                yy = yy + 1;
                % Copy only non-background flower pixels: skip dark,
                % low-saturation pixels belonging to the flower backdrop.
                if (F(yy,xx,1) > 0 && F(yy,xx,2) > 0 && F(yy,xx,3) > 0)
                    if (F(yy,xx,:) < 90)
                        if (max(F(yy,xx,:)) - min(F(yy,xx,:))) > 63
                            img(y,x,:) = F(yy,xx,:);
                        end
                    else
                        img(y,x,:) = F(yy,xx,:);
                    end
                end
            end
        end
    end
end

Sources
https://uk.mathworks.com
https://www.freepik.com/free-photos-vectors/flower