Contact: Adrien Desies
This simple experiment essentially consists of creating "adversarial examples", as in [1], for whole tracking sequences.
The ground-truth target patches of the tracking sequences were extracted and then altered to "fool" a DCNN trained for image classification. That is, the error gradient of a given image with respect to an arbitrary class was back-propagated down to the input image. Applying this step iteratively yields a high prediction confidence for the wrong class while altering the image subtly enough that a human observer is unlikely to notice. This was performed on the VOT 2013 dataset, using MatConvNet and the imagenet-vgg-f pre-trained CNN from [4].
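For illustration, here is a minimal sketch of that iterative, targeted fooling step. It uses PyTorch and a generic pretrained ImageNet classifier as stand-ins for the original MatConvNet/imagenet-vgg-f setup; the model choice, step size, iteration count, and perturbation budget are assumptions, not the settings used in the experiment.

    # Minimal sketch of the iterative "fooling" step (PyTorch, not the
    # original MatConvNet code). Model, step size, iteration count and
    # perturbation budget eps are illustrative assumptions; input
    # normalization is omitted for brevity.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    def fool_patch(image, target_class, steps=50, step_size=1e-2, eps=8/255):
        """Nudge `image` (a 3xHxW tensor in [0,1]) so the classifier assigns
        high confidence to `target_class`, keeping the change subtle."""
        model = models.vgg16(pretrained=True).eval()  # stand-in classifier
        x = image.clone().detach().requires_grad_(True)
        for _ in range(steps):
            logits = model(x.unsqueeze(0))
            # Maximizing confidence in the target class == minimizing its loss
            loss = F.cross_entropy(logits, torch.tensor([target_class]))
            loss.backward()
            with torch.no_grad():
                x -= step_size * x.grad.sign()                     # targeted step
                x.copy_(torch.clamp(x, image - eps, image + eps))  # stay close to original
                x.clamp_(0, 1)                                     # keep valid pixel range
            x.grad = None
        return x.detach()

Running this on every ground-truth patch of a sequence, with a fixed target class per video, reproduces the overall procedure in spirit.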
The results can be viewed on the accompanying YouTube channel.
In the following videos, when the target matched a class learned by the network, each frame was fooled so as to maximize the confidence in that class. When no such class was available, the fooling class was chosen to be something nonsensical, such as "waffle iron" or "banana".
[2] Nguyen, A.; Yosinski, J. & Clune, J. Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[3] Simonyan, K.; Vedaldi, A. & Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. CoRR, abs/1312.6034, 2013.
[4] Chatfield, K.; Simonyan, K.; Vedaldi, A. & Zisserman, A. Return of the Devil in the Details: Delving Deep into Convolutional Nets. CoRR, abs/1405.3531, 2014.