Learning based person re-identification across camera views.


Full description

Bibliographic Details
Other Authors: Li, Wei.
Format: Others
Language: English
Chinese
Published: 2013
Subjects:
Online Access:http://library.cuhk.edu.hk/record=b5549295
http://repository.lib.cuhk.edu.hk/en/item/cuhk-328773
id ndltd-cuhk.edu.hk-oai-cuhk-dr-cuhk_328773
record_format oai_dc
collection NDLTD
language English
Chinese
format Others
sources NDLTD
topic Pattern recognition systems--Mathematics
Computer vision
Video surveillance
spellingShingle Pattern recognition systems--Mathematics
Computer vision
Video surveillance
Learning based person re-identification across camera views.
description The main task of person re-identification is to match pedestrians observed in non-overlapping surveillance camera views. With the spread of surveillance cameras this has become a very important task, and it is also a key subtask of many other problems, such as inter-camera tracking. The difficulty lies in the large variation of the same person observed in different cameras, caused by differences in viewpoint, illumination, pedestrian pose, and so on. In this thesis we rethink and address the problem from the following aspects. === First, we find that matching becomes substantially harder as the candidate set grows. In practical applications, temporal reasoning can shrink the candidate set and thus simplify the re-identification problem. Existing learning-based algorithms generally assume a single, globally fixed metric. Our method stems from the new observation that different candidate sets should have different metrics, so we reformulate the problem under a transfer learning framework: given a large training set, we re-weight its samples according to their similarity to the current query and candidate sets, and propose a weighted maximum-margin metric learning method whose metric is more specific than a generic metric shared across the whole training set. === We further find that the appearance transformation between two camera views is hard to describe with a single model. To address this, we propose a mixture-of-experts model that partitions the image space; our algorithm jointly partitions the space and learns, within each partition, a cross-view transform that aligns the features. At test time a new sample is matched against the existing "expert" models to select a suitable transform, and the model is constrained by a sparsity regularizer and a minimum-information-loss regularizer. === In analyzing the methods above, we find that feature extraction and model training are always carried out separately; a better approach is to perform them jointly, which we do with a convolutional neural network. With a carefully designed architecture, the lower layers describe local image information through two corresponding sets of features, well suited to matching color, texture, and similar cues, while the higher layers learn the spatial displacement of local parts to judge whether those displacements are consistent with how a person appears across different cameras. From this information the model decides whether two images come from the same person. === In all three parts we compare against state-of-the-art metric learning and other feature-based person re-identification methods and obtain strong results on several datasets. We further build a large-scale dataset with more camera views, more identities, and more images of each person in each view. === Person re-identification is to match persons observed in non-overlapping camera views using visual features. This is an important task in video surveillance in its own right and serves as a key subtask for other problems such as inter-camera tracking. Challenges lie in the dramatic intra-person variation introduced by viewpoint change, illumination change, pose variation, and so on. In this thesis, we tackle this problem from the following aspects: === Firstly, we observe that the ambiguity increases with the number of candidates to be distinguished. In real-world scenarios, temporal reasoning is available and can simplify the problem by pruning the candidate set to be matched. Existing approaches adopt a fixed metric for matching all the subjects. Our approach is motivated by the insight that different visual metrics should be optimally learned for different candidate sets. The problem is further formulated under a transfer learning framework.
Given a large training set, the training samples are selected and re-weighted according to their visual similarity to the query sample and its candidate set. A weighted maximum-margin metric is then learned, transferring a generic metric to a candidate-set-specific one. === Secondly, we observe that the transformations between two camera views may be too complex to be uni-modal. To tackle this, we propose to partition the image space and formulate the problem within a mixture-of-experts framework. Our algorithm jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are locally aligned by projecting them to a common feature space and then matching them with softly assigned, locally optimized metrics. The features optimal for recognizing identities differ from those for clustering cross-view transforms; the two are learned jointly using a sparsity-inducing norm and information-theoretic regularization. === In all the above approaches, feature extraction and model learning are designed separately. A better idea is to learn features directly from the training samples and use them to train a discriminative model. We propose a new model in which feature extraction is learned jointly with a discriminative convolutional neural network. Local filters at the bottom layer extract information useful for matching persons across camera views, such as color and texture. Higher layers capture the spatial shift of those local patches. Finally, we test whether the shift patterns of those local patches conform to the cross-camera variation of the same person. === In all three parts, comparisons with state-of-the-art metric learning algorithms and person re-identification methods are carried out, and our approach shows superior performance on public benchmark datasets.
Furthermore, we build a much larger dataset that addresses real-world scenarios, containing many more camera views, identities, and images per view. === Detailed summary in vernacular field only. === Detailed summary in vernacular field only. === Detailed summary in vernacular field only. === Detailed summary in vernacular field only. === Detailed summary in vernacular field only. === Li, Wei. === Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. === Includes bibliographical references (leaves 63-68). === Abstracts also in Chinese. === Acknowledgments --- p.iii === Abstract --- p.vii === Contents --- p.xii === List of Figures --- p.xiv === List of Tables --- p.xv === Chapter 1 --- Introduction --- p.1 === Chapter 1.1 --- Person Re-Identification --- p.1 === Chapter 1.2 --- Challenge in Person Re-Identification --- p.2 === Chapter 1.3 --- Literature Review --- p.4 === Chapter 1.3.1 --- Feature Based Person Re-Identification --- p.4 === Chapter 1.3.2 --- Learning Based Person Re-Identification --- p.7 === Chapter 1.4 --- Thesis Organization --- p.8 === Chapter 2 --- Transferred Metric Learning for Person Re-Identification --- p.10 === Chapter 2.1 --- Introduction --- p.10 === Chapter 2.2 --- Related Work --- p.12 === Chapter 2.2.1 --- Transfer Learning --- p.12 === Chapter 2.3 --- Our Method --- p.13 === Chapter 2.3.1 --- Visual Features --- p.13 === Chapter 2.3.2 --- Searching and Weighting Training Samples --- p.13 === Chapter 2.3.3 --- Learning Adaptive Metrics by Maximizing Weighted Margins --- p.15 === Chapter 2.4 --- Experimental Results --- p.17 === Chapter 2.4.1 --- Dataset Description --- p.17 === Chapter 2.4.2 --- Generic Metric Learning --- p.18 === Chapter 2.4.3 --- Transferred Metric Learning --- p.19 === Chapter 2.5 --- Conclusions and Discussions --- p.21 === Chapter 3 --- Locally Aligned Feature Transforms for Person Re-Identification --- p.23 === Chapter 3.1 --- Introduction --- p.23 === Chapter 3.2 --- Related Work --- p.24 ===
Chapter 3.2.1 --- Localized Methods --- p.25 === Chapter 3.3 --- Model --- p.26 === Chapter 3.4 --- Learning --- p.27 === Chapter 3.4.1 --- Priors --- p.27 === Chapter 3.4.2 --- Objective Function --- p.29 === Chapter 3.4.3 --- Training Model --- p.29 === Chapter 3.4.4 --- Multi-Shot Extension --- p.30 === Chapter 3.4.5 --- Discriminative Metric Learning --- p.31 === Chapter 3.5 --- Experiment --- p.32 === Chapter 3.5.1 --- Identification with Two Fixed Camera Views --- p.33 === Chapter 3.5.2 --- More General Camera Settings --- p.37 === Chapter 3.6 --- Conclusions --- p.38 === Chapter 4 --- Deep Neural Network for Person Re-identification --- p.39 === Chapter 4.1 --- Introduction --- p.39 === Chapter 4.2 --- Related Work --- p.43 === Chapter 4.3 --- Introduction of the New Dataset --- p.44 === Chapter 4.4 --- Model --- p.46 === Chapter 4.4.1 --- Architecture Overview --- p.46 === Chapter 4.4.2 --- Convolutional and Max-Pooling Layer --- p.48 === Chapter 4.4.3 --- Patch Matching Layer --- p.49 === Chapter 4.4.4 --- Maxout Grouping Layer --- p.52 === Chapter 4.4.5 --- Part Displacement --- p.52 === Chapter 4.4.6 --- Softmax Layer --- p.53 === Chapter 4.5 --- Training Strategies --- p.54 === Chapter 4.5.1 --- Data Augmentation and Balancing --- p.55 === Chapter 4.5.2 --- Bootstrapping --- p.55 === Chapter 4.6 --- Experiment --- p.56 === Chapter 4.6.1 --- Model Specification --- p.56 === Chapter 4.6.2 --- Validation on Single Pair of Cameras --- p.57 === Chapter 4.7 --- Conclusion --- p.58 === Chapter 5 --- Conclusion --- p.60 === Chapter 5.1 --- Conclusion --- p.60 === Chapter 5.2 --- Future Work --- p.61 === Bibliography --- p.63
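The transferred metric learning described in the abstract can be sketched roughly as follows. This is an illustrative NumPy sketch under assumed details (Gaussian re-weighting of training samples, a diagonal Mahalanobis metric, a unit margin); the names `transfer_weights` and `weighted_margin_metric` are hypothetical and not from the thesis itself.

```python
import numpy as np

def transfer_weights(train_feats, query_feats, sigma=1.0):
    """Re-weight training samples by their visual similarity to the
    query/candidate set (assumed Gaussian-kernel weighting)."""
    # distance from each training sample to its nearest query sample
    d = np.linalg.norm(
        train_feats[:, None, :] - query_feats[None, :, :], axis=2).min(axis=1)
    w = np.exp(-d ** 2 / (2 * sigma ** 2))
    return w / w.sum()

def weighted_margin_metric(X1, X2, same, weights, lr=0.01, iters=200):
    """Learn a diagonal Mahalanobis metric so that weighted positive
    pairs fall inside a unit margin and negative pairs outside it
    (a simplified stand-in for the weighted max-margin objective)."""
    m = np.ones(X1.shape[1])              # diagonal entries of the metric
    diff2 = (X1 - X2) ** 2                # per-pair squared feature differences
    sign = np.where(same, 1.0, -1.0)      # positives pull in, negatives push out
    for _ in range(iters):
        dist = diff2 @ m                  # Mahalanobis distance per pair
        active = np.where(same, dist > 1.0, dist < 1.0)  # margin violations
        grad = ((weights * active * sign)[:, None] * diff2).sum(axis=0)
        m = np.maximum(m - lr * grad, 1e-6)  # keep the metric positive
    return m
```

With such a scheme, the weights let samples that resemble the current candidate set dominate the objective, yielding a candidate-set-specific metric rather than one shared by the whole training set.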
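The softly assigned metrics of the mixture-of-experts part can likewise be illustrated. This sketch assumes softmax-style gating by distance to per-expert cluster centers and linear expert projections into a common space; `soft_expert_match` and its gating rule are illustrative assumptions, not the thesis' exact formulation.

```python
import numpy as np

def soft_expert_match(x_a, x_b, centers, projections, temp=1.0):
    """Score a cross-view image pair with a soft mixture of locally
    optimized experts: gate by distance to each expert's cluster
    center, project both views to a common space, and mix the
    per-expert distances (illustrative, assumed gating rule)."""
    d = np.linalg.norm(centers - x_a, axis=1)  # distance to each expert's region
    gate = np.exp(-d / temp)
    gate /= gate.sum()                         # soft assignment over experts
    scores = np.array([np.linalg.norm(P @ x_a - P @ x_b) for P in projections])
    return float(gate @ scores)                # smaller score = better match
```

The soft gating means a test pair near the boundary of two partitions blends both experts' transforms instead of committing to one.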
author2 Li, Wei.
author_facet Li, Wei.
title Learning based person re-identification across camera views.
title_short Learning based person re-identification across camera views.
title_full Learning based person re-identification across camera views.
title_fullStr Learning based person re-identification across camera views.
title_full_unstemmed Learning based person re-identification across camera views.
title_sort learning based person re-identification across camera views.
publishDate 2013
url http://library.cuhk.edu.hk/record=b5549295
http://repository.lib.cuhk.edu.hk/en/item/cuhk-328773
_version_ 1718977440758366208
spelling ndltd-cuhk.edu.hk-oai-cuhk-dr-cuhk_328773 2019-02-19T03:33:43Z Learning based person re-identification across camera views. Pattern recognition systems--Mathematics Computer vision Video surveillance Li, Wei. Chinese University of Hong Kong Graduate School. Division of Electronic Engineering. 2013 Text bibliography electronic resource electronic resource remote 1 online resource (xv, 68 leaves) : ill. (some col.)
cuhk:328773 http://library.cuhk.edu.hk/record=b5549295 eng chi Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License (http://creativecommons.org/licenses/by-nc-nd/4.0/)
http://repository.lib.cuhk.edu.hk/en/islandora/object/cuhk%3A328773/datastream/TN/view/Learning%20based%20person%20re-identication%20across%20camera%20views.jpg
http://repository.lib.cuhk.edu.hk/en/item/cuhk-328773