DSpace Repository

Error Level Fusion of Multimodal Biometrics


dc.contributor.author Grover, Jyotsana
dc.date.accessioned 2023-01-23T10:30:07Z
dc.date.available 2023-01-23T10:30:07Z
dc.date.issued 2011
dc.identifier.uri http://www.jprr.org/index.php/jprr/article/view/314
dc.identifier.uri http://dspace.bits-pilani.ac.in:8080/xmlui/handle/123456789/8670
dc.description.abstract This paper presents a multimodal biometric system based on error level fusion. Two error level fusion strategies, one based on the Choquet integral and the other on t-norms, are proposed. The first strategy fully exploits the non-additive aspect of the integral, which accounts for the dependence, or overlapping information, between the error rates (FARs and FRRs) of the biometric modalities under consideration. A hybrid learning algorithm combining Particle Swarm Optimization, Bacterial Foraging, and Reinforcement Learning is developed to learn the fuzzy densities and the interaction factor. The second strategy employs t-norms and requires no learning. Fusing the error rates with t-norms is not only fast but also yields very good performance. This is a form of decision level fusion, since the error rates are derived from the decisions made on the individual modalities. Experimental evaluation on two hand-based datasets and two publicly available datasets confirms the utility of error level fusion. en_US
dc.language.iso en en_US
dc.publisher JPRR en_US
dc.subject Computer Science en_US
dc.subject Multimodal biometrics en_US
dc.title Error Level Fusion of Multimodal Biometrics en_US
dc.type Article en_US
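
The abstract describes two fusion strategies: combining per-modality error rates with t-norms, and aggregating modality outputs with a Choquet integral over a fuzzy measure whose densities and interaction factor are learned. The Python sketch below is only a generic illustration of those two building blocks; the specific t-norms used in the paper, the learned fuzzy densities, and the hybrid PSO / Bacterial Foraging / Reinforcement Learning training procedure are not reproduced, and the example rates, densities, and scores are hypothetical.

```python
# Generic sketch of the two aggregation operators named in the abstract.
# The particular t-norms, fuzzy densities, and training scheme of the paper
# are NOT reproduced here; textbook definitions are used as stand-ins.

def t_norm_product(a, b):
    """Product t-norm T(a, b) = a * b (a standard example, not necessarily the paper's choice)."""
    return a * b

def t_norm_hamacher(a, b):
    """Hamacher product t-norm, another standard example."""
    if a == 0.0 and b == 0.0:
        return 0.0
    return (a * b) / (a + b - a * b)

def fuse_error_rates(rates, t_norm=t_norm_product):
    """Fold a list of per-modality error rates (e.g. FARs) with a t-norm."""
    fused = rates[0]
    for r in rates[1:]:
        fused = t_norm(fused, r)
    return fused

def two_modality_lambda_measure(g1, g2):
    """Sugeno lambda-measure for two modalities with fuzzy densities g1, g2.
    The interaction parameter lambda follows from g({m1, m2}) = 1, i.e.
    1 = g1 + g2 + lambda * g1 * g2."""
    lam = (1.0 - g1 - g2) / (g1 * g2)
    table = {
        frozenset(): 0.0,
        frozenset({"m1"}): g1,
        frozenset({"m2"}): g2,
        frozenset({"m1", "m2"}): g1 + g2 + lam * g1 * g2,  # equals 1.0
    }
    return lambda subset: table[frozenset(subset)]

def choquet(values, measure):
    """Discrete Choquet integral of `values` (dict: modality -> value in [0, 1])
    with respect to `measure` (a callable on a set of modalities).
    Standard form: sum_i (v_(i) - v_(i+1)) * g(top-i set), with v_(n+1) = 0."""
    order = sorted(values, key=values.get, reverse=True)
    sorted_vals = [values[m] for m in order] + [0.0]
    top_set, result = set(), 0.0
    for i, m in enumerate(order):
        top_set.add(m)
        result += (sorted_vals[i] - sorted_vals[i + 1]) * measure(top_set)
    return result

if __name__ == "__main__":
    # Hypothetical per-modality false-accept rates at one operating point.
    print("t-norm (product) fused rate:", fuse_error_rates([0.04, 0.02]))
    # Hypothetical fuzzy densities and per-modality values for the Choquet fusion.
    measure = two_modality_lambda_measure(0.6, 0.3)
    print("Choquet aggregate:", choquet({"m1": 0.8, "m2": 0.5}, measure))
```

In this kind of lambda-measure, the sign of lambda encodes whether the two modalities are treated as redundant (lambda < 0, subadditive measure) or complementary (lambda > 0, superadditive), which is how a non-additive integral can capture the dependence or overlapping information between modalities that the abstract refers to.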


Files in this item


There are no files associated with this item.

