Library Catalog

000 nam5i
001 2210080934212
003 DE-He213
005 20250321105346
007 cr nn 008mamaa
008 240529s2024 sz | s |||| 0|eng d
020 $a 9783031573897 $9 978-3-031-57389-7
024 $a 10.1007/978-3-031-57389-7 $2 doi
040 $a 221008
050 $a TK5105.5-5105.9
072 $a UKN $2 bicssc
072 $a COM043000 $2 bisacsh
072 $a UKN $2 thema
082 $a 004.6 $2 23
100 $a Li, Shaofeng. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut
245 00 $a Backdoor Attacks against Learning-Based Algorithms $h [electronic resource] / $c by Shaofeng Li, Haojin Zhu, Wen Wu, Xuemin (Sherman) Shen.
250 $a 1st ed. 2024.
264 $a Cham : $b Springer Nature Switzerland : $b Imprint: Springer, $c 2024.
300 $a XI, 153 p. 58 illus., 56 illus. in color. $b online resource.
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
347 $a text file $b PDF $2 rda
490 $a Wireless Networks, $x 2366-1445
505 $a Introduction -- Literature Review of Backdoor Attacks -- Invisible Backdoor Attacks in Image Classification Based Network Services -- Hidden Backdoor Attacks in NLP Based Network Services -- Backdoor Attacks and Defense in FL -- Summary and Future Directions.
520 $a This book introduces a new type of data poisoning attack, dubbed the backdoor attack. In a backdoor attack, an attacker trains the model with poisoned data to obtain a model that performs well on normal inputs but misbehaves on inputs containing crafted triggers. Backdoor attacks can occur in many scenarios where the training process is not fully controlled, such as training on third-party datasets, training on third-party platforms, or directly calling models provided by third parties. Because of the enormous threat that backdoor attacks pose to model supply chain security, they have received widespread attention from academia and industry. This book focuses on backdoor attacks in three types of DNN applications: image classification, natural language processing, and federated learning. Based on the observation that DNN models are vulnerable to small perturbations, this book demonstrates that steganography and regularization can be adopted to enhance the invisibility of backdoor triggers. Based on image similarity measurement, this book presents two metrics to quantitatively measure the invisibility of backdoor triggers. The invisible trigger design scheme introduced in this book balances the invisibility and the effectiveness of backdoor attacks. In the natural language processing domain, it is difficult to design and insert a general backdoor in a manner imperceptible to humans; any corruption of the textual data (e.g., misspelled words or randomly inserted trigger words or sentences) must remain context-aware and readable to human inspectors. This book introduces two novel hidden backdoor attacks, which differ in whether the targeted NLP platform accepts raw Unicode characters, against three major natural language processing tasks: toxic comment detection, neural machine translation, and question answering. The emerging distributed training framework of federated learning has advantages in preserving users' privacy and has been widely used in electronic medical applications; however, it also faces threats derived from backdoor attacks. This book presents a novel backdoor detection framework for FL-based e-Health systems. We hope this book provides insight into backdoor attacks on different types of learning-based algorithms, including computer vision, natural language processing, and federated learning. The systematic principles in this book also offer valuable guidance on defending future learning-based algorithms against backdoor attacks. (Illustrative code sketches of these techniques follow this record.)
650 $a Computer networks.
650 $a Wireless communication systems.
650 $a Mobile communication systems.
650 $a Machine learning.
650 $a Computer Communication Networks.
650 $a Wireless and Mobile Communication.
650 $a Machine Learning.
700 $a Zhu, Haojin. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut
700 $a Wu, Wen. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut
700 $a Shen, Xuemin (Sherman). $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut
710 $a SpringerLink (Online service)
773 $t Springer Nature eBook
776 $i Printed edition: $z 9783031573880
776 $i Printed edition: $z 9783031573903
776 $i Printed edition: $z 9783031573910
830 $a Wireless Networks, $x 2366-1445
856 $u https://doi.org/10.1007/978-3-031-57389-7
912 $a ZDB-2-SCS
912 $a ZDB-2-SXCS
950 $a Computer Science (SpringerNature-11645)
950 $a Computer Science (R0) (SpringerNature-43710)
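
The abstract (field 520) above outlines several technique families. The sketches below are illustrative only: each shows the general shape of a technique under stated assumptions, not the authors' actual constructions. All are in Python with NumPy, and every function name and parameter is hypothetical. The first is a minimal sketch of classic backdoor poisoning as the abstract describes it: stamp a small trigger on a fraction of training images and relabel them to an attacker-chosen target class, so the trained model behaves normally on clean inputs but misbehaves on triggered ones.

```python
# Minimal backdoor-poisoning sketch (assumed BadNets-style patch trigger;
# not the book's specific scheme).
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """images: (N, H, W) floats in [0, 1]; labels: (N,) ints."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0      # trigger: 3x3 white patch, bottom-right
    labels[idx] = target_class      # relabel to the attacker's target class
    return images, labels, idx

# Tiny demo on random stand-in "images".
x = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
px, py, idx = poison_dataset(x, y)
print(f"poisoned {len(idx)} of {len(x)} samples; all relabeled to class 0")
```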
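The abstract says the book defines two image-similarity metrics for trigger invisibility but does not name them in this record. As a hedged stand-in, the sketch below uses two common image-similarity measures, PSNR and per-pixel L-infinity distance, to quantify how visible a trigger is.

```python
# Assumed invisibility metrics (PSNR and L-infinity); the book's own two
# metrics are not identified in this catalog record.
import numpy as np

def psnr(clean, triggered, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a less visible change."""
    mse = np.mean((clean - triggered) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

def linf(clean, triggered):
    """Largest per-pixel perturbation; smaller means a subtler trigger."""
    return np.abs(clean - triggered).max()

clean = np.random.default_rng(0).random((28, 28))
triggered = clean.copy()
triggered[-3:, -3:] = np.clip(triggered[-3:, -3:] + 0.05, 0, 1)  # faint patch
print(f"PSNR: {psnr(clean, triggered):.1f} dB, L-inf: {linf(clean, triggered):.3f}")
```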
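For the NLP attacks, the abstract distinguishes platforms that do and do not accept raw Unicode characters. One plausible instance of a raw-Unicode hidden trigger, sketched below as an assumption rather than the book's exact construction, swaps Latin letters for visually identical Cyrillic homoglyphs: the text looks unchanged to a human inspector, but the code points differ, so a model can key on them.

```python
# Homoglyph-substitution sketch for a hidden NLP trigger (assumed technique).
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}

def insert_homoglyph_trigger(text: str) -> str:
    """Replace susceptible Latin characters with Cyrillic look-alikes."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

clean = "please review this comment"
triggered = insert_homoglyph_trigger(clean)
print(triggered)                      # renders identically to the clean text
print(clean == triggered)             # False: the underlying code points differ
print([hex(ord(ch)) for ch in triggered[:6]])
```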
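The record does not detail the book's FL backdoor-detection framework for e-Health systems. As a hedged sketch of the general idea, a common baseline screens client updates whose L2 norm deviates sharply from the median across clients, since backdoored updates are often scaled up to survive federated averaging.

```python
# Norm-based screening of federated-learning client updates (assumed
# baseline, not the book's framework).
import numpy as np

def flag_suspect_updates(updates, threshold=3.5):
    """updates: list of 1-D client update vectors. Returns suspect indices."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12   # robust spread estimate
    scores = np.abs(norms - med) / mad             # MAD-normalized deviation
    return [i for i, s in enumerate(scores) if s > threshold]

rng = np.random.default_rng(0)
updates = [rng.normal(0, 1, 1000) for _ in range(9)]
updates.append(rng.normal(0, 1, 1000) * 10)        # scaled, backdoor-like update
print("suspect clients:", flag_suspect_updates(updates))  # flags the scaled one
```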
Material type
E-book
Title
Backdoor Attacks against Learning-Based Algorithms[electronic resource] /by Shaofeng Li, Haojin Zhu, Wen Wu, Xuemin (Sherman) Shen
Author's Name
Li, Shaofeng; Zhu, Haojin; Wu, Wen; Shen, Xuemin (Sherman)
Edition
1st ed. 2024.
Physical Description
XI, 153 p. 58 illus., 56 illus. in color. online resource.
Related URL
https://doi.org/10.1007/978-3-031-57389-7
Holdings Information

No holdings information exists for this electronic resource.
