| Tag | Indicators | Content |
|---|---|---|
| 000 | | 03085naaaa2200649uu 4500 |
| 001 | | https://directory.doabooks.org/handle/20.500.12854/76429 |
| 005 | | 20220219212126.0 |
| 020 | | _abooks978-3-0365-0803-0 |
| 020 | | _a9783036508023 |
| 020 | | _a9783036508030 |
| 024 | 7 | _a10.3390/books978-3-0365-0803-0 _cdoi |
| 041 | 0 | _aEnglish |
| 042 | | _adc |
| 072 | 7 | _aKNTX _2bicssc |
| 100 | 1 | _aGeiger, Bernhard _4edt |
| 700 | 1 | _aKubin, Gernot _4edt |
| 700 | 1 | _aGeiger, Bernhard _4oth |
| 700 | 1 | _aKubin, Gernot _4oth |
| 245 | 1 0 | _aInformation Bottleneck : Theory and Applications in Deep Learning |
| 260 | | _aBasel, Switzerland _bMDPI - Multidisciplinary Digital Publishing Institute _c2021 |
| 300 | | _a1 electronic resource (274 p.) |
| 506 | 0 | _aOpen Access _2star _fUnrestricted online access |
| 520 | | _aThe celebrated information bottleneck (IB) principle of Tishby et al. has recently enjoyed renewed attention due to its application in the area of deep learning. This collection investigates the IB principle in this new context. The individual chapters in this collection: • provide novel insights into the functional properties of the IB; • discuss the IB principle (and its derivatives) as an objective for training multi-layer machine learning structures such as neural networks and decision trees; and • offer a new perspective on neural network learning via the lens of the IB framework. Our collection thus contributes to a better understanding of the IB principle specifically for deep learning and, more generally, of information-theoretic cost functions in machine learning. This paves the way toward explainable artificial intelligence. |
| 540 | | _aCreative Commons _fhttps://creativecommons.org/licenses/by/4.0/ _2cc _4https://creativecommons.org/licenses/by/4.0/ |
| 546 | | _aEnglish |
| 650 | 7 | _aInformation technology industries _2bicssc |
| 653 | | _ainformation theory |
| 653 | | _avariational inference |
| 653 | | _amachine learning |
| 653 | | _alearnability |
| 653 | | _ainformation bottleneck |
| 653 | | _arepresentation learning |
| 653 | | _aconspicuous subset |
| 653 | | _astochastic neural networks |
| 653 | | _amutual information |
| 653 | | _aneural networks |
| 653 | | _ainformation |
| 653 | | _abottleneck |
| 653 | | _acompression |
| 653 | | _aclassification |
| 653 | | _aoptimization |
| 653 | | _aclassifier |
| 653 | | _adecision tree |
| 653 | | _aensemble |
| 653 | | _adeep neural networks |
| 653 | | _aregularization methods |
| 653 | | _ainformation bottleneck principle |
| 653 | | _adeep networks |
| 653 | | _asemi-supervised classification |
| 653 | | _alatent space representation |
| 653 | | _ahand crafted priors |
| 653 | | _alearnable priors |
| 653 | | _aregularization |
| 653 | | _adeep learning |
| 856 | 4 0 | _awww.oapen.org _uhttps://mdpi.com/books/pdfview/book/3864 _70 _zDOAB: download the publication |
| 856 | 4 0 | _awww.oapen.org _uhttps://directory.doabooks.org/handle/20.500.12854/76429 _70 _zDOAB: description of the publication |
| 999 | | _c44560 _d44560 |
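
The subfield strings in the Content column above follow a flattened MARC convention: each subfield begins with an underscore followed by a single-character code (e.g. `_aGeiger, Bernhard _4edt` in field 100). The sketch below shows one way such strings could be split back into (code, value) pairs. It is a minimal illustration, assuming that convention holds throughout the record; `parse_subfields` is a hypothetical helper, not part of any DOAB or MARC tooling, and a real workflow would more likely use a dedicated library such as pymarc on the raw record.

```python
import re

def parse_subfields(content: str) -> list[tuple[str, str]]:
    """Split a flattened MARC field string such as
    '_aGeiger, Bernhard _4edt' into (subfield code, value) pairs.

    Assumes every subfield is introduced by '_' plus a single
    character, as rendered in the table above.
    """
    pairs = []
    # Split on whitespace that immediately precedes the next "_x" marker.
    for chunk in re.split(r"\s+(?=_.)", content.strip()):
        code, value = chunk[1], chunk[2:].strip()
        pairs.append((code, value))
    return pairs

if __name__ == "__main__":
    # Field 100 and field 856 taken from the record above.
    print(parse_subfields("_aGeiger, Bernhard _4edt"))
    # [('a', 'Geiger, Bernhard'), ('4', 'edt')]
    print(parse_subfields(
        "_awww.oapen.org _uhttps://mdpi.com/books/pdfview/book/3864 "
        "_70 _zDOAB: download the publication"
    ))
    # [('a', 'www.oapen.org'), ('u', 'https://mdpi.com/books/pdfview/book/3864'),
    #  ('7', '0'), ('z', 'DOAB: download the publication')]
```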