Pr (#224)
* add DOI and add to NF category

* Fix merge conflicts

---------

Co-authored-by: Francesco Vaselli <[email protected]>
ramonpeter and francesco-vaselli authored Oct 29, 2024
1 parent 7d81092 commit e41076a
Showing 5 changed files with 10 additions and 3 deletions.
7 changes: 6 additions & 1 deletion HEPML.bib
@@ -2215,7 +2215,12 @@ @article{Vaselli:2024vrx
archivePrefix = "arXiv",
primaryClass = "hep-ex",
month = "2",
year = "2024"
year = "2024",
doi = "10.1088/2632-2153/ad563c",
journal = "Mach. Learn.: Sci. Technol.",
volume = "5",
number = "3",
pages = "035007"
}

% February 22, 2024
2 changes: 1 addition & 1 deletion HEPML.tex
@@ -169,7 +169,7 @@
\\\textit{Generative Adversarial Networks~\cite{Goodfellow:2014upx} learn $p(x)$ implicitly through the minimax optimization of two networks: one that maps noise to structure $G(z)$ and a classifier (called the discriminator) that learns to distinguish between examples generated from $G(z)$ and those generated from the target process. When the discriminator is maximally `confused', the generator is effectively mimicking $p(x)$.}
\item \textbf{(Variational) Autoencoders}~\cite{Monk:2018zsb,ATL-SOFT-PUB-2018-001,Cheng:2020dal,1816035,Howard:2021pos,Buhmann:2021lxj,Bortolato:2021zic,deja2020endtoend,Hariri:2021clz,Fanelli:2019qaq,Collins:2021pld,Orzari:2021suh,Jawahar:2021vyu,Tsan:2021brw,Buhmann:2021caf,Touranakou:2022qrp,Ilten:2022jfm,Collins:2022qpr,AbhishekAbhishek:2022wby,Cresswell:2022tof,Roche:2023int,Anzalone:2023ugq,Lasseri:2023dhi,Chekanov:2023uot,Zhang:2023khv,Hoque:2023zjt,Kuh:2024lgx,Liu:2024kvv}
\\\textit{An autoencoder consists of two functions: one that maps $x$ into a latent space $z$ (encoder) and a second one that maps the latent space back into the original space (decoder). The encoder and decoder are simultaneously trained so that their composition is nearly the identity. When the latent space has a well-defined probability density (as in variational autoencoders), then one can sample from the autoencoder by applying the decoder to a randomly chosen element of the latent space.}
- \item \textbf{(Continuous) Normalizing flows}~\cite{Albergo:2019eim,1800956,Kanwar:2003.06413,Brehmer:2020vwc,Bothmann:2020ywa,Gao:2020zvv,Gao:2020vdv,Nachman:2020lpy,Choi:2020bnf,Lu:2020npg,Bieringer:2020tnw,Hollingsworth:2021sii,Winterhalder:2021ave,Krause:2021ilc,Hackett:2021idh,Menary:2021tjg,Hallin:2021wme,NEURIPS2020_a878dbeb,Vandegar:2020yvw,Jawahar:2021vyu,Bister:2021arb,Krause:2021wez,Butter:2021csz,Winterhalder:2021ngy,Butter:2022lkf,Verheyen:2022tov,Leigh:2022lpn,Chen:2022ytr,Albandea:2022fky,Krause:2022jna,Cresswell:2022tof,Kach:2022qnf,Kach:2022uzq,Dolan:2022ikg,Backes:2022vmn,Heimel:2022wyj,Albandea:2023wgd,Rousselot:2023pcj,Diefenbacher:2023vsw,Nicoli:2023qsl,R:2023dcr,Nachman:2023clf,Raine:2023fko,Golling:2023yjq,Wen:2023oju,Xu:2023xdc,Singha:2023xxq,Buckley:2023rez,Pang:2023wfx,Golling:2023mqx,Reyes-Gonzalez:2023oei,Bickendorf:2023nej,Finke:2023ltw,Bright-Thonney:2023sqf,Albandea:2023ais,Pham:2023bnl,Gavranovic:2023oam,Heimel:2023ngj,Bierlich:2023zzd,ElBaz:2023ijr,Ernst:2023qvn,Krause:2023uww,Kanwar:2024ujc,Deutschmann:2024lml,Kelleher:2024rmb,Kelleher:2024jsh,Schnake:2024mip,Daumann:2024kfd,Abbott:2024knk,Bai:2024pii,Du:2024gbp,Favaro:2024rle,Buss:2024orz,Dreyer:2024bhs,Quetant:2024ftg}
+ \item \textbf{(Continuous) Normalizing flows}~\cite{Albergo:2019eim,1800956,Kanwar:2003.06413,Brehmer:2020vwc,Bothmann:2020ywa,Gao:2020zvv,Gao:2020vdv,Nachman:2020lpy,Choi:2020bnf,Lu:2020npg,Bieringer:2020tnw,Hollingsworth:2021sii,Winterhalder:2021ave,Krause:2021ilc,Hackett:2021idh,Menary:2021tjg,Hallin:2021wme,NEURIPS2020_a878dbeb,Vandegar:2020yvw,Jawahar:2021vyu,Bister:2021arb,Krause:2021wez,Butter:2021csz,Winterhalder:2021ngy,Butter:2022lkf,Verheyen:2022tov,Leigh:2022lpn,Chen:2022ytr,Albandea:2022fky,Krause:2022jna,Cresswell:2022tof,Kach:2022qnf,Kach:2022uzq,Dolan:2022ikg,Backes:2022vmn,Heimel:2022wyj,Albandea:2023wgd,Rousselot:2023pcj,Diefenbacher:2023vsw,Nicoli:2023qsl,R:2023dcr,Nachman:2023clf,Raine:2023fko,Golling:2023yjq,Wen:2023oju,Xu:2023xdc,Singha:2023xxq,Buckley:2023rez,Pang:2023wfx,Golling:2023mqx,Reyes-Gonzalez:2023oei,Bickendorf:2023nej,Finke:2023ltw,Bright-Thonney:2023sqf,Albandea:2023ais,Pham:2023bnl,Gavranovic:2023oam,Heimel:2023ngj,Bierlich:2023zzd,ElBaz:2023ijr,Ernst:2023qvn,Krause:2023uww,Kanwar:2024ujc,Deutschmann:2024lml,Kelleher:2024rmb,Vaselli:2024vrx,Kelleher:2024jsh,Schnake:2024mip,Daumann:2024kfd,Abbott:2024knk,Bai:2024pii,Du:2024gbp,Favaro:2024rle,Buss:2024orz,Dreyer:2024bhs,Quetant:2024ftg}
\\\textit{Normalizing flows~\cite{pmlr-v37-rezende15} learn $p(x)$ explicitly by starting with a simple probability density and then applying a series of bijective transformations with tractable Jacobians.}
\item \textbf{Diffusion Models}~\cite{Mikuni:2022xry,Leigh:2023toe,Mikuni:2023dvk,Shmakov:2023kjj,Buhmann:2023bwk,Butter:2023fov,Mikuni:2023tok,Acosta:2023zik,Leigh:2023zle,Imani:2023blb,Amram:2023onf,Diefenbacher:2023flw,Cotler:2023lem,Diefenbacher:2023wec,Mikuni:2023tqg,Hunt-Smith:2023ccp,Buhmann:2023kdg,Buhmann:2023zgc,Buhmann:2023acn,Devlin:2023jzp,Heimel:2023ngj,Wang:2023sry,Butter:2023ira,Sengupta:2023vtm,Jiang:2024ohg,Kobylianskii:2024ijw,Vaselli:2024vrx,Jiang:2024bwr,Kobylianskii:2024sup,Favaro:2024rle,Kita:2024nnw,Quetant:2024ftg,Wojnar:2024cbn}
\\\textit{These approaches learn the gradient of the density instead of the density directly.}
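The four generative-model descriptions in this hunk are the technical core of the category being edited, so brief sketches of each follow. All of them are illustrative toys written for this note, assuming PyTorch; none of the network names, sizes, or hyperparameters come from the review or from the commit. First, the GAN minimax game, with hypothetical toy networks `G` and `D` and an assumed 8-dimensional noise vector:

```python
import torch
import torch.nn as nn

# Toy generator (noise -> 2-dim "events") and discriminator (events -> real/fake logit).
G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(x_real):
    n = x_real.size(0)
    # Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
    x_fake = G(torch.randn(n, 8)).detach()
    d_loss = bce(D(x_real), torch.ones(n, 1)) + bce(D(x_fake), torch.zeros(n, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator step: fool the discriminator, i.e. push D(G(z)) toward 1.
    g_loss = bce(D(G(torch.randn(n, 8))), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

At convergence the discriminator can no longer separate real from generated samples, which is the "maximally confused" point the description refers to.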
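For the (variational) autoencoder item, a comparable sketch of the encoder/decoder pair and of sampling by decoding a randomly chosen latent vector; the class name `ToyVAE` and all layer sizes are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    """Minimal Gaussian-latent VAE for 2-dim data with a 2-dim latent space."""
    def __init__(self, d_x=2, d_z=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_x, 64), nn.ReLU(), nn.Linear(64, 2 * d_z))
        self.dec = nn.Sequential(nn.Linear(d_z, 64), nn.ReLU(), nn.Linear(64, d_x))
        self.d_z = d_z

    def forward(self, x):
        # Encoder outputs the mean and log-variance of q(z|x).
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        x_hat = self.dec(z)
        # Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
        recon = ((x - x_hat) ** 2).sum(dim=-1)
        kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(dim=-1)
        return (recon + kl).mean()

    def sample(self, n):
        # Sampling: decode a random element of the latent prior.
        return self.dec(torch.randn(n, self.d_z))
```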
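The normalizing-flow line modified in this commit describes the change-of-variables construction, log p(x) = log p(z) + log|det dz/dx| with a tractable Jacobian. A single learnable affine bijection is enough to show the mechanics; a real flow stacks many such layers (e.g. coupling layers), and nothing here reflects the setup of Vaselli:2024vrx or any other cited paper:

```python
import math
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """One affine bijection z = (x - b) * exp(-s); a stand-in for a deeper flow."""
    def __init__(self, dim=2):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(dim))
        self.b = nn.Parameter(torch.zeros(dim))

    def log_prob(self, x):
        z = (x - self.b) * torch.exp(-self.s)
        log_base = -0.5 * (z ** 2).sum(-1) - 0.5 * z.size(-1) * math.log(2 * math.pi)
        log_det = -self.s.sum()        # tractable Jacobian of the affine map
        return log_base + log_det      # change of variables

    def sample(self, n):
        z = torch.randn(n, self.s.numel())
        return z * torch.exp(self.s) + self.b   # invert the bijection

# Fitting maximizes the explicit density on data: loss = -flow.log_prob(x_batch).mean()
```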
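Finally, the diffusion line ("learn the gradient of the density instead of the density directly") corresponds to score matching. A sketch of denoising score matching at a single, fixed noise level, where `score_net` and the value of `sigma` are arbitrary choices for illustration; flow matching, used by the paper added in this commit, instead regresses a velocity field with a related objective and is not shown here:

```python
import torch
import torch.nn as nn

# Score network: takes a noisy 2-dim sample plus the noise level, returns an estimated score.
score_net = nn.Sequential(nn.Linear(2 + 1, 64), nn.ReLU(), nn.Linear(64, 2))

def denoising_score_matching_loss(x0, sigma=0.5):
    noise = torch.randn_like(x0)
    x_t = x0 + sigma * noise                                    # perturb the data
    inp = torch.cat([x_t, torch.full((x0.size(0), 1), sigma)], dim=-1)
    target = -noise / sigma                                     # score of the Gaussian kernel
    return ((score_net(inp) - target) ** 2).mean()              # regress the score
```

Once trained, the estimated score can drive Langevin or reverse-SDE sampling steps.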
1 change: 1 addition & 0 deletions README.md
@@ -1480,6 +1480,7 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A
* [Flow-based sampling for lattice field theories](https://arxiv.org/abs/2401.01297)
* [Accelerating HEP simulations with Neural Importance Sampling](https://arxiv.org/abs/2401.09069) [[DOI](https://doi.org/10.1007/JHEP03(2024)083)]
* [Improving $\Lambda$ Signal Extraction with Domain Adaptation via Normalizing Flows](https://arxiv.org/abs/2403.14076) [[DOI](https://doi.org/10.22323/1.456.0043)]
+ * [End-to-end simulation of particle physics events with Flow Matching and generator Oversampling](https://arxiv.org/abs/2402.13684) [[DOI](https://doi.org/10.1088/2632-2153/ad563c)]
* [Normalizing Flows for Domain Adaptation when Identifying $\Lambda$ Hyperon Events](https://arxiv.org/abs/2403.14804) [[DOI](https://doi.org/10.1088/1748-0221/19/06/C06020)]
* [CaloPointFlow II Generating Calorimeter Showers as Point Clouds](https://arxiv.org/abs/2403.15782)
* [One flow to correct them all: improving simulations in high-energy physics with a single normalising flow and a switch](https://arxiv.org/abs/2403.18582) [[DOI](https://doi.org/10.1007/s41781-024-00125-0)]
1 change: 1 addition & 0 deletions docs/index.md
@@ -1617,6 +1617,7 @@ const expandElements = shouldExpand => {
* [Flow-based sampling for lattice field theories](https://arxiv.org/abs/2401.01297)
* [Accelerating HEP simulations with Neural Importance Sampling](https://arxiv.org/abs/2401.09069) [[DOI](https://doi.org/10.1007/JHEP03(2024)083)]
* [Improving $\Lambda$ Signal Extraction with Domain Adaptation via Normalizing Flows](https://arxiv.org/abs/2403.14076) [[DOI](https://doi.org/10.22323/1.456.0043)]
+ * [End-to-end simulation of particle physics events with Flow Matching and generator Oversampling](https://arxiv.org/abs/2402.13684) [[DOI](https://doi.org/10.1088/2632-2153/ad563c)]
* [Normalizing Flows for Domain Adaptation when Identifying $\Lambda$ Hyperon Events](https://arxiv.org/abs/2403.14804) [[DOI](https://doi.org/10.1088/1748-0221/19/06/C06020)]
* [CaloPointFlow II Generating Calorimeter Showers as Point Clouds](https://arxiv.org/abs/2403.15782)
* [One flow to correct them all: improving simulations in high-energy physics with a single normalising flow and a switch](https://arxiv.org/abs/2403.18582) [[DOI](https://doi.org/10.1007/s41781-024-00125-0)]
2 changes: 1 addition & 1 deletion docs/recent.md
@@ -22,7 +22,7 @@ This is an automatically compiled list of papers which have been added to the li
* [Anomaly Detection Based on Machine Learning for the CMS Electromagnetic Calorimeter Online Data Quality Monitoring](https://arxiv.org/abs/2407.20278)
* [Accelerating template generation in resonant anomaly detection searches with optimal transport](https://arxiv.org/abs/2407.19818)
* [Probing Charm Yukawa through $ch$ Associated Production at the Hadron Collider](https://arxiv.org/abs/2407.19797)
- * [Accuracy versus precision in boosted top tagging with the ATLAS detector](https://arxiv.org/abs/2407.20127)
+ * [Accuracy versus precision in boosted top tagging with the ATLAS detector](https://arxiv.org/abs/2407.20127) [[DOI](https://doi.org/10.1088/1748-0221/19/08/P08018)]
* [Comparison of Geometrical Layouts for Next-Generation Large-volume Cherenkov Neutrino Telescopes](https://arxiv.org/abs/2407.19010)
* [The Observation of a 95 GeV Scalar at Future Electron-Positron Colliders](https://arxiv.org/abs/2407.16806)
* [Applying generative neural networks for fast simulations of the ALICE (CERN) experiment](https://arxiv.org/abs/2407.16704)
