

Exploring Diffusion Time-steps for Unsupervised Representation Learning

Zhongqi Yue, Jiankun Wang, Qianru Sun, Lei Ji, Eric I-Chao Chang, Hanwang Zhang

ICLR 2024 Conference

May 2024

Keywords: unsupervised representation learning, diffusion model, representation disentanglement, counterfactual generation

Abstract:

Representation learning is all about discovering the hidden modular attributes that generate the data faithfully. We explore the potential of the Denoising Diffusion Probabilistic Model (DM) in unsupervised learning of the modular attributes. We build a theoretical framework that connects the diffusion time-steps and the hidden attributes, which serves as an effective inductive bias for unsupervised learning. Specifically, the forward diffusion process incrementally adds Gaussian noise to samples at each time-step, which essentially collapses different samples into similar ones by losing attributes, e.g., fine-grained attributes such as texture are lost with less noise added (i.e., early time-steps), while coarse-grained ones such as shape are lost by adding more noise (i.e., late time-steps). To disentangle the modular attributes, at each time-step t, we learn a t-specific feature to compensate for the newly lost attribute, and the set of all {1,...,t}-specific features, corresponding to the cumulative set of lost attributes, is trained to make up for the reconstruction error of a pre-trained DM at time-step t. On the CelebA, FFHQ, and Bedroom datasets, the learned feature significantly improves attribute classification and enables faithful counterfactual generation, e.g., interpolating only one specified attribute between two images, validating the disentanglement quality.
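The compensation idea in the abstract can be sketched in a few lines: partition the learned feature into T chunks (chunk t being the t-specific feature), keep only chunks 1..t at time-step t, and train so that a compensator applied to the masked feature makes up for the pre-trained DM's reconstruction error. The names `compensation_loss` and the compensator `g` below are hypothetical placeholders, not the paper's implementation; this is a minimal numpy illustration of the cumulative masking and the loss shape, with toy tensors standing in for the DM.

```python
import numpy as np

def cumulative_mask(z, t, T):
    """Keep the first t of T equal-sized chunks of feature z; zero the rest.

    Chunk k plays the role of the k-specific feature, so keeping chunks
    1..t mirrors the cumulative set of attributes lost up to time-step t.
    """
    chunk = z.shape[-1] // T
    masked = z.copy()
    masked[..., t * chunk:] = 0.0  # later-step features are withheld
    return masked

def compensation_loss(x0, dm_recon_t, z, t, T, g):
    """Squared error after the masked feature compensates the DM's
    reconstruction at time-step t: ||x0 - (dm_recon_t + g(mask(z, t)))||^2.

    g is a hypothetical compensator network mapping the masked feature
    into data space; dm_recon_t stands for a pre-trained DM's (imperfect)
    reconstruction of x0 from its noised version at time-step t.
    """
    comp = g(cumulative_mask(z, t, T))
    return float(np.sum((x0 - (dm_recon_t + comp)) ** 2))

# Toy usage: the loss is zero when g's output exactly cancels the DM's error.
x0 = np.zeros(3)
dm_recon = np.ones(3)                      # DM over-shoots by 1 everywhere
g = lambda v: -np.ones(3)                  # compensator cancels that error
print(compensation_loss(x0, dm_recon, np.zeros(6), t=1, T=3, g=g))  # 0.0
```

In practice the encoder, compensator, and sampled time-step t would all be part of one stochastic training loop over the dataset; the sketch only shows the per-sample objective at a fixed t.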

