

[Review] Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising (IEEE 2017)

๐Ÿ‘จ‍๐Ÿ‘ง‍๐Ÿ‘ฆ ํ™œ๋™/Low-light Image enhancement(์กธ์—…์ž‘ํ’ˆ)

[Review] Beyond a Gaussian Denoiser: Residual Learnung for Deep CNN for Image Denoising (IEEE 2017)

WonderJay 2023. 3. 13. 15:17

1. Introduction

  Image denoising ์˜ ๊ธฐ๋ณธ์ ์ธ ๋ชฉ์ ์€ noisy observation y ๋กœ๋ถ€ํ„ฐ ๊นจ๋—ํ•œ ์ด๋ฏธ์ง€ x ๋ฅผ ์–ป์–ด๋‚ด๋Š” ๊ฒƒ์ด๋‹ค. ( y = x +v ) ์ด๋•Œ v ๋Š” AWGN(additive white Gaussian noise with standard deviation) ์œผ๋กœ ๊ฐ€์ •ํ•œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆํ•˜๋Š” DnCNN ๋ชจ๋ธ์€ image denoising ์„ plain discriminative learning problem ์œผ๋กœ ๊ฐ„์ฃผํ•œ๋‹ค. ์ด๋Š” noisy image ๋กœ๋ถ€ํ„ฐ noise ๋ฅผ feed-forward convolutional neural network ๋กœ ๋ถ„๋ฆฌํ•˜๊ณ ์ž ํ•œ๋‹ค๋Š” ๊ฒƒ์ด๋‹ค. CNN ์„ ์‚ฌ์šฉํ•˜๋Š” ์ด์œ ๋Š” ์•„๋ž˜์™€ ๊ฐ™๋‹ค.

 

   1. A deep CNN architecture has enough capacity and flexibility to fully exploit the characteristics of images.

   2. The regularization and learning methods used to train CNNs have advanced considerably.

   3. CNNs are well suited to parallel computation on GPUs, which gives them good run-time performance.

 

  ์œ„์™€ ๊ฐ™์€ ์ด์œ ๋กœ CNN ์„ ํ™œ์šฉํ•œ ๋ชจ๋ธ์ธ DnCNN ๋ชจ๋ธ์„ ์ œ์•ˆํ•œ๋‹ค. denoised image ๋ฅผ ๋ฐ”๋กœ outputing ํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ residual image ๋ฅผ outputing ํ•œ๋‹ค. DnCNN ์—์„œ๋Š” hidden layer ์—์„œ latent clean image ๋ฅผ remove ํ•˜๋Š” ๋ฐฉํ–ฅ์œผ๋กœ ๋™์ž‘ํ•œ๋‹ค๋Š” ๊ฒƒ์ด๋‹ค. BN(batch normalization) ์€ ์ด๋ฅผ ๋ณด๋‹ค stable ํ•˜๊ณ  training performance ๋ฅผ ์ฆ๊ฐ€์‹œ์ผœ์ฃผ๋Š” ์ž‘์šฉ์„ ํ•œ๋‹ค.

 

  ๋ณธ ๋…ผ๋ฌธ์˜ contribution ์€ ์•„๋ž˜์™€ ๊ฐ™์ด ์š”์•ฝํ•  ์ˆ˜ ์žˆ๋‹ค.

1. An end-to-end trainable deep CNN model for Gaussian denoising is proposed. Rather than directly estimating the latent clean image, the network removes the latent clean image from the noisy observation, i.e., it predicts the residual.

2. Residual learning and batch normalization are found to improve not only the training speed but also the denoising performance.

3. The DnCNN model extends easily to more general denoising tasks.

 

 

2. Related Work

  B. Residual Learning (using skip connections!) and Batch Normalization

  CNN ๊ทผ๋ณธ์ ์œผ๋กœ ๊ฐ€์ง€๋Š” ๋ฌธ์ œ๋Š” depth ๊ฐ€ ์ฆ๊ฐ€ํ•  ์ˆ˜๋ก performance ๊ฐ€ ๊ฐ์†Œํ•œ๋‹ค๋Š” ๊ฒƒ์ด๋‹ค. ์ด๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ gradient vanishing problem ์— ์˜ํ•ด ๋ฐœ์ƒํ•˜๋Š”๋ฐ, skip connection ์„ ํ™œ์šฉํ•œ residual learning strategy ๋กœ ์ด๋ฅผ ๋ฐฉ์ง€ํ•˜๊ณ  CNN ๋ชจ๋ธ์„ ๋ณด๋‹ค ๊นŠ๊ฒŒ stacking ํ•  ์ˆ˜ ์žˆ๋‹ค. 

  A brief look at batch normalization. Mini-batch stochastic gradient descent (SGD) is commonly used to train CNNs because it is simple and effective, but during training the distribution of each layer's inputs can change, a phenomenon called internal covariate shift, which makes training harder and sharply slows it down. Batch normalization solves this: by normalizing the inputs of each layer so that they follow a similar distribution, training becomes both more stable and faster. The key point is that batch normalization is a highly effective remedy for the internal covariate shift that hampers SGD training.
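What batch normalization computes at each layer can be written out as a sketch: each feature is normalized to zero mean and unit variance over the mini-batch, then rescaled by learnable parameters gamma and beta (initialized here to 1 and 0; the input statistics are made up).

```python
import numpy as np

def batch_norm(h, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift."""
    mean = h.mean(axis=0)
    var = h.var(axis=0)
    h_hat = (h - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * h_hat + beta

rng = np.random.default_rng(0)
h = rng.normal(5.0, 3.0, size=(32, 8))  # a badly scaled layer input
out = batch_norm(h, gamma=np.ones(8), beta=np.zeros(8))
# out now has (approximately) zero mean and unit variance per feature,
# so the next layer sees a stable input distribution during training.
```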

 

3. The Proposed Denoising CNN Model

  DnCNN ๋ชจ๋ธ์˜ architecture ๋Š” VGG network ๋ฅผ image denoising ์— ์ ํ•ฉํ•˜๋„๋ก modify ํ•œ ๊ฒƒ์ด๋‹ค. model ํ•™์Šต์„ ์œ„ํ•ด residual learning ์™€ batch normalization ์„ ์‚ฌ์šฉํ•˜์˜€๋‹ค.

 

A. Network Depth

  DnCNN ๋ชจ๋ธ์˜ input ์€ noisy observation ์ด๋‹ค. (y = x  + v) ๊ธฐ์กด์˜ ๋ชจ๋ธ๋“ค์€ F(y) = x ์— ๋Œ€ํ•ญํ•˜๋Š” mapping function ์ธ F ๋ฅผ ์–ป๋Š” ๋ฐ์— ์ดˆ์ ์„ ๋‘์ง€๋งŒ, DnCNN ์˜ ๊ฒฝ์šฐ์—๋Š” residual learning ์„ ํ•˜๊ธฐ ๋•Œ๋ฌธ์— R(y) ~= v ๋ฅผ ์–ป์–ด๋‚ด๊ณ  x = y - R(y) ์™€ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ Clean image ๋ฅผ ์–ป๊ณ ์ž ํ•œ๋‹ค. desired residual image ์™€ estimated residual image ๊ฐ„์˜ averaged MSE ๋ฅผ loss function ์œผ๋กœ ์‚ฌ์šฉํ•œ๋‹ค.

 --- Work in progress ---

 

  
