Friday, September 20, 2019
DeepN-JPEG: A Deep Neural Network Favorable JPEG-based Image Compression Framework
The marriage of big data and deep learning has led to the great success of artificial intelligence, but it also raises new challenges in data communication, storage, and computation [7], incurred by the growing amount of distributed data and increasing DNN model sizes. For resource-constrained IoT applications, while recent research [8, 9] has addressed computation- and memory-intensive DNN workloads in an energy-efficient manner, efficient solutions are still lacking for reducing the power-hungry data offloading and storage on terminal devices such as edge sensors, especially in the face of stringent constraints on communication bandwidth, energy, and hardware resources. Recent studies show that the latency to upload a JPEG-compressed input image (i.e., 152 KB) for a single inference of a popular CNN, AlexNet, over stable wireless connections via 3G (870 ms), LTE (180 ms), and Wi-Fi (95 ms), can exceed the DNN computation time (6–82 ms) on a mobile or cloud GPU [10]. Moreover, the communication energy is comparable to the associated DNN computation energy.
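As a back-of-envelope check on the quoted numbers, we can solve for the effective uplink throughput implied by each latency. This is a minimal sketch; the helper name is mine, and it ignores protocol overhead and round-trip setup time:

```python
def effective_mbps(size_kb, latency_ms):
    """Effective uplink throughput (Mbps) implied by transferring
    size_kb kilobytes in latency_ms milliseconds."""
    bits = size_kb * 8 * 1024          # payload size in bits
    seconds = latency_ms / 1000.0
    return bits / seconds / 1e6        # bits per second -> Mbps

# 152 KB image with the latencies quoted from [10]:
for name, ms in [("3G", 870), ("LTE", 180), ("Wi-Fi", 95)]:
    print(f"{name}: ~{effective_mbps(152, ms):.1f} Mbps effective")
```

The 3G case works out to roughly 1.4 Mbps, which is consistent with typical 3G uplink rates, so the quoted 870 ms latency is dominated by the transfer itself rather than by connection setup.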
Existing image compression frameworks (such as JPEG) can compress data aggressively, but they are optimized for the Human Visual System (HVS), i.e., humans' perceived image quality, which can lead to unacceptable DNN accuracy degradation at high compression ratios (CR) and thus significantly harm the quality of intelligent services. As shown later, testing a well-trained AlexNet on JPEG images compressed at CR ≈ 5× (relative to CR = 1× high-quality images) reduces image recognition accuracy by ~9% on the large-scale ImageNet dataset, almost offsetting the improvement brought by a more complex DNN topology, i.e., moving from AlexNet to GoogLeNet (8 layers, 724M MACs vs. 22 layers, 1.43G MACs) [11, 12]. This prompts the need to develop a DNN-favorable deep compression framework.
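The compression-ratio sweep behind such an experiment is easy to reproduce with Pillow's JPEG encoder. The sketch below is illustrative, not the paper's code: the helper name and the synthetic test image are mine, and the quality-95 baseline standing in for "CR = 1× high quality" is an assumption:

```python
from io import BytesIO
from PIL import Image

def jpeg_compression_ratio(img, quality, baseline_quality=95):
    """Re-encode a PIL image as JPEG at `quality` and return
    (compressed_size_bytes, compression_ratio vs. the baseline)."""
    def encoded_size(q):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=q)
        return buf.tell()
    base_size = encoded_size(baseline_quality)  # high-quality reference
    size = encoded_size(quality)
    return size, base_size / size

# Synthetic 224x224 RGB pattern (AlexNet-sized input) for illustration.
raw = bytes(
    (x * y * (c + 1)) % 256
    for y in range(224) for x in range(224) for c in range(3)
)
img = Image.frombytes("RGB", (224, 224), raw)

size, cr = jpeg_compression_ratio(img, quality=20)
print(f"quality=20: {size} bytes, CR ~ {cr:.1f}x vs. quality=95")
```

Sweeping `quality` downward and re-running inference on the recompressed images is how one would trace the accuracy-vs-CR curve the paragraph describes.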
DeepN-JPEG: A Deep Neural Network Favorable JPEG-based Image Compression Framework
https://arxiv.org/pdf/1803.05788.pdf
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples
https://arxiv.org/pdf/1803.05787.pdf