TY - JOUR
T1 - Distributionally robust learning based on Dirichlet process prior in edge networks
AU - Zhang, Zhao Feng
AU - Chen, Yue
AU - Zhang, Jun Shan
N1 - Funding Information: Manuscript received Jan. 13, 2020; accepted Feb. 28, 2020. This work was supported in part by NSF under Grant CPS-1739344, ARO under Grant W911NF-16-1-0448, and DTRA under Grant HDTRA1-13-1-0029. Part of this work will appear in the Proceedings of the 40th IEEE International Conference on Distributed Computing Systems (ICDCS), Singapore, July 8-10, 2020. The associate editor coordinating the review of this paper and approving it for publication was J. Zhang. Publisher Copyright: © 2020, Posts and Telecom Press Co Ltd. All rights reserved.
PY - 2020/3
Y1 - 2020/3
N2 - To meet real-time performance requirements, intelligent decisions in Internet of Things applications must be made right here, right now, at the network edge. Pushing the artificial intelligence frontier to achieve edge intelligence is nontrivial due to the constrained computing resources and limited training data at the network edge. To tackle these challenges, we develop a distributionally robust optimization (DRO)-based edge learning algorithm, where the uncertainty model is constructed to foster the synergy of cloud knowledge and local training. Specifically, the cloud-transferred knowledge takes the form of a Dirichlet process prior distribution over the edge model parameters, and the edge device further constructs an uncertainty set centered around the empirical distribution of its local samples. The edge learning DRO problem, subject to these two distributional uncertainty constraints, is recast as a single-layer optimization problem using a duality approach. We then use an Expectation-Maximization-inspired method to derive a convex relaxation, based on which we devise algorithms to learn the edge model. Furthermore, we show that the meta-learning fast-adaptation procedure is equivalent to our proposed Dirichlet process prior-based approach. Finally, extensive experiments showcase the performance gain over standard approaches that use edge data only.
AB - To meet real-time performance requirements, intelligent decisions in Internet of Things applications must be made right here, right now, at the network edge. Pushing the artificial intelligence frontier to achieve edge intelligence is nontrivial due to the constrained computing resources and limited training data at the network edge. To tackle these challenges, we develop a distributionally robust optimization (DRO)-based edge learning algorithm, where the uncertainty model is constructed to foster the synergy of cloud knowledge and local training. Specifically, the cloud-transferred knowledge takes the form of a Dirichlet process prior distribution over the edge model parameters, and the edge device further constructs an uncertainty set centered around the empirical distribution of its local samples. The edge learning DRO problem, subject to these two distributional uncertainty constraints, is recast as a single-layer optimization problem using a duality approach. We then use an Expectation-Maximization-inspired method to derive a convex relaxation, based on which we devise algorithms to learn the edge model. Furthermore, we show that the meta-learning fast-adaptation procedure is equivalent to our proposed Dirichlet process prior-based approach. Finally, extensive experiments showcase the performance gain over standard approaches that use edge data only.
KW - Dirichlet process
KW - Distributionally robust optimization
KW - Edge learning
KW - Wasserstein distance
UR - http://www.scopus.com/inward/record.url?scp=85113186849&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113186849&partnerID=8YFLogxK
M3 - Article
SN - 2096-1081
VL - 5
SP - 26
EP - 39
JO - Journal of Communications and Information Networks
JF - Journal of Communications and Information Networks
IS - 1
ER -