TY - GEN
T1 - Early Experiences of Noise-Sensitivity Performance Analysis of a Distributed Deep Learning Framework
AU - Rojas, Elvis
AU - Knobloch, Michael
AU - Daoud, Nour
AU - Meneses, Esteban
AU - Mohr, Bernd
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Deep Learning (DL) applications are used to solve complex problems efficiently. These applications require complex neural network models composed of millions of parameters and huge amounts of data for proper training. This is only possible by parallelizing the necessary computations with so-called distributed deep learning (DDL) frameworks over many GPUs distributed across multiple nodes of an HPC cluster. These frameworks mostly utilize the compute power of the GPUs and use only a small portion of the available compute power of the CPUs in the nodes for I/O and inter-process communication, leaving many CPU cores idle and unused. The more powerful the base CPU in the cluster nodes, the more compute resources are wasted. In this paper, we investigate how much of this unutilized compute capacity could be used for executing other applications without lowering the performance of the DDL frameworks. In our experiments, we executed a noise-generation application, which generates a very high memory, network, or I/O load, in parallel with DDL frameworks, and used HPC profiling and tracing techniques to determine whether and how the generated noise affects the performance of the DDL frameworks. Early results indicate that it might be possible to utilize the idle cores for jobs of other users without negatively affecting the performance of the DDL applications.
KW - Distributed Deep Learning
KW - Noisy Environments
KW - Performance Analysis
UR - http://www.scopus.com/inward/record.url?scp=85140924574&partnerID=8YFLogxK
DO - 10.1109/CLUSTER51413.2022.00066
M3 - Conference contribution
AN - SCOPUS:85140924574
T3 - Proceedings - IEEE International Conference on Cluster Computing, ICCC
SP - 516
EP - 522
BT - Proceedings - 2022 IEEE International Conference on Cluster Computing, CLUSTER 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Conference on Cluster Computing, CLUSTER 2022
Y2 - 6 September 2022 through 9 September 2022
ER -