Deep Neural Networks (DNNs) have proven effective across a wide range of applications, but integrating them into edge devices remains challenging: the large size of DNN models demands efficient model parallelization and workload partitioning. Previous attempts to address these challenges have focused on data and model parallelism but fall short of finding the optimal DNN model partitions for efficient distribution given the available resources. This paper presents a pipelined DNN model parallelism framework that improves the performance of DNNs on edge devices. The framework optimizes DNN model training by determining the optimal number of partitions based on available edge resources. It combines data and model parallelism to distribute the workload efficiently across multiple processors and thereby reduce training time, and it includes a task controller that manages computing resources effectively. Experimental results demonstrate the effectiveness of the proposed approach, showing a significant reduction in model training time compared to an AlexNet baseline.
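The abstract does not specify the partitioning algorithm, but the core idea of splitting a DNN into pipeline stages sized to the available devices can be sketched as follows. This is a minimal illustrative greedy heuristic, not the paper's method; the layer costs, the `partition_layers` function, and the balancing rule are all assumptions for illustration.

```python
# Hypothetical sketch: split an ordered list of per-layer compute costs
# into contiguous pipeline stages of roughly equal total cost, one stage
# per available edge device. Greedy heuristic, not the paper's algorithm.

def partition_layers(layer_costs, num_partitions):
    """Return a list of stages; each stage is a list of layer indices."""
    total = sum(layer_costs)
    target = total / num_partitions  # ideal per-stage cost
    stages, current, acc = [], [], 0.0
    for i, cost in enumerate(layer_costs):
        remaining_layers = len(layer_costs) - i
        remaining_stages = num_partitions - len(stages)
        # Close the current stage once adding this layer would exceed the
        # target, but only if enough layers remain to fill the later stages
        # and at least one more stage boundary is still allowed.
        if (current
                and len(stages) < num_partitions - 1
                and acc + cost > target
                and remaining_layers >= remaining_stages):
            stages.append(current)
            current, acc = [], 0.0
        current.append(i)
        acc += cost
    stages.append(current)
    return stages

# Example: four equally expensive layers split across two devices.
print(partition_layers([2, 2, 2, 2], 2))  # → [[0, 1], [2, 3]]
```

In a real deployment the per-layer costs would be profiled on the target edge hardware, and the stage boundaries would feed a pipeline scheduler that overlaps forward and backward passes across devices.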
Article ID: 2023L14
Month: June
Year: 2023
Address: Online
Venue: The 36th Canadian Conference on Artificial Intelligence
Publisher: Canadian Artificial Intelligence Association
URL: https://caiac.pubpub.org/pub/ly32gqd5