Validation of the NVDLA architecture using its AWS virtual prototype-FPGA co-simulation platform
Date
2023-05-23
Authors
Journal title
Journal ISSN
Volume title
Publisher
Pontificia Universidad Católica del Perú
DOI
Abstract
Deep neural network (DNN) inference has become increasingly demanding over the years in terms of memory storage, computational complexity, and energy consumption. Developing hardware targeting DNNs can be a lengthy process, which grows even longer when the time needed to write software for it is considered. This thesis therefore consists of the validation of the NVDLA (NVIDIA Deep Learning Accelerator) deep learning hardware accelerator using a co-simulation environment based on its hybrid platform: a CPU implemented as a Virtual Prototype (VP) based on the Quick Emulator (QEMU), and the NVDLA RTL hardware model on an FPGA. For this, the most portable NVDLA architecture, nv_small, is configured into the FPGA of an F1 instance of the AWS EC2 service. To complement the system, the NVDLA VP is used, consisting of an Arm CPU emulated with QEMU, running a Linux OS and the NVDLA runtime software, inside a SystemC/TLM wrapper connected to the F1 instance's FPGA through a PCI Express port. Once the hybrid co-simulation platform is set up, hardware regression tests are run on the FPGA implementation to check the functionality and integrity of the NVDLA component blocks, software sanity tests are run on the VP to confirm the correct setup of the whole stack, and finally the AlexNet DNN is executed. The results show proper hardware and VP functionality; AlexNet executed successfully in the co-simulation environment, taking approximately 112 minutes.
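The workflow described in the abstract can be sketched as the shell session below. This is an illustrative outline only, pieced together from the publicly documented NVDLA virtual platform and AWS aws-fpga tooling; the AFI identifier, file paths, and input names are placeholders, not values from the thesis.

```shell
# 1. Load the nv_small FPGA image onto the F1 instance
#    (aws-fpga tooling; the AFI id below is a placeholder)
sudo fpga-load-local-image -S 0 -I agfi-0123456789abcdef0

# 2. Boot the NVDLA Virtual Prototype: a QEMU-emulated Arm CPU
#    wrapped in SystemC/TLM, with Linux and the NVDLA runtime inside
#    (entry point and config file as in the NVDLA VP documentation)
./aarch64_toplevel -c aarch64_nvdla.lua

# 3. Inside the guest Linux: install the NVDLA kernel driver,
#    then run an AlexNet loadable on the accelerator
insmod opendla.ko
./nvdla_runtime --loadable alexnet.nvdla --image input.jpg --rawdump
```

In the hybrid setup, step 2's SystemC/TLM wrapper forwards the NVDLA register and DMA traffic over the PCI Express link to the RTL model on the F1 FPGA, rather than to a simulated accelerator.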
Description
Keywords
Neural networks (Computer science), Application software, Simulation
Citation
Collections
Creative Commons license
Unless otherwise indicated, the license of this item is described as info:eu-repo/semantics/openAccess