From 5ae788a8f7e7cbbe0c8ba8d4ba713f3be605b28a Mon Sep 17 00:00:00 2001
From: josorio
Date: Fri, 9 Sep 2022 14:52:55 +0200
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 5b9b746..e54a465 100644
--- a/README.md
+++ b/README.md
@@ -5,8 +5,8 @@
 Fused Multiply-Add (FMA) functional units constitute a fundamental hardware component to train Deep Neural Networks (DNNs). Their silicon area grows quadratically with the mantissa bit count of the computer number format, which has motivated the adoption of the BrainFloat16 format (BF16). BF16 features 1 sign, 8 exponent and 7 explicit mantissa bits. Some approaches to train DNNs achieve significant performance benefits by using the BF16 format. However, these approaches must combine BF16 with the standard IEEE 754 Floating-Point 32-bit (FP32) format to achieve state-of-the-art training accuracy, which limits the impact of adopting BF16. This article proposes the first approach able to train complex DNNs entirely using the BF16 format. We propose a new class of FMA operators, FMAbf16n_m, that entirely rely on BF16 FMA hardware instructions and deliver the same accuracy as FP32. FMAbf16n_m operators achieve performance improvements within the 1.28-1.35× range on ResNet101 with respect to FP32. FMAbf16n_m enables training complex DNNs on simple low-end hardware devices without requiring expensive FP32 FMA functional units.
 
 ### Prerequisites
-GCC compiler (Tested with gcc 8.10)
-AVX512 support
+* GCC compiler (Tested with gcc 8.10)
+* AVX512 support
 
 ### Installation
 To test our FMAbf16n_m approach, we need an emulation tool; for this we use SERP (Seamless Emulation of Reduced Precision Formats). The first step is to install Intel PIN, the tool SERP relies on. Extract its contents into the pin folder and export an environment variable as shown below.
--
GitLab
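The export command itself falls outside this hunk. A minimal sketch of what that setup might look like, assuming the Intel PIN kit is extracted into a `pin/` folder at the repository root and that the variable follows Intel PIN's usual `PIN_ROOT` convention (the exact variable name SERP expects is not shown in the patch):

```sh
# Hypothetical setup sketch: the pin/ layout and the PIN_ROOT name are assumptions,
# not taken from the patch above.
mkdir -p pin
# Unpack the downloaded Intel PIN kit into pin/ (archive name follows Intel's naming scheme)
tar -xzf pin-*-gcc-linux.tar.gz -C pin --strip-components=1
# Point the environment at the PIN installation
export PIN_ROOT="$(pwd)/pin"
```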