Deep Learning Inference on
We also ran a competitive test: the same models were run on the servers R740-P4 and R7245-P4, and their results were compared against the R7425-T4-16GB results. These results could be useful in Phase 2. In Phase 2, ResNet50 was tested with the TensorRT C++ API ... to see if there are any performance differences between image classification and applications which require low latency. The server R7425-T4-16GB performed around 1.8X faster than the other servers.

Architectures & Technologies Dell EMC | Infrastructure Solutions Group

Figure 14. Deep Learning Inference on the model ...
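As a rough illustration of how a relative speedup figure such as the reported 1.8X is derived from per-server throughput, a minimal sketch follows. All throughput numbers below are hypothetical placeholders, not the whitepaper's measured values (those are shown in Figure 14); only the ratio calculation itself is the point.

```python
# Hypothetical throughput measurements in images/sec for each server under
# test. These are illustrative placeholders, NOT the whitepaper's results.
throughputs = {
    "R7425-T4-16GB": 900.0,
    "R740-P4": 500.0,
    "R7245-P4": 480.0,
}

baseline = "R7425-T4-16GB"

# Speedup of the baseline server relative to each competitor is simply the
# ratio of their throughputs.
for server, images_per_sec in throughputs.items():
    if server == baseline:
        continue
    speedup = throughputs[baseline] / images_per_sec
    print(f"{baseline} vs {server}: {speedup:.1f}X faster")
```

With the placeholder numbers above, the baseline comes out 1.8X faster than the R740-P4; the same ratio computed from the measured throughputs in Figure 14 would yield the figure quoted in the text.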