What you've learned

You should now know how to:

  • Deploy PyTorch NLP sentiment analysis models from Hugging Face on Arm servers.
  • Evaluate the performance of three NLP models using the sentiment analysis pipeline.
  • Measure the performance uplift of these models by enabling BFloat16 fast math kernels on Arm Neoverse-based AWS Graviton3 processors (a short sketch follows this list).
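As a quick recap, the sketch below shows the general shape of the workflow covered in this learning path: enable BFloat16 fast math, load a sentiment analysis pipeline, and time repeated inferences. It is a minimal, hedged example, not the exact scripts from the earlier steps. It assumes PyTorch and transformers are installed, that you are on an Arm Neoverse-based instance such as Graviton3, and that the oneDNN environment variable DNNL_DEFAULT_FPMATH_MODE is the switch used to turn on BFloat16 fast math kernels; the model checkpoint named here is just one example of the models you might evaluate.

```python
# Rough benchmarking sketch, assuming PyTorch and transformers are installed
# and you are running on an Arm Neoverse-based instance (e.g. AWS Graviton3).
import os

# Assumption: BFloat16 fast math kernels in PyTorch's oneDNN/ACL backend are
# toggled with this environment variable. Comment it out to compare against
# the default FP32 path. Set it before importing torch/transformers.
os.environ.setdefault("DNNL_DEFAULT_FPMATH_MODE", "BF16")

import time
from transformers import pipeline

# Example checkpoint; swap in whichever of the three NLP models you want to measure.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

text = "Running Hugging Face models on Arm servers is straightforward."

# Warm-up run so one-time setup cost is not counted.
classifier(text)

# Time repeated inferences to estimate throughput.
runs = 100
start = time.perf_counter()
for _ in range(runs):
    classifier(text)
elapsed = time.perf_counter() - start

print(f"{runs} inferences in {elapsed:.2f}s ({runs / elapsed:.1f} inferences/s)")
```

Running the script once with the environment variable set and once without gives a simple before/after comparison of the BFloat16 fast math uplift.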

Knowledge Check

Do all Arm Neoverse CPUs include support for BFloat16 instructions?

Can you run Hugging Face PyTorch models on an Arm AArch64 CPU?

Does enabling support for BFloat16 fast math kernels in PyTorch improve the performance of NLP models?
