PyTorch is one of the most popular libraries for deep learning projects. It makes it easy to build fast, reliable networks with strong GPU acceleration, in Python code that is easy to read and understand. A useful way to think of it is as NumPy with strong GPU acceleration.
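To make that comparison concrete, here is a minimal sketch of the NumPy-style tensor API; the GPU step is guarded so the snippet also runs on CPU-only machines.

```python
import torch

# NumPy-style tensor creation and math.
a = torch.arange(6, dtype=torch.float32).reshape(2, 3)
b = torch.ones(3, 2)
c = a @ b  # matrix multiply, just like NumPy's a @ b

# The same code runs on a GPU by moving tensors to a CUDA device.
if torch.cuda.is_available():
    c = (a.cuda() @ b.cuda()).cpu()

print(c)
```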
The PyTorch team today announced the release of PyTorch 1.10. This long-awaited update is composed of over 3,400 commits from 426 contributors since the last version, and the new major release comes with plenty in store for developers.
PyTorch 1.10 focuses on further improving the training and performance of PyTorch, as well as on developer usability.
In version 1.10, CUDA Graphs APIs are integrated to reduce CPU overheads for CUDA workloads. Several frontend APIs, such as FX, torch.special, and nn.Module parametrization, have moved from beta to stable. Support for automatic fusion in the JIT compiler expands to CPUs in addition to GPUs, and Android NNAPI support is now available in beta.
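As a quick illustration of two of the newly stabilized frontend APIs, the sketch below calls a couple of torch.special functions and registers a parametrization on a linear layer. The symmetric-weight constraint is only an illustrative choice, not something prescribed by the release; CUDA Graphs capture is not shown here since it requires a GPU.

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

# torch.special: SciPy-style special functions, now a stable module.
x = torch.linspace(-3, 3, steps=5)
print(torch.special.expit(x))   # logistic sigmoid
print(torch.special.erf(x))     # Gauss error function

# nn.Module parametrization: constrain a parameter by registering a module
# that recomputes it on every access. The symmetric constraint below is an
# illustrative example, not part of the release notes.
class Symmetric(nn.Module):
    def forward(self, W):
        return W.triu() + W.triu(1).transpose(-1, -2)

layer = nn.Linear(4, 4)
parametrize.register_parametrization(layer, "weight", Symmetric())
print(torch.allclose(layer.weight, layer.weight.T))  # True: weight stays symmetric
```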
GitHub: https://github.com/pytorch/pytorch/releases/tag/v1.10.0
PyTorch Blog: https://pytorch.org/blog/pytorch-1.10-released/