What’s new in PyTorch 1.1 and why should your team use it for your future AI applications? With the recent release of PyTorch 1.1, Facebook has added a variety of new features to the popular deep learning library. This includes support for TensorBoard, a suite of visualization tools originally created by Google for its deep learning library, TensorFlow.
PyTorch 1.1 also comes with an improved JIT compiler, expanding PyTorch’s built-in capabilities for scripting. One of the biggest changes in the 1.1 release is improved support for distributed training across multiple GPUs, which enables much faster training of very large deep learning models.
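To get a feel for the new TensorBoard support, here’s a minimal sketch of logging a training metric with `SummaryWriter` (the log directory name and the loss values are purely illustrative, and the `tensorboard` package must be installed alongside PyTorch):

```python
from torch.utils.tensorboard import SummaryWriter

# Write event files that the TensorBoard UI can read
writer = SummaryWriter(log_dir="runs/demo")  # illustrative log directory

for step in range(10):
    fake_loss = 1.0 / (step + 1)  # stand-in for a real training loss
    writer.add_scalar("train/loss", fake_loss, global_step=step)

writer.close()
```

You can then launch `tensorboard --logdir runs` and watch the curve update in your browser.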
In this blog, I’ll delve into what’s new with the latest PyTorch release, the exciting outcomes the tech world has seen from it in just three years, and why you and your team may want to use it in future AI applications.
What is PyTorch and what is it used for?
PyTorch is a deep learning library built for Python by the Facebook AI Research team. It’s still relatively new, with the original release in October 2016 and version 1.1 debuting in spring 2019. PyTorch provides tensor computing, the fundamental building block of deep learning, along with built-in automatic differentiation, which is how deep learning networks actually “learn” from data sets.
PyTorch uses a dynamic graph approach to computation, giving users access to every level of the computation. This helps developers understand their code and see exactly what happens at each step. Because the computational graph is defined at runtime, it integrates directly with Python’s built-in debugging tools.
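To see the dynamic graph in action, here’s a tiny sketch: the graph is recorded as the operations run, so you can print or step through intermediate values with ordinary Python tools, and autograd computes the gradients for you:

```python
import torch

# The graph is built on the fly as each operation executes
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x1^2 + x2^2, recorded dynamically

y.backward()         # autograd walks the graph that was just built
print(x.grad)        # dy/dx = 2*x -> tensor([4., 6.])
```

Because nothing is precompiled, you could drop a `pdb.set_trace()` between any two of these lines and inspect the tensors directly.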
Benefits of using PyTorch
1. Python-friendly. PyTorch was built directly with Python in mind, unlike other deep learning libraries that were ported over to Python. It provides a hybrid front end enabling you to seamlessly share the majority of code between the prototyping and research phase and move quickly to graph execution mode for production.
2. Optimized for GPUs supported by AWS and Azure. PyTorch is optimized to take advantage of GPUs for accelerated training times. The largest cloud service providers are on board with this development: Amazon Web Services supports the latest version of PyTorch, optimized for GPUs, and even includes it in its Deep Learning AMI (Amazon Machine Image), while Microsoft plans to support PyTorch in its Azure cloud offerings. PyTorch also has built-in declarative data parallelism, allowing you to leverage multiple GPUs on cloud providers.
3. Rich ecosystem of tools and libraries. PyTorch ships with a rich ecosystem of tools and libraries for extending it, including Torchvision, PyTorch’s built-in toolkit for working with complex image datasets. The ecosystem includes projects, tools, models, and libraries from a broad community of researchers in academia and industry, application developers, and ML engineers, with the goal of supporting developers and data scientists as they explore and apply deep learning with PyTorch.
While there are over a dozen tools available today, a notable one is Flair, a library for natural language processing tasks such as named entity recognition and part-of-speech tagging. There is also Translate, Facebook’s own translation model, which powers the machine translations seen in your Facebook News Feed.
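As a rough sketch of the declarative data parallelism mentioned above, `torch.nn.DataParallel` wraps a model and splits each batch across the available GPUs (the toy model and batch sizes here are just for illustration; on a CPU-only machine the wrapper is simply skipped):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)  # toy model for illustration

if torch.cuda.device_count() > 1:
    # Replicates the module and scatters each batch across the GPUs
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

batch = torch.randn(32, 128, device=device)
out = model(batch)  # inputs are scattered, run in parallel, then gathered
```

The single declaration of `nn.DataParallel(model)` is all it takes; the scatter/gather plumbing is handled for you.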
Train your team in the latest deep learning and machine learning skills. Request a demo to discover which AI learning path is best for your business needs.
5 ways to use PyTorch for your AI applications
Using PyTorch for deep learning tasks allows you and your team to create predictive algorithms from data sets. For example, you could leverage historical real estate data to predict future housing prices or use a manufacturing plant’s historical production data to predict failure rates on new parts. Other common uses of PyTorch include:
- Image classification: PyTorch can be used to build specialized neural network architectures called Convolutional Neural Networks (CNNs). These multilayer CNNs are fed images of a specific thing, say, a kitten, and, much like the human brain works, once the CNN sees a data set of kitten images, it should be able to confidently identify a new image of a kitten. This application is gaining momentum in healthcare, where a CNN was recently used in a study to detect skin cancer.
- Handwriting recognition: This involves deciphering human handwriting and its inconsistencies from person to person and across languages. Facebook’s Chief AI Scientist, Yann LeCun, pioneered CNNs that could recognize handwritten numerical digits.
- Forecast time sequences: A Recurrent Neural Network (RNN) is a type of neural network designed for sequence modeling and is especially useful for training an algorithm on past data so it can make decisions and predictions about the future. For example, an airline may want to forecast the number of passengers it will carry in a given month based on data from past months.
- Text generation: RNNs and PyTorch also power text generation, which is the training of an AI model on a specific text (all of Shakespeare’s works, for example) to create its own output on what it learned.
- Style transfer: One of the most popular and fun applications of PyTorch is style transfer. It uses a class of deep learning algorithms to manipulate images or videos so that one image adopts the visual style of another. For example, style transfer can make your favorite digital vacation photo look like a painting or drawing by a famous artist, such as Monet. It’s even advanced enough to do the reverse and convert paintings into realistic-looking photos! Check out the image below for some examples.
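To make the image classification idea above concrete, here’s a minimal CNN sketch in PyTorch (the layer sizes and the two-class kitten/not-kitten setup are illustrative, not a production architecture):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy CNN for 3-channel 64x64 images, two output classes."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(4, 3, 64, 64))  # batch of 4 fake images
```

Each convolution layer learns filters that respond to visual patterns (edges, fur textures, whiskers), and the final linear layer turns those pooled features into class scores.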
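The time-sequence forecasting bullet above can be sketched with a small LSTM-based model, a common RNN variant (the 12-month window, hidden size, and random data are all illustrative):

```python
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    """Predict next month's value from the previous 12 months."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.rnn(x)           # out: (batch, 12, hidden)
        return self.head(out[:, -1])   # use the final time step's state

history = torch.randn(8, 12, 1)        # 8 fake 12-month passenger series
prediction = Forecaster()(history)     # one forecast per series: (8, 1)
```

In a real airline scenario you would train this on historical passenger counts so the hidden state learns seasonal patterns before making the one-step-ahead prediction.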
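At the heart of the text generation bullet is a sampling loop: the trained model outputs a probability distribution over characters and the next character is drawn from it. Here’s a toy sketch where a stand-in “model” returns uniform logits, just to show the mechanics:

```python
import torch

vocab = list("abcdefgh ")  # toy character vocabulary

def fake_model(prefix):
    # Stand-in for a trained RNN: returns one logit per character.
    # A real model would condition these on the prefix.
    return torch.ones(len(vocab))

text = "a"
for _ in range(10):
    logits = fake_model(text)
    probs = torch.softmax(logits, dim=0)          # logits -> probabilities
    next_idx = torch.multinomial(probs, num_samples=1).item()
    text += vocab[next_idx]                       # append the sampled char
```

Swap `fake_model` for a character-level RNN trained on Shakespeare and the same loop produces Shakespeare-flavored text.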
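One common building block of the widely used neural style transfer technique is the Gram matrix of a convolutional layer’s feature maps, which captures texture and color statistics while discarding spatial layout. A minimal sketch with fake activations:

```python
import torch

def gram_matrix(features):
    """features: (channels, height, width) activations from a conv layer."""
    c, h, w = features.shape
    flat = features.view(c, h * w)          # one row per channel
    return flat @ flat.t() / (c * h * w)    # normalized channel correlations

style_features = torch.randn(64, 32, 32)    # fake conv activations
G = gram_matrix(style_features)             # shape: (64, 64)
```

Style transfer then optimizes an output image so its Gram matrices match the style image’s while its raw feature maps stay close to the content image’s.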
Which companies use PyTorch?
According to market tracking by HG Insights, companies such as Apple, ADP, Pepsico, NVIDIA and Walmart are using PyTorch to create deep learning models for predictive analytics. Thanks to these major corporations’ adoption of the technology, the three major cloud providers—Amazon, Microsoft, and Google—now provide cloud computing instances that have PyTorch 1.1 preinstalled and ready to go right out of the box.
There are incredible applications for PyTorch in deep learning and this is just the beginning! Imagine how you and your team may be able to apply this technology. Is it identifying counterfeit goods through image classification? Creating a new type of photo filter by adapting style transfer principles? Add PyTorch to your tech skills with my new course PyTorch for Deep Learning with Python Bootcamp.
About the author:
Jose Portilla has a BS and MS in Mechanical Engineering from Santa Clara University and years of experience as a professional instructor in data science and programming. Over the course of his career, he developed a skill set in analyzing data, and he hopes to use his experience in teaching and data science to help other people learn the power of programming, the ability to analyze data, and how to present data in clear and beautiful visualizations. Currently, he works as the Head of Data Science for Pierian Data Inc.
About Udemy for Business:
Udemy for Business is a learning platform that helps companies stay competitive in today’s rapidly changing workplace by offering fresh, relevant on-demand learning content, curated from the Udemy marketplace. Our mission is to help employees do whatever comes next—whether that’s the next project to do, skill to learn, or role to master. We’d love to partner with you on your employee development needs. Get in touch with us at email@example.com