5 AI Trends to Watch in 2020

Kirill Eremenko and Hadelin de Ponteves

Udemy for Business instructors

February 13, 2020

What AI trends should you keep an eye on? As Udemy instructors and the founders of SuperDataScience, we hear a common refrain from students and companies: there are too many artificial intelligence trends to keep up with. How do you know which ones matter and will still be in use in five years? If you train your team of data scientists in machine learning, will it have a lasting impact on the business? What other businesses are using this technology, and is it working for them?

We recently hosted a webinar on Udemy for Business that cuts through the AI hype and focuses on which technologies companies and individuals should consider adopting in the coming decade. As AI becomes ubiquitous, it can also be challenging to know which buzzword is worth the investment. Here are 5 AI trends that we’re telling students and businesses to follow in 2020 and beyond. 

Find out how you can train your team on the latest AI skills with a Udemy for Business subscription.

5 AI Trends to Watch in 2020

1. Robotic Process Automation (RPA)

Robotic Process Automation (RPA) is a simple AI technology, but also one of the most disruptive. Imagine your job requires you to perform a high-volume, repetitive task on the computer. Maybe it’s related to invoicing a client. This requires you to open an email attachment, copy data from the attachment into a CRM database, then grab related data from a different database, and send that new data in an email reply. The same task is done multiple times per day and prevents you from working on projects that you’re more interested in.

Robotic Process Automation uses software robots to take on these manual, repetitive tasks. Using the example above, an RPA tool would read the email, open the attachment, copy data into the CRM, get data from the other database, and even send the email reply. If an escalation required human intervention, the RPA tool would notify the employee to step in. In a nutshell, RPA removes mundane tasks and frees people up for more exciting work.
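To make this concrete, here is a minimal Python sketch of what such an automation script could look like. The helper functions (fetch_invoice_attachment, update_crm_record, and so on) are hypothetical stand-ins for the email, CRM, and database connectors a commercial RPA platform would provide, not any real product's API.

```python
# Minimal sketch of an RPA-style invoicing workflow.
# All helpers are hypothetical placeholders for real connectors.

def fetch_invoice_attachment(inbox: str) -> dict:
    """Pretend to read the newest email and return its parsed attachment."""
    return {"client_id": "C-1042", "amount": 1899.00, "currency": "USD"}

def update_crm_record(record: dict) -> None:
    """Pretend to copy the invoice data into a CRM database."""
    print(f"CRM updated for client {record['client_id']}")

def lookup_payment_terms(client_id: str) -> str:
    """Pretend to pull related data from a second database."""
    return "Net 30"

def send_reply(client_id: str, body: str) -> None:
    """Pretend to send the email reply."""
    print(f"Reply sent to {client_id}: {body}")

def notify_human(reason: str) -> None:
    """Escalate to an employee when the bot cannot proceed on its own."""
    print(f"Escalation needed: {reason}")

def process_invoice(inbox: str = "billing@example.com") -> None:
    invoice = fetch_invoice_attachment(inbox)
    if invoice["amount"] > 10_000:  # example escalation rule, chosen arbitrarily
        notify_human(f"Invoice over threshold: {invoice['amount']}")
        return
    update_crm_record(invoice)
    terms = lookup_payment_terms(invoice["client_id"])
    send_reply(invoice["client_id"], f"Invoice received. Payment terms: {terms}.")

if __name__ == "__main__":
    process_invoice()
```

In a real deployment, the RPA vendor supplies the connectors and a visual workflow designer; the point of the sketch is simply that the bot chains the same steps a person would, and hands off to a human when a rule says it should.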

Key RPA applications: Invoicing, billing, payroll processing, data extraction and aggregation, shipment scheduling and tracking.

RPA case study: Financial services company Vanguard has $5.6 trillion in global assets under management. It uses RPA to perform certain straightforward trading tasks of the form “when x happens, do y.” The RPA tools have not diminished the need for human traders. Rather, the combination of the two allows humans to work on more complex jobs, thereby creating a better overall service for Vanguard clients.

Suggested course on Udemy: Artificial Intelligence for Business

2. Natural language processing (NLP)

Natural language processing applies machine learning models to teach computers how to understand what is said in written and spoken language. Because of its rich and growing applications, natural language processing is arguably one of the top branches of AI in overall economic value. It’s becoming especially popular as consumers adopt voice interface technology like Google Home or Amazon Alexa. Instead of writing or interacting with graphics on a screen, we talk to devices that can understand our casual language. 

Natural language processing can be divided into two sub-applications: 

  • Natural language understanding, in which a machine reads a text and accurately interprets its meaning.
  • Natural language generation, in which a system produces a logical response to a text or other input.

Key natural language processing applications: Sentiment analysis, chatbots, machine translation, automatic summarization, auto video captioning.
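To make the “understanding” side concrete, here is a toy sentiment classifier built with scikit-learn. The training sentences and labels are invented for this sketch; a production system would rely on a much larger corpus or a pretrained language model.

```python
# Toy sentiment analysis sketch using scikit-learn (illustrative data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = positive, 0 = negative.
texts = [
    "I love this product, it works great",
    "Fantastic support and fast shipping",
    "Terrible experience, it broke after a day",
    "Awful quality, I want a refund",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a logistic regression classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["The shipping was fast and I love it"]))  # likely [1]
print(model.predict(["This is awful and it broke"]))           # likely [0]
```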

Natural language processing case study: YouTube uses natural language processing in many applications across the platform. One use most people will be familiar with is auto-generated captions: speech recognition software ingests a YouTube video and outputs captions for it. This technology first went live on the site in 2009 and has since been fine-tuned and extended across a dozen languages, thanks to the growing dataset available to the company: the videos uploaded to the platform every day.

Suggested course on Udemy: Deep Learning and NLP A-Z™: How to create a Chatbot

3. Reinforcement learning

In its simplest terms, reinforcement learning is an input- and output-based system that trains itself through trial and error to reach a certain goal, using a reward system to reinforce its decisions. The AI takes some data as input and returns an action as output. When it acts correctly, it receives a reward. The better it performs its task, the more rewards the system earns, and vice versa.

Imagine training an AI agent to predict whether an object is a carrot or a wood stick. If it correctly predicts a carrot, we give it a reward of plus one; if it erroneously predicts a wood stick, we give it a reward of minus one.
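Here is a small, self-contained sketch of that reward loop, written as a toy bandit-style learner rather than any particular production algorithm. The color feature and the reward values mirror the carrot-versus-stick example above and are invented purely for illustration.

```python
# Minimal reward-driven learner for the carrot vs. wood stick example.
# This is a toy contextual bandit, not a production RL algorithm.
import random

ACTIONS = ["carrot", "wood stick"]

def sample_object():
    """Made-up observation: a crude color feature plus the true object."""
    true_label = random.choice(ACTIONS)
    color = "orange" if true_label == "carrot" else "brown"
    return color, true_label

# Value estimates for each (observation, action) pair, learned from rewards.
values = {}

def choose_action(color, epsilon=0.1):
    # Explore occasionally, otherwise exploit the highest estimated value.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: values.get((color, a), 0.0))

def update(color, action, reward, lr=0.1):
    key = (color, action)
    values[key] = values.get(key, 0.0) + lr * (reward - values.get(key, 0.0))

for step in range(1000):
    color, truth = sample_object()
    action = choose_action(color)
    reward = 1 if action == truth else -1  # +1 for correct, -1 for wrong
    update(color, action, reward)

print(values)  # after training, "orange" should strongly favor "carrot"
```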

Key reinforcement learning applications: Personalized recommendations, advertising budget optimization, and advertising content optimization.

Reinforcement learning case study: Alibaba, the popular Chinese e-commerce site, leveraged reinforcement learning to increase its return on investment for online advertising by 240% without increasing its advertising budget. In a research paper, the Alibaba team explains how reinforcement learning was used to optimize a sponsored search campaign by building a bidding model for impressions each hour and performing real-time bidding accordingly. The paper shows how this reinforcement learning system outperformed the benchmark bidding systems.

Suggested course on Udemy: Deep Reinforcement Learning 2.0

4. Edge computing

With smartphones, smartwatches, and Internet of Things-enabled devices in our homes and cars, there is a lot of data flying around. Processing all this data is a complex exercise that typically requires sending information to cloud servers hundreds or even thousands of miles away. Lose a Wi-Fi connection and your smart device becomes a very expensive brick.

Enter edge computing, which takes the compute and data storage that devices rely on for their smarts and puts them directly on the device. This means real-time data processing that delivers much faster responses and avoids network latency. If cloud computing is big data, edge computing is instant data.

Another type of edge computing is performed on nodes. An edge computing node is a mini-server close to a local telecommunications provider. Using a node creates a bridge between cloud and local computing options. This technique results in lower costs and less time spent on data computation, making for a faster experience for the consumer.

Key edge computing applications: Interconnection of more devices, growth of Internet of Things technology.

Edge computing case study: Consider the Amazon Echo on your kitchen counter. The Alexa assistant technology does not actually live on the device. The Echo recognizes the wake-word “Alexa” locally, but it must connect to Wi-Fi and process your audio query on a cloud-based server, no matter how simple or complex the request is.

With a specially designed AI chip enabling edge computing, Amazon hopes to resolve simple questions such as “What time is it?” directly in the device, reducing the response time and providing a better user experience.
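A rough sketch of the routing decision an edge-enabled device might make: answer a short list of trivially simple intents with on-device logic and fall back to the cloud for everything else. The function names and the “simple intents” list here are our own illustration, not Amazon's actual design.

```python
# Illustrative on-device vs. cloud routing for a voice assistant.
# Function names and rules are hypothetical, not any vendor's implementation.
from datetime import datetime

SIMPLE_INTENTS = {"what time is it", "set a timer", "stop"}

def answer_on_device(query: str) -> str:
    # Lightweight logic a small edge AI chip could handle locally.
    if query == "what time is it":
        return datetime.now().strftime("It is %H:%M.")
    return "Done."

def answer_in_cloud(query: str) -> str:
    # Placeholder for the round trip to a remote speech/NLP service.
    return f"(sent to cloud) response for: {query!r}"

def handle_query(query: str) -> str:
    query = query.lower().strip("?!. ")
    if query in SIMPLE_INTENTS:
        return answer_on_device(query)   # no network round trip
    return answer_in_cloud(query)        # complex requests still need the cloud

print(handle_query("What time is it?"))
print(handle_query("Play my workout playlist"))
```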

Suggested course on Udemy: Learn BERT – most powerful NLP algorithm by Google

5. Open-source AI frameworks

The programming world is built on libraries and frameworks that take redundancies out of everyday coding work. For example, JavaScript libraries like React and Angular help developers build websites quickly and with fewer errors because they supply common components. Likewise, open-source AI programming frameworks have allowed AI technology to develop quickly. Because these tools are available to programmers, data scientists, and technical teams of all levels, AI research is no longer exclusive to Silicon Valley professionals or Ph.D. candidates.

Thanks to the libraries and platforms built for AI functionality, highly complex artificial intelligence algorithms, models, pipelines, and training procedures are now accessible to anyone with an interest in the technology. Say you want to build a computer vision project: some open-source AI frameworks let you implement a working computer vision system in very few lines of code.

Key open-source AI framework applications: Prototype and train complex AI algorithms; build pipelines to define, optimize, and assess an AI model; automate the training of a reinforcement learning module; build neural networks with just a few lines of code. 

Open-source AI framework case study: TensorFlow is an AI framework developed by Google that can be used across any branch of artificial intelligence. With TensorFlow, you can build a convolutional neural network for image classification, and some TensorFlow modules also simplify the creation of NLP systems. It is among the most popular AI frameworks, especially since the release of TensorFlow 2.0, which allows users to create even more advanced AI systems.
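To give a sense of what “a few lines of code” means in practice, here is a minimal convolutional network for digit classification built with the tf.keras API in TensorFlow 2.x. The hyperparameters are arbitrary and chosen for brevity; this is a sketch, not a tuned model.

```python
# Minimal CNN for image classification with TensorFlow 2.x / Keras.
# Hyperparameters are illustrative, not tuned.
import tensorflow as tf

# Load the small MNIST digit dataset bundled with Keras.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))
```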

Suggested course on Udemy: A Complete Guide on TensorFlow 2.0 using Keras API

There are many more open-source frameworks and libraries advancing artificial intelligence applications. We dive deeper into frameworks, as well as the real-life business use cases of these AI trends, in the full webinar, so be sure to watch it here.

Find out how you can train your team on these hot AI skills with a Udemy for Business subscription.


About the author:

Kirill Eremenko is an Amazon best-selling author, ex-Deloitte consultant, and serial entrepreneur with 10 years of experience in data science and education. With 50+ online courses and over 1 million students worldwide, he delivers inspiring and engaging courses in data science, machine learning, and AI. Kirill is the founder of SuperDataScience, a data science academy for all professionals, and of DataScienceGo, an annual careers conference and networking event that connects aspiring data scientists with major industry leaders.

Hadelin de Ponteves is the co-founder and CEO at BlueLife AI, which leverages the power of cutting-edge artificial intelligence to empower businesses to make massive profits by innovating, automating processes, and maximizing efficiency. Hadelin has created 50+ top-rated educational courses on topics such as machine learning, deep learning, artificial intelligence, and blockchain, teaching nearly 800,000 students.

About Udemy for Business:

Udemy for Business is a learning platform that helps companies stay competitive in today’s rapidly changing workplace by offering fresh, relevant on-demand learning content, curated from the Udemy marketplace. Our mission is to help employees do whatever comes next—whether that’s the next project to do, skill to learn, or role to master. We’d love to partner with you on your employee development needs. Get in touch with us at business@udemy.com.
