[WAITING LIST] NVIDIA DLI Workshop: Building Transformer-Based NLP Applications

DUE TO THE HIGH NUMBER OF REGISTRANTS, YOU CAN NOW SUBSCRIBE TO A WAITING LIST. WE ARE WORKING ON INCREASING THE NUMBER OF PARTICIPANTS IN THIS WORKSHOP OR ON ORGANIZING ANOTHER ONE, TO MAKE SURE EVERYONE CAN JOIN.

*** REGISTRATION IS POSSIBLE WITH A UNIVERSITY OR RESEARCH LAB EMAIL ADDRESS ONLY. REGISTRATIONS WILL BE VERIFIED IN THE LOBBY OF THE ONLINE EVENT. ***

This is an advanced-level workshop. You need a basic understanding of the Python programming language and of how deep learning works.

Date and time: 24 November 2023, 9:00-17:00 CET
Place: online (link will be sent before the event)

Applications for natural language processing (NLP) and generative AI have exploded in the past decade.

With the proliferation of applications like chatbots and intelligent virtual assistants, organizations are infusing their businesses with more interactive human-machine experiences. Understanding how transformer-based large language models (LLMs) can be used to manipulate, analyze, and generate text-based data is essential.

Modern pretrained LLMs can encapsulate the nuance, context, and sophistication of language, just as humans do. When fine-tuned and deployed correctly, developers can use these LLMs to build powerful NLP applications that provide natural and seamless human-computer interactions within chatbots, AI voice agents, and more.

Transformer-based LLMs, such as Bidirectional Encoder Representations from Transformers (BERT), have revolutionized NLP by offering accuracy comparable to human baselines on benchmarks like SQuAD for question answering, entity recognition, intent recognition, sentiment analysis, and more.
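To give a taste of what "question answering with a pretrained BERT-family model" looks like in practice, here is a minimal, illustrative sketch using the Hugging Face transformers library. This is not workshop material, and the model checkpoint named below is an assumption chosen only for illustration; the workshop's own exercises may use different tooling.

    # Minimal sketch (not workshop material): extractive question answering with a
    # pretrained, SQuAD-fine-tuned BERT-family model via the Hugging Face pipeline API.
    # The model name is a placeholder assumption for illustration.
    from transformers import pipeline

    qa = pipeline(
        "question-answering",
        model="distilbert-base-cased-distilled-squad",  # small SQuAD-fine-tuned checkpoint
    )

    context = (
        "The workshop takes place online on 24 November 2023 and covers "
        "transformer-based large language models such as BERT."
    )
    result = qa(question="When does the workshop take place?", context=context)
    print(result["answer"], result["score"])  # span extracted from the context, plus a confidence score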

Learning Objectives

By participating in this workshop, you’ll:

  • Understand how transformers are used as the basic building blocks of modern LLMs for NLP applications
  • Understand how self-supervision improves upon the transformer architecture in BERT, Megatron, and other LLM variants for superior NLP results
  • Leverage pretrained, modern LLMs to solve multiple NLP tasks such as text classification, named-entity recognition (NER), and question answering (a brief illustrative sketch follows this list)
  • Manage inference challenges and deploy refined models for live applications
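As a concrete illustration of the third objective, below is a minimal sketch of fine-tuning a pretrained encoder for text classification with the Hugging Face transformers and datasets libraries. It is not workshop material; the model checkpoint and the toy data are placeholder assumptions, and a real run would use a proper labelled corpus and evaluation set.

    # Minimal, illustrative sketch (not workshop material): fine-tuning a pretrained
    # encoder for binary text classification. Model name and toy data are assumptions.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    model_name = "distilbert-base-uncased"  # assumed small checkpoint for the sketch
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # Toy training data with 1 = positive, 0 = negative labels.
    train = Dataset.from_dict({
        "text": ["great talk, very clear", "the demo kept crashing"],
        "label": [1, 0],
    })
    train = train.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="clf-sketch", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=train,
        tokenizer=tokenizer,  # enables dynamic padding of the toy batches
    )
    trainer.train()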
Registration form fields (* = required):
  • Email *
  • Your name *
  • Country *
  • Institute *
  • Are you a student, university staff or non-profit researcher? *