Hack Session: State of Transfer Learning in NLP – BERT vs GPT2 vs XLNet

Nov 13, 2019

17:00

Auditorium 1

60 minutes

Natural Language Processing

The NLP landscape has changed tremendously over the last 18 months. As the NLP researcher Sebastian Ruder put it, NLP’s ImageNet moment has arrived: the latest pre-trained language models, such as BERT, GPT2, Transformer-XL, XLNet, and XLM, are achieving state-of-the-art results on a wide range of NLP tasks, and they may have the same impact on NLP that ImageNet had on computer vision.

The NLP world has moved a long way beyond word vectors in a short span of time. In this hack session, we will compare the performance of these pre-trained language models with that of pre-trained word vector models on text classification tasks, using Python and PyTorch; a minimal fine-tuning sketch follows.
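
To make this concrete, here is a minimal sketch of fine-tuning a pre-trained model on a toy binary text classification task. It assumes a recent version of the Hugging Face transformers library (our choice of toolkit here, not named in the session description); the example sentences, labels, and the bert-base-uncased checkpoint are purely illustrative.

    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    texts = ["great movie", "terrible plot"]   # toy examples
    labels = torch.tensor([1, 0])              # 1 = positive, 0 = negative

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    # Pad/truncate so the batch forms one rectangular tensor.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for _ in range(3):                           # a few passes over the toy batch
        loss = model(**batch, labels=labels)[0]  # first output is the loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The same loop carries over to XLNet or XLM by swapping in the corresponding tokenizer and sequence classification classes.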

Key Takeaways:

  • Building pre-trained word embedding models for text classification (a baseline sketch follows this list)
  • Using SOTA NLP models like BERT, XLNet, XLM
  • Fine-tuning pre-trained language models for text classification tasks
  • Performance comparison of these different models
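
For the first takeaway, the baseline below averages each document's pre-trained word vectors and trains a linear classifier on top. This is a minimal sketch: the four-word vocabulary and the randomly initialized vectors are hypothetical stand-ins for embeddings loaded from a file such as GloVe or word2vec.

    import torch
    import torch.nn as nn

    # Hypothetical vocabulary; in practice these vectors would be loaded
    # from a pre-trained embedding file (e.g. GloVe or word2vec).
    vocab = {"great": 0, "movie": 1, "terrible": 2, "plot": 3}
    embed_dim = 50
    pretrained_vectors = torch.randn(len(vocab), embed_dim)  # stand-in weights

    embedding = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
    classifier = nn.Linear(embed_dim, 2)  # two output classes

    def doc_vector(tokens):
        # Represent a document as the average of its word vectors.
        ids = torch.tensor([vocab[t] for t in tokens if t in vocab])
        return embedding(ids).mean(dim=0)

    docs = [["great", "movie"], ["terrible", "plot"]]
    labels = torch.tensor([1, 0])
    features = torch.stack([doc_vector(d) for d in docs])

    # Train only the linear layer; the embeddings stay frozen.
    optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(100):
        loss = loss_fn(classifier(features), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()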

  • Sudalai Rajkumar (SRK)

    Data Scientist

    H2O.ai
