Transfer Learning in Natural Language Processing: A Survey

Authors

  • Bijesh Dhyani

DOI:

https://doi.org/10.17762/msea.v70i1.2312

Abstract

Transfer learning is a rapidly growing area of machine learning and natural language processing (NLP) in which models trained on one task are reused to solve related tasks. This paper presents a comprehensive survey of transfer learning techniques in NLP, focusing on five influential pre-trained language models: (1) BERT, (2) GPT, (3) ELMo, (4) RoBERTa, and (5) ALBERT. We discuss the fundamental concepts, methodologies, and performance benchmarks of each model, highlighting the approaches they take to leverage pre-existing knowledge for effective learning. We also review recent advances and open challenges in transfer learning for NLP, along with promising directions for future research in this domain.
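The core idea the abstract describes can be sketched in a few lines: a pre-trained model supplies reusable features, and only a small task-specific head is trained on the new problem. The sketch below is a toy stand-in, not any of the surveyed models; the feature extractor, dataset, and labels are all hypothetical, chosen only to make the frozen-extractor/trainable-head pattern concrete.

```python
import math

def pretrained_features(text):
    # Stand-in for a frozen pre-trained encoder (e.g. a BERT-style model):
    # it maps raw text to a fixed-length feature vector and is never updated.
    # Here we use two trivial surface features purely for illustration.
    return [len(text) / 10.0, float(sum(c in "!?" for c in text))]

def train_head(features, labels, lr=0.5, epochs=200):
    # Train a logistic-regression "head" on top of the frozen features.
    # Only these weights change, mimicking feature-based transfer learning.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, text):
    x = pretrained_features(text)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

# Toy "downstream task": emphatic vs. plain text (hypothetical labels).
texts = ["great!!", "ok", "terrible!?!", "fine"]
labels = [1, 0, 1, 0]
feats = [pretrained_features(t) for t in texts]
w, b = train_head(feats, labels)
```

In practice the surveyed models replace `pretrained_features` with a large neural encoder, and the head is either trained on frozen features (as here) or fine-tuned jointly with the encoder.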

Published

2021-01-31

How to Cite

Dhyani, B. (2021). Transfer Learning in Natural Language Processing: A Survey. Mathematical Statistician and Engineering Applications, 70(1), 303–311. https://doi.org/10.17762/msea.v70i1.2312

Issue

Section

Articles