XLM
----------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The XLM model was proposed in `Cross-lingual Language Model Pretraining <https://arxiv.org/abs/1901.07291>`__ by Guillaume Lample and Alexis Conneau. It is a transformer pretrained using one of the following objectives:

  • a causal language modeling (CLM) objective (next token prediction),
  • a masked language modeling (MLM) objective (BERT-like), or
  • a translation language modeling (TLM) objective (an extension of BERT's MLM to multiple language inputs).

The abstract from the paper is the following:

Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.

Tips:

  • XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation).
  • XLM has multilingual checkpoints which leverage a specific :obj:`lang` parameter. Check out the :doc:`multi-lingual <../multilingual>` page for more information; a short usage sketch follows this list.
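
As a minimal sketch of the :obj:`lang` handling (the checkpoint name and example sentence are only illustrative), a multilingual checkpoint can be given a :obj:`langs` tensor built from the tokenizer's :obj:`lang2id` mapping, following the pattern described on the :doc:`multi-lingual <../multilingual>` page:

.. code-block:: python

    import torch
    from transformers import XLMTokenizer, XLMWithLMHeadModel

    tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
    model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")

    # Multilingual checkpoints expose the languages they handle and their ids
    print(tokenizer.lang2id)  # e.g. {'en': 0, 'fr': 1}

    input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1

    # Build a langs tensor with the same shape as input_ids, filled with the id of the input language
    language_id = tokenizer.lang2id["en"]
    langs = torch.tensor([language_id] * input_ids.shape[1]).view(1, -1)

    outputs = model(input_ids, langs=langs)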

The original code can be found `here <https://github.com/facebookresearch/XLM>`__.

XLMConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMConfig
    :members:
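
For illustration, a configuration can be instantiated on its own and used to build a randomly initialised model; a minimal sketch (the library defaults are not tied to any particular checkpoint):

.. code-block:: python

    from transformers import XLMConfig, XLMModel

    # Build a configuration with the default hyperparameters, then a model from it.
    # The resulting model has randomly initialised weights, it is not pretrained.
    configuration = XLMConfig()
    model = XLMModel(configuration)

    # The configuration a model was built with stays accessible afterwards
    configuration = model.config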

XLMTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary


XLM specific outputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.modeling_xlm.XLMForQuestionAnsweringOutput
    :members:


XLMModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMModel
    :members: forward
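
A minimal usage sketch for the base model (the checkpoint name and example sentence are only illustrative):

.. code-block:: python

    from transformers import XLMTokenizer, XLMModel

    tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
    model = XLMModel.from_pretrained("xlm-mlm-en-2048")

    inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
    outputs = model(**inputs)

    # First element of the output is the last hidden state of every input token
    last_hidden_states = outputs[0]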


XLMWithLMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMWithLMHeadModel
    :members: forward


XLMForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMForSequenceClassification
    :members: forward
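
A minimal sketch of a classification forward pass with a label. Note that a checkpoint that was not fine-tuned for classification (such as the one used here) gets a randomly initialised classification head:

.. code-block:: python

    import torch
    from transformers import XLMTokenizer, XLMForSequenceClassification

    tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
    model = XLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048")

    inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
    labels = torch.tensor([1])  # a single, hypothetical class label for the batch of size 1

    outputs = model(**inputs, labels=labels)
    loss = outputs[0]
    logits = outputs[1]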


XLMForMultipleChoice
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMForMultipleChoice
    :members: forward


XLMForTokenClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMForTokenClassification
    :members: forward


XLMForQuestionAnsweringSimple
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMForQuestionAnsweringSimple
    :members: forward
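
A minimal sketch of the simple span-prediction head, supervised with hypothetical start/end positions (the checkpoint name and example are only illustrative):

.. code-block:: python

    import torch
    from transformers import XLMTokenizer, XLMForQuestionAnsweringSimple

    tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
    model = XLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048")

    question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
    inputs = tokenizer(question, text, return_tensors="pt")

    # Token indices of the answer span (hypothetical values, for illustration only)
    start_positions = torch.tensor([1])
    end_positions = torch.tensor([3])

    outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
    loss = outputs[0]
    start_logits, end_logits = outputs[1], outputs[2]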


XLMForQuestionAnswering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XLMForQuestionAnswering
    :members: forward


TFXLMModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFXLMModel
    :members: call
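
A minimal usage sketch for the TensorFlow base model (the checkpoint name and example sentence are only illustrative):

.. code-block:: python

    from transformers import XLMTokenizer, TFXLMModel

    tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
    model = TFXLMModel.from_pretrained("xlm-mlm-en-2048")

    inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
    outputs = model(inputs)

    # First element of the output is the last hidden state of every input token
    last_hidden_states = outputs[0]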


TFXLMWithLMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFXLMWithLMHeadModel
    :members: call


TFXLMForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFXLMForSequenceClassification
    :members: call


TFXLMForMultipleChoice
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFXLMForMultipleChoice
    :members: call


TFXLMForTokenClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFXLMForTokenClassification
    :members: call



TFXLMForQuestionAnsweringSimple
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFXLMForQuestionAnsweringSimple
    :members: call