
Conversation

@Datta0 (Collaborator) commented on Oct 25, 2024

There is not much of a difference between upcasting lm_head and embed_tokens to FP32 and keeping them in bfloat16 while training/finetuning.
Below are the results. Note that I had to shift the right axis up a little to show the difference (on average 8×10⁻⁴, which is likely just floating-point imprecision adding up).
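
For context, a minimal sketch of what skipping the upcast means in practice is shown below (this is an illustration, not the actual Unsloth patch, and the model name is only an example): the embed_tokens and lm_head weights are simply left in bfloat16 instead of being cast to float32 before training.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the model directly in bfloat16 (example model name; any causal LM works).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    torch_dtype=torch.bfloat16,
)

# The change is, in effect, to *not* run upcasts like these before training:
#   model.get_input_embeddings().to(torch.float32)   # embed_tokens
#   model.get_output_embeddings().to(torch.float32)  # lm_head

# Both modules therefore stay in bfloat16, halving their memory footprint.
assert model.get_input_embeddings().weight.dtype == torch.bfloat16
assert model.get_output_embeddings().weight.dtype == torch.bfloat16
```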

Coming to the memory stats:

bfloat16 (no upcast) - continued pretraining
GPU = NVIDIA A100-PCIE-40GB. Max memory = 39.496 GB.
11.398 GB of memory reserved.
Goes up to 29.2 GiB as observed via nvtop / torch.cuda.mem_get_info().
For the IFT (instruction finetuning) phase, it is 25.5 GiB.
675.8237 seconds used for training.
11.26 minutes used for training.
Peak reserved memory = 28.701 GB.
Peak reserved memory for training = 17.303 GB.
Peak reserved memory % of max memory = 72.668 %.
Peak reserved memory for training % of max memory = 43.809 %.
float32 (upcast) - continued pretraining
GPU = NVIDIA A100-PCIE-40GB. Max memory = 39.496 GB.
16.535 GB of memory reserved.
Goes up to 37.5 GiB as observed via nvtop / torch.cuda.mem_get_info().
737.8355 seconds used for training.
12.3 minutes used for training.
Peak reserved memory = 37.16 GB.
Peak reserved memory for training = 20.625 GB.
Peak reserved memory % of max memory = 94.085 %.
Peak reserved memory for training % of max memory = 52.22 %.
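
For reference, numbers like the ones above can be collected with the standard torch.cuda counters; below is a minimal sketch assuming `trainer` is any transformers/TRL trainer whose train() returns metrics containing 'train_runtime' (the helper name is mine, not part of this PR).

```python
import torch

def report_memory_stats(trainer):
    # Device-level stats before training.
    gpu = torch.cuda.get_device_properties(0)
    max_memory = round(gpu.total_memory / 1024**3, 3)
    start_reserved = round(torch.cuda.max_memory_reserved() / 1024**3, 3)
    print(f"GPU = {gpu.name}. Max memory = {max_memory} GB.")
    print(f"{start_reserved} GB of memory reserved.")

    stats = trainer.train()

    # Peak reserved memory after training, also as a fraction of total GPU memory.
    peak_reserved = round(torch.cuda.max_memory_reserved() / 1024**3, 3)
    used_for_training = round(peak_reserved - start_reserved, 3)
    print(f"{stats.metrics['train_runtime']} seconds used for training.")
    print(f"Peak reserved memory = {peak_reserved} GB.")
    print(f"Peak reserved memory for training = {used_for_training} GB.")
    print(f"Peak reserved memory % of max memory = {round(peak_reserved / max_memory * 100, 3)} %.")

    # Driver-level usage, which is what nvtop shows.
    free_b, total_b = torch.cuda.mem_get_info()
    print(f"Currently in use on the device: {(total_b - free_b) / 1024**3:.1f} GiB")
```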

[image: results plot]

[image: results plot]

@Datta0 changed the title from "Do not upcast lm_head and embedding to float32" to "25% less mem and 10% faster training: Do not upcast lm_head and embedding to float32" on Oct 25, 2024
@danielhanchen changed the base branch from main to nightly on October 25, 2024, 08:27
@danielhanchen merged commit 6776055 into unslothai:nightly on Oct 25, 2024
@Datta0 deleted the lm_head_no_upcast branch on October 25, 2024, 08:45
danielhanchen added a commit that referenced this pull request Oct 26, 2024
danielhanchen added a commit that referenced this pull request Nov 5, 2024
danielhanchen added a commit that referenced this pull request Nov 6, 2024
danielhanchen added a commit that referenced this pull request Nov 6, 2024
danielhanchen added a commit that referenced this pull request Nov 7, 2024
danielhanchen added a commit that referenced this pull request Nov 12, 2024
danielhanchen added a commit that referenced this pull request Nov 14, 2024