Update tokenizer #3061
Conversation
lvhan028 commented on Jan 21, 2025
- Remove the unused tokenizer class `SentencePieceTokenizer`
- Let `async_engine` initialize the tokenizer instead of the inference engines (a minimal sketch of the intent follows this list)
- The converter now places the tokenizer-related files under the root directory of the workspace rather than the weight directory
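The second and third items can be illustrated with a minimal sketch. Nothing below is taken from the actual lmdeploy code: the class and function names (`AsyncEngine`, `InferenceEngine`, `copy_tokenizer_files`, `TOKENIZER_FILES`) and the file list are assumptions used only to show the intended ownership and layout changes, namely that the async engine constructs the tokenizer once and injects it into the inference engine, and that the converter writes tokenizer-related files to the workspace root instead of the weight directory.

```python
import shutil
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Tokenizer:
    """Stand-in for lmdeploy's tokenizer wrapper (hypothetical)."""
    model_path: str


class InferenceEngine:
    """Hypothetical backend engine: it no longer builds its own tokenizer."""

    def __init__(self, model_path: str, tokenizer: Tokenizer):
        self.model_path = model_path
        self.tokenizer = tokenizer  # injected by the async engine


class AsyncEngine:
    """Hypothetical front end: the single place where the tokenizer is created."""

    def __init__(self, model_path: str):
        self.tokenizer = Tokenizer(model_path)
        self.engine = InferenceEngine(model_path, self.tokenizer)


# Hypothetical converter step; the exact file set depends on the model.
TOKENIZER_FILES = ("tokenizer.json", "tokenizer_config.json",
                   "special_tokens_map.json", "tokenizer.model")


def copy_tokenizer_files(model_dir: str, workspace_dir: str) -> None:
    """Copy tokenizer files into the workspace root, not the weight directory."""
    dst = Path(workspace_dir)  # workspace root rather than a nested weights folder
    dst.mkdir(parents=True, exist_ok=True)
    for name in TOKENIZER_FILES:
        src = Path(model_dir) / name
        if src.exists():
            shutil.copy2(src, dst / name)
```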
Unit tests failed.
Does it mean our convert.py should deprecate meta_llama models?
Yes, you are right.
LGTM
* main: (90 commits)
  Fix cogvlm and phi3vision (InternLM#3137)
  support release pipeline (InternLM#3069)
  [ci] fix some fail in daily testcase (InternLM#3134)
  Fix internvl2.5 error after eviction (InternLM#3122)
  fix UT of deepseek chat template (InternLM#3125)
  Update benchmark script and user guide (InternLM#3110)
  bump version to v0.7.0.post3 (InternLM#3115)
  fix postional argument (InternLM#3086)
  remove logitswarper (InternLM#3109)
  [Fix] fix the URL judgment problem in Windows (InternLM#3103)
  fix user guide about cogvlm deployment (InternLM#3088)
  add option max-concurrent-requests for api_server(InternLM#2961)
  bump version to v0.7.0.post2 (InternLM#3094)
  Fix xcomposer2d5 (InternLM#3087)
  Add system role to deepseek chat template (InternLM#3031)
  Update tokenizer (InternLM#3061)
  Add deepseek-r1 chat template (InternLM#3072)
  bump version to v0.7.0.post1 (InternLM#3076)
  More arguments in api_client, update docstrings (InternLM#3077)
  fix sliding window mgr (InternLM#3068)
  ...

# Conflicts:
#   lmdeploy/turbomind/turbomind.py