Issues: predibase/lorax
Some error records and questions [question: Further information is requested]
#115 opened Dec 8, 2023 by KrisWongz (1 of 4 tasks)
Extend adapters to support MLP head for embeddings, classification [enhancement: New feature or request]
#10 opened Nov 10, 2023 by tgaddair
Add support for additional custom models [enhancement]
#12 opened Nov 10, 2023 by tgaddair (7 tasks)
Panic when adapter cannot be loaded [bug: Something isn't working]
#14 opened Nov 12, 2023 by tgaddair (4 tasks)
Project Roadmap [enhancement]
#57 opened Nov 22, 2023 by tgaddair (16 of 36 tasks)
Use special tokens specific to the fine-tuned adapter during decoding [enhancement]
#71 opened Nov 27, 2023 by tgaddair
Return number of input tokens when details=True [enhancement] [good first issue: Good for newcomers]
#83 opened Nov 29, 2023 by tgaddair
Does lorax currently support GPT2 finetuned adapters? [enhancement]
#84 opened Nov 30, 2023 by abhijithnair1 (2 of 4 tasks)
getting error during inference "Unsupported head size: 32" [bug]
#86 opened Nov 30, 2023 by abhijithnair1 (2 of 4 tasks)
how does this differ from s-Lora? [question]
#90 opened Nov 30, 2023 by priyankat99
How to use --master-addr <MASTER_ADDR>|--master-port <MASTER_PORT>? [question]
#99 opened Dec 4, 2023 by prd-tuong-nguyen (2 of 4 tasks)
Is there any plan to support dynamic lora for qwen/chatglm models? [enhancement]
#101 opened Dec 5, 2023 by KrisWongz
Fuse allgather requests across adapters and q, k, v to reduce small network requests [enhancement]
#6 opened Nov 10, 2023 by tgaddair
Question regarding Punica integration [question]
#107 opened Dec 6, 2023 by psych0v0yager
Latency increase when run on multi-GPU [question]
#116 opened Dec 8, 2023 by prd-tuong-nguyen (2 of 4 tasks)
Add RoPE scaling CLI args [enhancement] [good first issue]
#142 opened Dec 19, 2023 by tgaddair
Extend testing [enhancement] [good first issue]
#148 opened Dec 22, 2023 by flozi00
Second GPU is not found when running --sharded true [question]
#150 opened Dec 24, 2023 by psych0v0yager (2 of 4 tasks)
Support custom tokenizer when loading a local model [bug]
#151 opened Dec 25, 2023 by yinjiaoyuan
Support multiple ranks per SGMV op [enhancement]
#160 opened Jan 4, 2024 by tgaddair
ValueError: Adapter '/data/llama2-lora' is not compatible with model '/data/Llama-2-7b-chat-hf'. Use --model-id '/new-model/llama2-7b/Llama-2-7b-chat-hf' instead. [question]
#172 opened Jan 10, 2024 by Senna1960321 (2 of 4 tasks)
Skip the download process from lorax-launcher if model weights already on local disk [enhancement]
#180 opened Jan 12, 2024 by tgaddair