Issues: SJTU-IPADS/PowerInfer
Meta: Implementing hybrid inference across key desktop platforms
#92
opened Dec 27, 2023 by
hodlen
Issues list
Am I doing something wrong?
question
Further information is requested
#216
opened Aug 28, 2024 by
RealMrCactus
3 tasks done
Is there a WeChat or QQ group or another discussion channel, or are there plans to start one?
#215
opened Aug 15, 2024 by
lzcchl
Some questions about Fig. 4
question
Further information is requested
#213
opened Jul 23, 2024 by
rhmaaa
How do I obtain the predictor files?
question
Further information is requested
#211
opened Jul 15, 2024 by
LDLINGLINGLING
3 tasks
Feature request: Support for PHI3 mini
enhancement
New feature or request
#210
opened Jul 14, 2024 by
raymond-infinitecode
3 tasks
Is PowerInfer compatible with llama.cpp models?
question
Further information is requested
#209
opened Jul 5, 2024 by
mailonghua
The output for Q4_gguf is strange again!
bug-unconfirmed
Unconfirmed bugs
#208
opened Jul 4, 2024 by
milktea888
About PowerInfer-2
enhancement
New feature or request
#207
opened Jul 2, 2024 by
Ther-nullptr
3 tasks done
Where is the TurboSparse-Mixtral mlp_predictor?
question
Further information is requested
#203
opened Jun 27, 2024 by
MatthewCroughan
Can this be used together with vLLM?
question
Further information is requested
#202
opened Jun 26, 2024 by
yadandan
How to convert the ProSparse-LLaMA-2-13B model to .gguf?
question
Further information is requested
#201
opened Jun 23, 2024 by
Graysonicc
3 tasks done
Supported quantization types
question
Further information is requested
#196
opened Jun 14, 2024 by
deleteeeee
Source for v2 (mobile inference engine)
question
Further information is requested
#194
opened Jun 12, 2024 by
peeteeman
Needs quite a long time to load the model
question
Further information is requested
#188
opened May 21, 2024 by
meicale
Will this work with Falcon 2?
question
Further information is requested
#186
opened May 14, 2024 by
aaronrmm
Question about abnormal performance measured on an A100 GPU
question
Further information is requested
#184
opened May 4, 2024 by
bulaikexiansheng
Are there plans to support LLaMA 3 70B?
enhancement
New feature or request
#183
opened May 1, 2024 by
xiasw81
CUDA cannot be found on an A100-80G
question
Further information is requested
#182
opened Apr 24, 2024 by
bulaikexiansheng
Where is the definition or addition location of GGML_USE_HYBRID_THREADING?
question
Further information is requested
#172
opened Mar 25, 2024 by
wfloveiu
Two questions that I want to solve
question
Further information is requested
#167
opened Mar 18, 2024 by
yeptttt
Will we have instruct fine-tuned model support in the future?
question
Further information is requested
#164
opened Mar 13, 2024 by
ZeonfaiHo
3 tasks done
[Question]: High PPL on wikitext2 of ReLU-LLAMA-7B for language modeling tasks
question
Further information is requested
#162
opened Mar 11, 2024 by
llCurious
3 tasks done