* flash attn: add auto mode for llama engine

  If the user does not specify fa in the environment, use auto-mode.

* review comments

* ensure kv cache quantized types have FA explicitly enabled

  additional review comments
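A minimal Go sketch of the behavior this commit describes, under stated assumptions: the helper name `resolveFlashAttn`, the `FlashAttnMode` tri-state, and the classification of non-f16/f32 cache types as quantized are all hypothetical, not the repository's actual code. The `OLLAMA_FLASH_ATTENTION` and `OLLAMA_KV_CACHE_TYPE` environment variables are the ones Ollama documents, but the real resolution logic in the llama engine may differ.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// FlashAttnMode captures the three effective states: explicitly on,
// explicitly off, or auto (the engine decides per model and backend).
type FlashAttnMode int

const (
	FlashAttnAuto FlashAttnMode = iota
	FlashAttnOn
	FlashAttnOff
)

// resolveFlashAttn is a hypothetical helper sketching the commit's logic:
// an unset env var means auto mode, and quantized KV cache types get
// flash attention explicitly enabled, since they depend on it.
func resolveFlashAttn() FlashAttnMode {
	fa := strings.ToLower(os.Getenv("OLLAMA_FLASH_ATTENTION"))
	kv := strings.ToLower(os.Getenv("OLLAMA_KV_CACHE_TYPE"))
	// Treat anything other than a full-precision cache type as quantized
	// (e.g. q8_0, q4_0); this classification is an assumption.
	kvQuantized := kv != "" && kv != "f16" && kv != "f32"

	switch fa {
	case "1", "true":
		return FlashAttnOn // user explicitly enabled FA
	case "0", "false":
		return FlashAttnOff // user explicitly disabled FA
	default:
		if kvQuantized {
			return FlashAttnOn // quantized KV cache needs FA enabled
		}
		return FlashAttnAuto // no preference: let the engine decide
	}
}

func main() {
	fmt.Println(resolveFlashAttn())
}
```

The point of the tri-state is that "unset" is distinguishable from "off": only an explicit `false` disables flash attention, while silence defers the choice to the engine, except where a quantized KV cache removes that freedom.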