
jundot/omlx

4,827 stars · Python · Updated Mar 15, 2026
About this repository
LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar
Topics: apple-silicon, inference-server, llm, macos, mlx, openai-api
Note: this is an open source repository surfaced by WokHei; the description above is taken from the original GitHub repo.
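The "openai-api" topic suggests the server speaks the standard OpenAI chat-completions protocol, so an OpenAI-style client should be able to talk to it locally. The sketch below is a minimal, hypothetical request using only the Python standard library; the host, port, endpoint path, and model identifier are assumptions for illustration and are not documented in this listing.

```python
import json
import urllib.request

# Assumed base URL for a locally running OpenAI-compatible server.
# Port and path are guesses, not values taken from the omlx repo.
BASE_URL = "http://localhost:8000/v1"

payload = {
    "model": "mlx-community/Llama-3.2-3B-Instruct-4bit",  # placeholder model id
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Send the request and print the assistant's reply from the standard
# OpenAI-style response shape: choices[0].message.content.
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
    print(reply["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client (for example, the official `openai` Python package pointed at a local `base_url`) would work the same way, under the same assumptions about the server's address and model naming.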