Hi everyone!
I built an open-source macOS app called M-Courtyard that provides a full GUI
for fine-tuning LLMs using mlx-lm on Apple Silicon.
It integrates directly with Hugging Face’s mlx-community models — you can
browse, download, and fine-tune models like Qwen 3, DeepSeek R1, Llama 3,
and more, all through a visual interface.
Workflow:
- Import documents → Auto-generate training data
- Select a model from mlx-community (auto-download)
- Fine-tune with LoRA/DoRA — real-time loss visualization
- Test with built-in chat
- Export to Ollama
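For anyone curious what the first step of the workflow involves under the hood: mlx-lm's LoRA trainer accepts plain JSONL files where each line is a `{"text": ...}` record. Here is a minimal sketch of turning a folder of documents into that format — the function name, chunk size, and `.txt`-only handling are illustrative assumptions, not M-Courtyard's actual generator, which is more sophisticated:

```python
import json
from pathlib import Path

def docs_to_jsonl(doc_dir: str, out_path: str, chunk_chars: int = 2000) -> int:
    """Split plain-text documents into fixed-size chunks and write them
    as {"text": ...} JSONL records (the simplest training-data format
    mlx-lm's LoRA trainer accepts). Returns the number of records written."""
    records = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for doc in sorted(Path(doc_dir).glob("*.txt")):
            text = doc.read_text(encoding="utf-8")
            # Naive fixed-width chunking; a real pipeline would split on
            # sentence or section boundaries instead.
            for start in range(0, len(text), chunk_chars):
                chunk = text[start:start + chunk_chars].strip()
                if chunk:
                    out.write(json.dumps({"text": chunk}) + "\n")
                    records += 1
    return records
```

The resulting file can be used as `train.jsonl` in the data directory that mlx-lm's LoRA training expects.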
No CLI, no Python scripts needed.
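For reference, the "Export to Ollama" step conceptually maps to fusing the trained adapter into the base model, converting it to a format Ollama can run (typically GGUF), and registering it with a Modelfile — the app handles all of this, but a hypothetical Modelfile would look roughly like this (filename and parameters are illustrative):

```
# Hypothetical Modelfile for a fine-tuned model exported as GGUF
FROM ./my-finetuned-model.gguf
PARAMETER temperature 0.7
SYSTEM "You are a helpful assistant fine-tuned on my documents."
```

which would then be registered with `ollama create my-model -f Modelfile`.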
GitHub: https://github.com/tuwenbo0120/m-courtyard — M-Courtyard: Local AI Model Fine-tuning Assistant for Apple Silicon. Zero-code, zero-cloud, privacy-first desktop app powered by Tauri + React + mlx-lm.
Requirements: macOS 14+, Apple Silicon, 16GB+ RAM recommended
Would love to hear your thoughts, especially from fellow MLX users!