Instructions for using lmstudio-community/functiongemma-270m-it-MLX-5bit with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lmstudio-community/functiongemma-270m-it-MLX-5bit")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lmstudio-community/functiongemma-270m-it-MLX-5bit")
model = AutoModelForCausalLM.from_pretrained("lmstudio-community/functiongemma-270m-it-MLX-5bit")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
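functiongemma is a function-calling model, so a tool-use prompt is often more representative than plain chat. Below is a minimal sketch, assuming the model's chat template accepts the standard `tools` argument of `apply_chat_template`; `get_weather` is a hypothetical example tool, not part of this repo:

```python
# Function calling with Transformers: a minimal sketch.
# Assumes the chat template supports the standard `tools` argument;
# `get_weather` is a hypothetical example tool.
from transformers import AutoTokenizer, AutoModelForCausalLM

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny"  # stub: a real tool would query a weather API

model_id = "lmstudio-community/functiongemma-270m-it-MLX-5bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],  # JSON schema is derived from the signature and docstring
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
# The model should emit a structured call to get_weather rather than prose.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```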
- MLX
How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("lmstudio-community/functiongemma-270m-it-MLX-5bit")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
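For token-by-token output, recent mlx-lm releases also expose `stream_generate`. A minimal sketch, assuming a version whose chunks carry a `.text` field:

```python
# Streaming generation: a minimal sketch, assuming a recent mlx-lm
# where stream_generate yields response chunks with a .text field.
from mlx_lm import load, stream_generate

model, tokenizer = load("lmstudio-community/functiongemma-270m-it-MLX-5bit")

messages = [{"role": "user", "content": "Write a story about Einstein"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
```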
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
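How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with LM Studio:
Download and load the model in the LM Studio app, then start its local server (OpenAI-compatible, http://localhost:1234/v1 by default). A minimal sketch with the `openai` Python client, assuming the server is running with this model loaded:

```python
# Calling LM Studio's local OpenAI-compatible server: a minimal sketch.
# Assumes the server is running on its default port 1234 with this model loaded.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# If unsure of the exact id the server exposes, list the models first:
# print([m.id for m in client.models.list().data])
response = client.chat.completions.create(
    model="lmstudio-community/functiongemma-270m-it-MLX-5bit",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```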
- vLLM
How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "lmstudio-community/functiongemma-270m-it-MLX-5bit"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "lmstudio-community/functiongemma-270m-it-MLX-5bit",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker
```shell
docker model run hf.co/lmstudio-community/functiongemma-270m-it-MLX-5bit
```
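vLLM can also run the model offline, in-process, without a server. A minimal sketch using vLLM's `LLM` API; note that this MLX 5-bit checkpoint targets Apple Silicon, so if vLLM cannot load it, the original google/functiongemma-270m-it weights are the practical substitute:

```python
# Offline (in-process) inference with vLLM: a minimal sketch.
# Note: this MLX 5-bit checkpoint targets Apple Silicon; if vLLM cannot
# load it, substitute the original google/functiongemma-270m-it weights.
from vllm import LLM, SamplingParams

llm = LLM(model="lmstudio-community/functiongemma-270m-it-MLX-5bit")
params = SamplingParams(temperature=0.7, max_tokens=64)

messages = [{"role": "user", "content": "What is the capital of France?"}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```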
- SGLang
How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "lmstudio-community/functiongemma-270m-it-MLX-5bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "lmstudio-community/functiongemma-270m-it-MLX-5bit",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "lmstudio-community/functiongemma-270m-it-MLX-5bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "lmstudio-community/functiongemma-270m-it-MLX-5bit",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
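The same endpoint can be called from Python. A minimal sketch with `requests`, assuming the server above is running on localhost:30000:

```python
# Calling the SGLang server's OpenAI-compatible endpoint from Python.
# A minimal sketch; assumes the server above is running on localhost:30000.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "lmstudio-community/functiongemma-270m-it-MLX-5bit",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```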
- Pi
How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with Pi:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "lmstudio-community/functiongemma-270m-it-MLX-5bit"
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
    "providers": {
        "mlx-lm": {
            "baseUrl": "http://localhost:8080/v1",
            "api": "openai-completions",
            "apiKey": "none",
            "models": [
                { "id": "lmstudio-community/functiongemma-270m-it-MLX-5bit" }
            ]
        }
    }
}
```
Run Pi
```shell
# Start Pi in your project directory:
pi
```
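Before launching Pi, it is worth confirming the local server answers. A minimal sketch with `requests`, assuming mlx_lm.server is on its default port 8080 as in the config above:

```python
# Sanity-check the mlx_lm server before starting Pi: a minimal sketch.
# Assumes the default port 8080 used in the models.json config above.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "lmstudio-community/functiongemma-270m-it-MLX-5bit",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```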
- Hermes Agent
How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with Hermes Agent:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "lmstudio-community/functiongemma-270m-it-MLX-5bit"
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default lmstudio-community/functiongemma-270m-it-MLX-5bit
```
Run Hermes
```shell
hermes
```
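To confirm Hermes is pointed at a live endpoint, the server's model listing can be checked first. A minimal sketch, assuming a recent mlx-lm whose server implements the OpenAI-style /v1/models route:

```python
# List models from the local mlx_lm server before running Hermes.
# A minimal sketch; assumes a recent mlx-lm whose server exposes /v1/models.
import requests

resp = requests.get("http://127.0.0.1:8080/v1/models", timeout=10)
resp.raise_for_status()
for m in resp.json().get("data", []):
    print(m["id"])  # should include the id set as model.default above
```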
- MLX LM
How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with MLX LM:
Generate or start a chat session
```shell
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "lmstudio-community/functiongemma-270m-it-MLX-5bit"
```
Run an OpenAI-compatible server
```shell
# Install MLX LM
uv tool install mlx-lm

# Start the server (listens on port 8080 by default)
mlx_lm.server --model "lmstudio-community/functiongemma-270m-it-MLX-5bit"

# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8080/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "lmstudio-community/functiongemma-270m-it-MLX-5bit",
        "messages": [
            {"role": "user", "content": "Hello"}
        ]
    }'
```
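The server also supports streamed responses through the same OpenAI-compatible API. A minimal sketch with the `openai` client, assuming the server above is running on its default port 8080:

```python
# Streaming chat completions from the local mlx_lm server: a minimal sketch.
# Assumes the server above is running on its default port 8080.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

stream = client.chat.completions.create(
    model="lmstudio-community/functiongemma-270m-it-MLX-5bit",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```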
- Docker Model Runner
How to use lmstudio-community/functiongemma-270m-it-MLX-5bit with Docker Model Runner:
```shell
docker model run hf.co/lmstudio-community/functiongemma-270m-it-MLX-5bit
```
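Docker Model Runner also exposes an OpenAI-compatible API. A minimal sketch with `requests`, assuming host-side TCP access is enabled on its default port 12434 (see the Docker Model Runner docs for enabling TCP access):

```python
# Calling Docker Model Runner's OpenAI-compatible API from the host.
# A minimal sketch; assumes host TCP access is enabled on the default
# port 12434, and that the model was pulled with `docker model run` above.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/lmstudio-community/functiongemma-270m-it-MLX-5bit",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```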
💫 Community Model> functiongemma-270m-it by google
👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord.
Model creator: google
Original model: functiongemma-270m-it
MLX quantization: provided by the LM Studio team using mlx_lm
Technical Details
5-bit quantized version of functiongemma-270m-it using MLX, optimized for Apple Silicon.
Special thanks
🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.
Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
Model tree for lmstudio-community/functiongemma-270m-it-MLX-5bit
Base model: google/functiongemma-270m-it