---
library_name: mlx
base_model: Tesslate/OmniCoder-9B
tags:
- qwen3.5
- code
- agent
- sft
- omnicoder
- tesslate
- mlx
license: apache-2.0
language:
- en
pipeline_tag: text-generation
model-index:
- name: OmniCoder-9B
  results:
  - task:
      type: text-generation
    dataset:
      name: AIME 2025
      type: custom
    metrics:
    - type: accuracy
      value: 90
      name: pass@5
    - type: accuracy
      value: 83.8
      name: pass@1
    - type: accuracy
      value: 86.4
      name: pass@3
    - type: accuracy
      value: 28.1
      name: Pass Rate
---
# arthurcollet/OmniCoder-9B-mlx-mxfp8

This model [arthurcollet/OmniCoder-9B-mlx-mxfp8](https://huggingface.co/arthurcollet/OmniCoder-9B-mlx-mxfp8) was converted to MLX format from [Tesslate/OmniCoder-9B](https://huggingface.co/Tesslate/OmniCoder-9B) using mlx-lm version **0.31.1**.
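The card does not record the exact conversion command. A minimal sketch using the `mlx_lm.convert` CLI is shown below; the `--q-mode mxfp8` flag and the local output path are assumptions inferred from the repository name, so verify both against the options available in your installed mlx-lm release.

```bash
# Sketch of a possible conversion command (not taken from the card).
# The mxfp8 quantization mode flag and output path are assumptions.
mlx_lm.convert \
    --hf-path Tesslate/OmniCoder-9B \
    --mlx-path OmniCoder-9B-mlx-mxfp8 \
    -q --q-mode mxfp8
```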
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("arthurcollet/OmniCoder-9B-mlx-mxfp8")

prompt = "hello"

# Apply the chat template if the tokenizer provides one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
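For quick checks without writing Python, the model can also be run from the command line. A minimal example is shown below; flags beyond `--model` and `--prompt` (and the prompt text itself) are illustrative and may vary across mlx-lm versions.

```bash
# Generate a short completion directly from the terminal.
mlx_lm.generate --model arthurcollet/OmniCoder-9B-mlx-mxfp8 \
    --prompt "Write a Python function that reverses a string." \
    --max-tokens 256
```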