# Load adapters with 🤗 PEFT

[Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient with lower compute usage while producing results comparable to a fully fine-tuned model.

Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them.

The adapter weights for an OPTForCausalLM model stored on the Hub are only about 6MB, compared to the full size of the model weights, which can be about 700MB.

If you're interested in learning more about the 🤗 PEFT library, check out the [documentation](https://huggingface.co/docs/peft/index).

## Setup

Get started by installing 🤗 PEFT:

```bash
pip install peft
```

If you want to try out the brand new features, you might be interested in installing the library from source:

```bash
pip install git+https://github.com/huggingface/peft.git
```

## Supported PEFT models

🤗 Transformers natively supports some PEFT methods, meaning you can load adapter weights stored locally or on the Hub and run or train them with a few lines of code. The following methods are supported:

- [Low Rank Adapters](https://huggingface.co/docs/peft/conceptual_guides/lora)
- [IA3](https://huggingface.co/docs/peft/conceptual_guides/ia3)
- [AdaLoRA](https://huggingface.co/papers/2303.10512)

If you want to use other PEFT methods, such as prompt learning or prompt tuning, or learn about the 🤗 PEFT library in general, please refer to the [documentation](https://huggingface.co/docs/peft/index).

## Load a PEFT adapter

To load and use a PEFT adapter model from 🤗 Transformers, make sure the Hub repository or local directory contains an `adapter_config.json` file and the adapter weights. Then you can load the PEFT adapter model using an `AutoModelFor` class. For example, to load a PEFT adapter model for causal language modeling:

1. specify the PEFT model id
2. pass it to the `AutoModelForCausalLM` class

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id)
```
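
As a quick check that the adapter weights were loaded, you can run generation with the model. A minimal sketch, assuming the tokenizer of the base model `facebook/opt-350m` and an arbitrary prompt:

```py
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# generate with the LoRA adapter applied on top of the base model
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```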

You can load a PEFT adapter with either an `AutoModelFor` class or the base model class like `OPTForCausalLM` or `LlamaForCausalLM`.
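
For instance, a minimal sketch of loading the same adapter through the concrete base model class:

```py
from transformers import OPTForCausalLM

# the adapter repo's config points at the facebook/opt-350m base model
model = OPTForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
```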

You can also load a PEFT adapter by calling the `load_adapter` method:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"

model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```
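
`load_adapter` also accepts an `adapter_name` (see the API documentation below), which is useful when attaching several adapters to the same base model. A sketch with an arbitrary name:

```py
# attach the adapter under an explicit name and activate it
model.load_adapter(peft_model_id, adapter_name="lora_1")
model.set_adapter("lora_1")
```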

Check out the [API documentation](#transformers.integrations.PeftAdapterMixin) section below for more details.

## Load in 8bit or 4bit


The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because they save memory (see the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Pass a `BitsAndBytesConfig` with `load_in_8bit` or `load_in_4bit` enabled to `from_pretrained()`, and set `device_map="auto"` to effectively distribute the model across your hardware:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, quantization_config=BitsAndBytesConfig(load_in_8bit=True))
```
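
To load in 4bit instead, enable `load_in_4bit` in the `BitsAndBytesConfig`; adding `device_map="auto"` distributes the model across your available devices, as mentioned above:

```py
model = AutoModelForCausalLM.from_pretrained(
    peft_model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```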

## Add a new adapter

You can use `add_adapter` to add a new adapter to a model with an existing adapter, as long as the new adapter is the same type as the current one. For example, if you have an existing LoRA adapter attached to a model:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import LoraConfig

model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    target_modules=["q_proj", "k_proj"],
    init_lora_weights=False
)

model.add_adapter(lora_config, adapter_name="adapter_1")
```

To add a new adapter:

```py
# attach a new adapter with the same config
model.add_adapter(lora_config, adapter_name="adapter_2")
```

Now you can use `set_adapter` to set which adapter to use:

```py
# use adapter_1
model.set_adapter("adapter_1")
output_1 = model.generate(**inputs)
print(tokenizer.decode(output_1[0], skip_special_tokens=True))

# use adapter_2
model.set_adapter("adapter_2")
output_2 = model.generate(**inputs)
print(tokenizer.decode(output_2[0], skip_special_tokens=True))
```
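
You can check which adapter is currently active with `active_adapters` (documented in the API section below):

```py
print(model.active_adapters())  # e.g. ['adapter_2']
```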

## Enable and disable adapters

Once you've added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig

model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")

model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)

# to initiate with random weights
peft_config.init_lora_weights = False

model.add_adapter(peft_config)
model.enable_adapters()
output = model.generate(**inputs)
```

To disable the adapter module:

```py
model.disable_adapters()
output = model.generate(**inputs)
```

## Train a PEFT adapter

PEFT adapters are supported by the `Trainer` class, so you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter:

If you aren't familiar with fine-tuning a model with `Trainer`, take a look at the [tutorial](training) on fine-tuning a pretrained model.

1. Define your adapter configuration with the task type and hyperparameters (see `LoraConfig` for more details about what the hyperparameters do).

```py
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
)
```

2. Add the adapter to the model.

```py
model.add_adapter(peft_config)
```

3. Now you can pass the model to `Trainer`!

```py
trainer = Trainer(model=model, ...)
trainer.train()
```
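
The `...` above stands for your usual training setup. As a rough sketch, assuming you have already prepared a tokenized `train_dataset` and picked hypothetical training arguments:

```py
from transformers import Trainer, TrainingArguments

# `train_dataset` and the arguments below are placeholders for your own setup
training_args = TrainingArguments(output_dir="opt-350m-lora", per_device_train_batch_size=4)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```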

To save your trained adapter and load it back:

```py
model.save_pretrained(save_dir)
model = AutoModelForCausalLM.from_pretrained(save_dir)
```

## Add additional trainable layers to a PEFT adapter


You can also fine-tune additional trainable adapters on top of a model that already has adapters attached by passing the `modules_to_save` parameter in your PEFT config. For example, if you want to also fine-tune the `lm_head` on top of a model with a LoRA adapter:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import LoraConfig

model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    target_modules=["q_proj", "k_proj"],
    modules_to_save=["lm_head"],
)

model.add_adapter(lora_config)
```
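
Since `modules_to_save` makes the entire `lm_head` trainable in addition to the LoRA weights, it can be worth checking how many parameters you are actually training. A small sketch using plain PyTorch:

```py
# count trainable vs. total parameters after attaching the adapter
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")
```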

## API documentation[[transformers.integrations.PeftAdapterMixin]]

#### transformers.integrations.PeftAdapterMixin[[transformers.integrations.PeftAdapterMixin]]

[Source](https://github.com/huggingface/transformers/blob/v5.6.2/src/transformers/integrations/peft.py#L406)

A class containing all functions for loading and using adapter weights that are supported by the PEFT library. For
more details about adapters and injecting them into a transformer-based model, check out the documentation of the
PEFT library: https://huggingface.co/docs/peft/index

Currently supported PEFT methods are all non-prompt learning methods (LoRA, IA³, etc.). Other PEFT methods, such as
prompt tuning and prompt learning, are out of scope, as these adapters are not "injectable" into a torch module. To
use these methods, please refer to the usage guide of the PEFT library.

With this mixin, if the correct PEFT version is installed (>= 0.18.0), it is possible to:

- Load an adapter stored on a local path or in a remote Hub repository, and inject it in the model
- Attach new adapters in the model and train them with `Trainer` or on your own.
- Attach multiple adapters and iteratively activate / deactivate them
- Activate / deactivate all adapters from the model.
- Get the `state_dict` of the active adapter.

#### load_adapter[[transformers.integrations.PeftAdapterMixin.load_adapter]]

[Source](https://github.com/huggingface/transformers/blob/v5.6.2/src/transformers/integrations/peft.py#L428)

Load adapter weights from file or remote Hub folder. If you are not familiar with adapters and PEFT methods, we
invite you to read more about them on PEFT official documentation: https://huggingface.co/docs/peft

Requires PEFT to be installed as a backend to load the adapter weights.

**Parameters:**

peft_model_id (`str`, *optional*) : The identifier of the model to look for on the Hub, or a local path to the saved adapter config file and adapter weights.

adapter_name (`str`, *optional*) : The adapter name to use. If not set, will use the name "default".

load_config (`LoadStateDictConfig`, *optional*) : A load configuration to reuse when pulling adapter weights, typically from `from_pretrained`.

kwargs (`dict[str, Any]`, *optional*) : Additional `LoadStateDictConfig` fields passed as keyword arguments.

peft_config (`dict[str, Any]`, *optional*) : The configuration of the adapter to add, supported adapters are all non-prompt learning configs (LoRA, IA³, etc). This argument is used in case users directly pass PEFT state dicts.

adapter_state_dict (`dict[str, torch.Tensor]`, *optional*) : The state dict of the adapter to load. This argument is used in case users directly pass PEFT state dicts.

low_cpu_mem_usage (`bool`, *optional*, defaults to `False`) : Reduce memory usage while loading the PEFT adapter. This should also speed up the loading process.

is_trainable (`bool`, *optional*, defaults to `False`) : Whether the adapter should be trainable or not. If `False`, the adapter will be frozen and can only be used for inference.

hotswap (`"auto"` or `bool`, *optional*, defaults to `"auto"`) : Whether to substitute an existing (LoRA) adapter with the newly loaded adapter in-place. This means that, instead of loading an additional adapter, this will take the existing adapter weights and replace them with the weights of the new adapter. This can be faster and more memory efficient. However, the main advantage of hotswapping is that when the model is compiled with torch.compile, loading the new adapter does not require recompilation of the model. When using hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.

  If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need to call an additional method before loading the adapter:

  ```py
  model = AutoModel.from_pretrained(...)
  max_rank = ...  # the highest rank among all LoRAs that you want to load
  # call *before* compiling and loading the LoRA adapter
  model.enable_peft_hotswap(target_rank=max_rank)
  model.load_adapter(file_name_1, adapter_name="default")
  # optionally compile the model now
  model = torch.compile(model, ...)
  output_1 = model(...)
  # now you can hotswap the 2nd adapter, use the same name as for the 1st
  # hotswap is activated by default since enable_peft_hotswap was called
  model.load_adapter(file_name_2, adapter_name="default")
  output_2 = model(...)
  ```

  By default, hotswap is disabled and requires passing `hotswap=True`. If you called `enable_peft_hotswap` first, it is enabled. You can still manually disable it in that case by passing `hotswap=False`.

  Note that hotswapping comes with a couple of limitations documented here: https://huggingface.co/docs/peft/main/en/package_reference/hotswap

adapter_kwargs (`dict[str, Any]`, *optional*) : Additional keyword arguments passed along to the `from_pretrained` method of the adapter config and `find_adapter_config_file` method.
#### add_adapter[[transformers.integrations.PeftAdapterMixin.add_adapter]]

[Source](https://github.com/huggingface/transformers/blob/v5.6.2/src/transformers/integrations/peft.py#L737)

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
official documentation: https://huggingface.co/docs/peft

Adds a fresh new adapter to the current model for training purposes. If no adapter name is passed, a default
name is assigned to the adapter to follow the convention of the PEFT library (in PEFT we use "default" as the
default adapter name).

Note that the newly added adapter is not automatically activated. To activate it, use `model.set_adapter`.

**Parameters:**

adapter_config (`~peft.PeftConfig`) : The configuration of the adapter to add, supported adapters are non-prompt learning methods (LoRA, IA³, etc.).

adapter_name (`str`, *optional*, defaults to `"default"`) : The name of the adapter to add. If no name is passed, a default name is assigned to the adapter.
#### set_adapter[[transformers.integrations.PeftAdapterMixin.set_adapter]]

[Source](https://github.com/huggingface/transformers/blob/v5.6.2/src/transformers/integrations/peft.py#L777)

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
official documentation: https://huggingface.co/docs/peft

Sets a specific adapter by forcing the model to use that adapter and disabling the other adapters.

**Parameters:**

adapter_name (`Union[list[str], str]`) : The name of the adapter to set. Can be also a list of strings to set multiple adapters.
#### disable_adapters[[transformers.integrations.PeftAdapterMixin.disable_adapters]]

[Source](https://github.com/huggingface/transformers/blob/v5.6.2/src/transformers/integrations/peft.py#L818)

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
official documentation: https://huggingface.co/docs/peft

Disable all adapters that are attached to the model. This leads to inferring with the base model only.
#### enable_adapters[[transformers.integrations.PeftAdapterMixin.enable_adapters]]

[Source](https://github.com/huggingface/transformers/blob/v5.6.2/src/transformers/integrations/peft.py#L837)

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
official documentation: https://huggingface.co/docs/peft

Enable adapters that are attached to the model.
#### active_adapters[[transformers.integrations.PeftAdapterMixin.active_adapters]]

[Source](https://github.com/huggingface/transformers/blob/v5.6.2/src/transformers/integrations/peft.py#L855)

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
official documentation: https://huggingface.co/docs/peft

Gets the current active adapters of the model. In case of multi-adapter inference (combining multiple adapters
for inference) returns the list of all active adapters so that users can deal with them accordingly.

For previous PEFT versions (which do not support multi-adapter inference), `module.active_adapter` will return
a single string.
#### get_adapter_state_dict[[transformers.integrations.PeftAdapterMixin.get_adapter_state_dict]]

[Source](https://github.com/huggingface/transformers/blob/v5.6.2/src/transformers/integrations/peft.py#L884)

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
official documentation: https://huggingface.co/docs/peft

Gets the adapter state dict, which should only contain the weight tensors of the adapter specified by `adapter_name`.
If no adapter_name is passed, the active adapter is used.

**Parameters:**

adapter_name (`str`, *optional*) : The name of the adapter to get the state dict from. If no name is passed, the active adapter is used.

state_dict (nested dictionary of `torch.Tensor`, *optional*) : The state dictionary of the model. Will default to `self.state_dict()`, but can be used if special precautions need to be taken when recovering the state dictionary of a model (like when using model parallelism).

