module-attribute
 _all_lora_classes: set[type[BaseLayerWithLoRA]] = {
    VocabParallelEmbeddingWithLoRA,
    ColumnParallelLinearWithLoRA,
    MergedColumnParallelLinearWithLoRA,
    QKVParallelLinearWithLoRA,
    MergedQKVParallelLinearWithLoRA,
    RowParallelLinearWithLoRA,
    ReplicatedLinearWithLoRA,
    LogitsProcessorWithLoRA,
    ColumnParallelLinearWithShardedLoRA,
    QKVParallelLinearWithShardedLoRA,
    MergedColumnParallelLinearWithShardedLoRA,
    MergedQKVParallelLinearWithShardedLoRA,
    RowParallelLinearWithShardedLoRA,
    FusedMoEWithLoRA,
}
 
 from_layer(
    layer: Module,
    max_loras: int,
    lora_config: LoRAConfig,
    packed_modules_list: list,
    model_config: PretrainedConfig | None = None,
) -> Module
Source code in vllm/lora/utils.py
  
 from_layer_logits_processor(
    layer: LogitsProcessor,
    lm_head: ParallelLMHead,
    max_loras: int,
    lora_config: LoRAConfig,
    model_config: PretrainedConfig | None = None,
) -> LogitsProcessorWithLoRA
Source code in vllm/lora/utils.py
  
 Resolves the given lora_path to an absolute local path.
If the lora_path is identified as a Hugging Face model identifier, the model is downloaded and the path to the local snapshot is returned. Otherwise, lora_path is treated as a local file path and converted to an absolute path.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| lora_path | str | The path to the LoRA model: an absolute path, a relative path, or a Hugging Face model identifier. | required | 
Returns: str: The resolved absolute local path to the LoRA model.
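The resolution order can be sketched as follows. This is a hypothetical helper, not vLLM's implementation: the Hugging Face download step is injected as `download_fn` (standing in for something like `huggingface_hub.snapshot_download`) so the sketch stays self-contained:

```python
import os


def resolve_lora_path(lora_path: str, download_fn=None) -> str:
    # Hypothetical sketch of the resolution order described above.
    if os.path.isabs(lora_path):
        # Already absolute: return as-is.
        return lora_path
    if os.path.exists(lora_path):
        # A relative path that exists locally: make it absolute.
        return os.path.abspath(lora_path)
    # Otherwise treat it as a Hugging Face model identifier and fetch a
    # local snapshot (download_fn is an assumed stand-in for the real call).
    if download_fn is None:
        raise FileNotFoundError(lora_path)
    return download_fn(lora_path)
```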
  
  In vLLM, all linear layers support LoRA.
  
  Checks if the model contains FusedMoE layers and warns the user.
  
 PEFT supports passing target_modules as a regular expression, such as model.*(q_proj|k_proj|v_proj)$. This function checks whether each suffix named in the regular expression (here q_proj, k_proj, and v_proj) is present in the expected_lora_modules.
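The suffix check can be sketched like this. It is a hypothetical simplification that only handles a trailing alternation group of the form shown above, not arbitrary regular expressions:

```python
import re


def regex_suffixes_supported(pattern: str, expected_lora_modules: list[str]) -> bool:
    # Hypothetical sketch: pull the trailing alternation group out of a
    # PEFT-style target_modules regex and check every suffix it names
    # against the module names LoRA actually supports.
    match = re.search(r"\(([^()]+)\)\$?$", pattern)
    if match is None:
        return False
    suffixes = match.group(1).split("|")
    return all(s in expected_lora_modules for s in suffixes)
```

For `model.*(q_proj|k_proj|v_proj)$` the extracted suffixes are `q_proj`, `k_proj`, and `v_proj`; the check passes only if all three appear in `expected_lora_modules`.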
  
 parse_fine_tuned_lora_name(
    name: str,
    weights_mapper: Optional[WeightsMapper] = None,
) -> tuple[str, bool]
Parse the name of LoRA weights.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| name | str | the name of the fine-tuned LoRA, e.g. base_model.model.dense1.weight | required | 
| weights_mapper | Optional[WeightsMapper] | maps the name of weight, e.g.  | None | 
Returns: tuple[str, bool]: (module_name, is_lora_a), where module_name is the name of the module, e.g. model.dense1, and is_lora_a indicates whether the tensor is lora_a (True) or lora_b (False).
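The parsing can be sketched as follows. This is a hypothetical simplification (it ignores the optional WeightsMapper and assumes the PEFT naming convention of a base_model.model. prefix plus lora_A / lora_B suffixes):

```python
def parse_lora_name_sketch(name: str) -> tuple[str, bool]:
    # Hypothetical sketch of the parsing described above (no WeightsMapper).
    parts = name.split(".")
    # PEFT checkpoints prefix every key with "base_model.model.".
    if parts[:2] == ["base_model", "model"]:
        parts = parts[2:]
    # Drop the trailing parameter name ("weight" or "bias").
    if parts and parts[-1] in ("weight", "bias"):
        parts = parts[:-1]
    # The remaining suffix says whether this is the A or B matrix.
    is_lora_a = parts[-1] == "lora_A"
    if parts[-1] in ("lora_A", "lora_B"):
        parts = parts[:-1]
    return ".".join(parts), is_lora_a
```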
  
  
  Replace a submodule in a model with a new module.
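The replacement can be sketched as walking the dotted module path to the parent and rebinding the final attribute. This hypothetical sketch uses plain attribute access on ordinary objects rather than torch.nn module lookups, so it stays self-contained:

```python
import types


def replace_submodule_sketch(model, target: str, new_module):
    # Hypothetical sketch: walk the dotted path to the parent module,
    # then rebind the final attribute to the new module.
    *parent_path, child_name = target.split(".")
    parent = model
    for name in parent_path:
        parent = getattr(parent, name)
    setattr(parent, child_name, new_module)
    return new_module


# Usage on plain objects standing in for nested modules:
model = types.SimpleNamespace(block=types.SimpleNamespace(linear="base_layer"))
replace_submodule_sketch(model, "block.linear", "lora_layer")
```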