
Calling Flash-Attention from Source: A Hands-On Walkthrough

This article describes how to use flash-attention via its source code, so that it can be called more flexibly inside your own model code.

1. Introduction

How Flash-Attention works:

Papers:
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré
Paper: https://arxiv.org/abs/2205.14135
(An IEEE Spectrum article also covers the authors' MLPerf 2.0 benchmark submission using FlashAttention.)

FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
Tri Dao

Paper: https://tridao.me/publications/flash2/flash2.pdf

Source code:
https://github.com/Dao-AILab/flash-attention

FlashAttention-2 hardware support:

Ampere, Ada, or Hopper GPUs (e.g., A100, RTX 3090, RTX 4090, H100); Turing GPUs can only use FlashAttention 1.x.
Datatypes fp16 and bf16 (bf16 requires Ampere, Ada, or Hopper GPUs).
All head dimensions up to 256; head dim > 192 in the backward pass requires A100/A800 or H100/H800.

Note that PyTorch 2.0 already ships a built-in FlashAttention-1 kernel, exposed through torch.nn.functional.scaled_dot_product_attention; a quick check is sketched below.
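As a sanity check (a minimal sketch, not from the original article), you can force PyTorch 2.0's scaled_dot_product_attention onto its flash backend and confirm that your GPU/dtype combination is supported:

import torch
import torch.nn.functional as F

# (batch, num_heads, seq_len, head_dim), fp16 on the GPU -- eligible for the flash backend
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Restrict SDPA to the flash kernel; this raises an error if the setup is unsupported.
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])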

2. Environment Setup

CUDA 11.6 and above
PyTorch 1.12 and above

plus a basic LLM runtime environment. The environment used here:
transformers 4.33.1
torch 2.0.1+cu118
torchaudio 2.0.2+cu118
torchvision 0.15.2+cu118
accelerate 0.22.0
sentencepiece 0.1.99

Installing flash-attention.

Online install:
pip install flash-attn --no-build-isolation

Build and install from source (inside the cloned flash-attention repo):
python setup.py install
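After installation, a quick import check (a minimal sketch; the version strings will differ on your machine) confirms that both torch and the flash-attn extension are visible:

import torch
import flash_attn

print("flash-attn:", flash_attn.__version__)
print("torch:", torch.__version__, "| CUDA:", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0))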

3. Implementation

Model: chatglm2-6b

3.1 Model loading

Load the model through the local modeling_chatglm.py rather than transformers' AutoModel, because the attention core (CoreAttention) inside that file needs to be modified:

from transformers import AutoTokenizer
from modeling_chatglm import ChatGLMModel, ChatGLMForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = ChatGLMForConditionalGeneration.from_pretrained(model_path, trust_remote_code=True).cuda()
model = model.eval()
...
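For a quick end-to-end check, a minimal generation call (a sketch assuming the standard chatglm2-6b chat API exposed by modeling_chatglm.py) looks like this:

# hypothetical smoke test: generate one reply with the locally loaded model
response, history = model.chat(tokenizer, "你好", history=[])
print(response)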
3.2 Modifying the attention implementation
...
FLASH_ATTN_FLAG = True  # True: call flash-attn from source; False: keep the original SDPA path
print("inference by Flash attention src:", FLASH_ATTN_FLAG)
...

class CoreAttention(torch.nn.Module):
...
    def forward(self, query_layer, key_layer, value_layer, attention_mask):
            pytorch_major_version = int(torch.__version__.split('.')[0])

            if pytorch_major_version >= 2:

                if FLASH_ATTN_FLAG:
                    from flash_attn import flash_attn_func
                    # flash_attn_func expects (batch, seq_len, num_heads, head_dim) tensors in fp16/bf16;
                    # ChatGLM2 hands us (seq_len, batch, num_heads, head_dim), so swap the first two dims.
                    query_layer, key_layer, value_layer = [k.permute(1, 0, 2, 3) for k in [query_layer, key_layer, value_layer]]
                    dropout_p = 0.0        # no dropout at inference time
                    softmax_scale = None   # None -> default scaling of 1 / sqrt(head_dim)
                    context_layer = flash_attn_func(query_layer, key_layer, value_layer, dropout_p,
                                                    softmax_scale=softmax_scale, causal=True)
                    # restore (seq_len, batch, num_heads, head_dim) to match the original code path
                    context_layer = context_layer.permute(1, 0, 2, 3)
                #chatglm2-6b Official code
                else:
                    query_layer, key_layer, value_layer = [k.permute(1, 2, 0, 3) for k in [query_layer, key_layer, value_layer]]

                    if attention_mask is None and query_layer.shape[2] == key_layer.shape[2]:
                        context_layer = torch.nn.functional.scaled_dot_product_attention(query_layer, key_layer, value_layer,
                                                                                        is_causal=True)
                    else:
                        if attention_mask is not None:
                            attention_mask = ~attention_mask
                        context_layer = torch.nn.functional.scaled_dot_product_attention(query_layer, key_layer, value_layer,
                                                                                        attention_mask)
                    context_layer = context_layer.permute(2, 0, 1, 3)
                new_context_layer_shape = context_layer.size()[:-2] + (self.hidden_size_per_partition,)
                context_layer = context_layer.reshape(*new_context_layer_shape)

...

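Before wiring flash_attn_func into the model, it can help to call it on random tensors first (a minimal sketch; the shapes are arbitrary, but the inputs must be fp16/bf16 and on the GPU):

import torch
from flash_attn import flash_attn_func

# flash_attn_func takes (batch, seq_len, num_heads, head_dim) tensors
q = torch.randn(1, 2048, 32, 128, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

out = flash_attn_func(q, k, v, dropout_p=0.0, softmax_scale=None, causal=True)
print(out.shape)  # torch.Size([1, 2048, 32, 128])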

4. Results

Optimization      | Input tokens | Speed | GPU memory (MB)
pytorch           | 1800         | 33.8  | 15472
pytorch 2.0       | 1800         | 36.5  | 14200
flash attention 2 | 1800         | 36.7  | 14200
pytorch           | 7000         | 18    | 37322
pytorch 2.0       | 7000         | 29.9  | 17030
flash attention 2 | 7000         | 34.2  | 17102
pytorch           | 20000        | OOM   | OOM
pytorch 2.0       | 20000        | 13.5  | 24122
flash attention 2 | 20000        | 18.6  | 24194
pytorch           | 32396        | OOM   | OOM
pytorch 2.0       | 32396        | 8     | 30448
flash attention 2 | 32396        | 14.1  | 30520