Paid resource
10 points
Cloud-drive key: Qpipi
Generated images/videos may be used commercially
Further training allowed
Reselling the AI models is not allowed
Offering paid generation services is not allowed
License: CreativeML Open RAIL-M
🎀 Included versions 🆕 ip-adapter-faceid_sd15, ip-adapter-faceid_sd15_lora, ip-adapter-faceid-plus_sd15, ip-adapter-faceid-plus_sd15_lora, ip-adapter-faceid-plusv2_sd15, ip-adapter-faceid-plusv2_sd15_lora, ip-adapter-faceid-portrait_sd15, ip-adapter-faceid-portrait-v11_sd15, ip-adapter-faceid_sdxl, ip-adapter-faceid_sdxl_lora, ip-adapter-faceid-plusv2_sdxl, ip-adapter-faceid-plusv2_sdxl_lora, ip-adapter-faceid-portrait_sdxl, ip-adapter-faceid-portrait_sdxl_unnorm
IP-Adapter-FaceID Model SD-SDXL, IPAdapter-FaceID Model Library

An experimental version of IP-Adapter-FaceID: instead of CLIP image embeddings, we use face ID embeddings from a face recognition model; in addition, we use LoRA to improve ID consistency.

IP-Adapter-FaceID can generate images in various styles conditioned on a face, using only a text prompt.

Latest Update

IP-Adapter-FaceID-Plus: face ID embedding (for face identity) + CLIP image embedding (for face structure)

Update

IP-Adapter-FaceID-PlusV2: face ID embedding (for face identity) + controllable CLIP image embedding (for face structure)

You can adjust the weight of the face structure to vary the generated results!
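The adjustable weight corresponds to the `s_scale` argument used in the FaceID-Plus usage code later on this page. As loose intuition only (the actual merge happens inside the adapter's learned projection network, and the names below are purely illustrative), the structure features are scaled before being combined with the identity features:

```python
def merge_features(id_feat, struct_feat, s_scale=1.0):
    """Toy weighted merge: scale the structure features, then add them
    to the identity features (illustrative, not the actual network)."""
    return [i + s_scale * s for i, s in zip(id_feat, struct_feat)]

id_feat = [0.5, -0.2, 0.8]      # stands in for the face ID embedding
struct_feat = [0.1, 0.3, -0.1]  # stands in for the CLIP structure features

print(merge_features(id_feat, struct_feat, s_scale=0.0))  # structure ignored
print(merge_features(id_feat, struct_feat, s_scale=1.0))  # full structure weight
```

At `s_scale=0` the structure features contribute nothing; larger values let the reference face's structure influence the output more strongly.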

Update

IP-Adapter-FaceID-SDXL: an experimental SDXL version of IP-Adapter-FaceID

Update

IP-Adapter-FaceID-PlusV2-SDXL: an experimental SDXL version of IP-Adapter-FaceID-PlusV2

Update

IP-Adapter-FaceID-Portrait: same as IP-Adapter-FaceID, but for portrait generation (no LoRA! no ControlNet!). Specifically, it accepts multiple face images to enhance the similarity (the default is 5).

Usage

IP-Adapter-FaceID

Firstly, you should use insightface to extract the face ID embedding:


import cv2
from insightface.app import FaceAnalysis
import torch

# initialize insightface face detection + recognition (buffalo_l model pack)
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

image = cv2.imread("person.jpg")
faces = app.get(image)

# L2-normalized face ID embedding of the first detected face
faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
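As background, `normed_embedding` is the recognition model's feature vector scaled to unit L2 norm, so the dot product of two such embeddings is their cosine similarity. A minimal pure-Python sketch with toy low-dimensional vectors (the real embeddings have many more dimensions):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit L2 norm (what normed_embedding provides)."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine_similarity(a, b):
    """For unit vectors, cosine similarity is just the dot product."""
    return sum(x * y for x, y in zip(a, b))

a = l2_normalize([1.0, 2.0, 2.0, 0.0])   # toy "face embedding" 1
b = l2_normalize([1.0, 2.0, 2.0, 0.1])   # toy "face embedding" 2, very similar

print(round(cosine_similarity(a, b), 4))  # close to 1.0 for similar vectors
```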

Then, you can generate images conditioned on the face embedding:


import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image

from ip_adapter.ip_adapter_faceid import IPAdapterFaceID

base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
ip_ckpt = "ip-adapter-faceid_sd15.bin"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    feature_extractor=None,
    safety_checker=None
)

# load ip-adapter
ip_model = IPAdapterFaceID(pipe, ip_ckpt, device)

# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
    prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds, num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)

You can also load the model with a plain IP-Adapter and a plain LoRA:

import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image

from ip_adapter.ip_adapter_faceid_separate import IPAdapterFaceID

base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
ip_ckpt = "ip-adapter-faceid_sd15.bin"
lora_ckpt = "ip-adapter-faceid_sd15_lora.safetensors"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    feature_extractor=None,
    safety_checker=None
)

# load lora and fuse
pipe.load_lora_weights(lora_ckpt)
pipe.fuse_lora()

# load ip-adapter
ip_model = IPAdapterFaceID(pipe, ip_ckpt, device)

# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
    prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds, num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)
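For intuition, `fuse_lora()` folds the low-rank update into the base weights ahead of time, so inference pays no extra cost: each adapted weight becomes W' = W + scale * (B @ A), with B and A the LoRA factors. A toy sketch with plain Python lists (real LoRA fusing operates on the pipeline's attention weights):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def fuse(W, B, A, scale=1.0):
    """Return W + scale * (B @ A): the fused weight after merging the LoRA delta."""
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight
B = [[1.0], [0.0]]             # rank-1 LoRA factors: 2x1 and 1x2
A = [[0.5, 0.5]]

print(fuse(W, B, A))
```

Because the rank of B @ A is small, the LoRA checkpoint stays tiny even though the fused weight has full shape.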

IP-Adapter-FaceID-SDXL

Firstly, you should use insightface to extract the face ID embedding:


import cv2
from insightface.app import FaceAnalysis
import torch

app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

image = cv2.imread("person.jpg")
faces = app.get(image)

faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)

Then, you can generate images conditioned on the face embedding:


import torch
from diffusers import StableDiffusionXLPipeline, DDIMScheduler
from PIL import Image

from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDXL

base_model_path = "SG161222/RealVisXL_V3.0"
ip_ckpt = "ip-adapter-faceid_sdxl.bin"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    add_watermarker=False,
)

# load ip-adapter
ip_model = IPAdapterFaceIDXL(pipe, ip_ckpt, device)

# generate image
prompt = "A closeup shot of a beautiful Asian teenage girl in a white dress wearing small silver earrings in the garden, under the soft morning light"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
    prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds, num_samples=2,
    width=1024, height=1024,
    num_inference_steps=30, guidance_scale=7.5, seed=2023
)

IP-Adapter-FaceID-Plus

Firstly, you should use insightface to extract the face ID embedding and an aligned face image:


import cv2
from insightface.app import FaceAnalysis
from insightface.utils import face_align
import torch

app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

image = cv2.imread("person.jpg")
faces = app.get(image)

faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
face_image = face_align.norm_crop(image, landmark=faces[0].kps, image_size=224) # you can also segment the face

Then, you can generate images conditioned on the face embedding and the face image:


import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image

from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDPlus

v2 = False
base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
image_encoder_path = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
ip_ckpt = "ip-adapter-faceid-plus_sd15.bin" if not v2 else "ip-adapter-faceid-plusv2_sd15.bin"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    feature_extractor=None,
    safety_checker=None
)

# load ip-adapter
ip_model = IPAdapterFaceIDPlus(pipe, image_encoder_path, ip_ckpt, device)

# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
     prompt=prompt, negative_prompt=negative_prompt, face_image=face_image, faceid_embeds=faceid_embeds, shortcut=v2, s_scale=1.0,
     num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)

IP-Adapter-FaceID-Portrait

Firstly, extract one face ID embedding per reference photo:


import cv2
from insightface.app import FaceAnalysis
import torch

app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))


images = ["1.jpg", "2.jpg", "3.jpg", "4.jpg", "5.jpg"]

faceid_embeds = []
for image in images:
    image = cv2.imread(image)  # read each reference image, not a fixed file
    faces = app.get(image)
    faceid_embeds.append(torch.from_numpy(faces[0].normed_embedding).unsqueeze(0).unsqueeze(0))
faceid_embeds = torch.cat(faceid_embeds, dim=1)  # one embedding per photo, stacked along dim=1

Then, you can generate images conditioned on the face embeddings:

import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image

from ip_adapter.ip_adapter_faceid_separate import IPAdapterFaceID

base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
ip_ckpt = "ip-adapter-faceid-portrait_sd15.bin"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    feature_extractor=None,
    safety_checker=None
)


# load ip-adapter
ip_model = IPAdapterFaceID(pipe, ip_ckpt, device, num_tokens=16, n_cond=5)

# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
    prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds, num_samples=4, width=512, height=512, num_inference_steps=30, seed=2023
)
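For intuition on the multi-image conditioning above: one embedding per reference photo is stacked along a new axis, so the adapter receives a batch of one with `n_cond` identity embeddings, which is why `n_cond=5` matches five input photos. A minimal sketch of the shape bookkeeping using nested lists (the real code uses torch tensors and `torch.cat`):

```python
def shape(x):
    """Return the nested-list shape, e.g. [[1, 2]] -> (1, 2)."""
    dims = []
    while isinstance(x, list):
        dims.append(len(x))
        x = x[0]
    return tuple(dims)

embed_dim = 8                      # toy size; real ID embeddings are larger
embeds = []
for _ in range(5):                 # five reference photos
    e = [0.0] * embed_dim          # one embedding, shape (embed_dim,)
    embeds.append([[e]])           # two "unsqueezes" -> shape (1, 1, embed_dim)

# concatenate along dim=1 -> shape (1, 5, embed_dim), one row per photo
stacked = [sum((t[0] for t in embeds), [])]

print(shape(stacked))  # prints (1, 5, 8)
```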

Limitations and Bias

  • The models do not achieve perfect photorealism or perfect ID consistency.
  • The generalization of the models is limited by the training data, the base model, and the face recognition model.

Non-commercial Use

As the InsightFace pretrained models are available for non-commercial research purposes only, the IP-Adapter-FaceID models are released exclusively for research purposes and are not intended for commercial use.

After downloading, place the files in your /ComfyUI/models/loras directory.
