Comments


How do I write the request JSON when I call the SD img2img API with the Multi-frame rendering script? My original request payload was as follows:

payload = {
    "init_images": [image_base64],
    # "prompt": prompt,
    "prompt": prompts[i],
    "negative_prompt": negative_prompt,
    "styles": [],
    "seed": 2694589603,
    "subseed": -1,
    "subseed_strength": 0,
    "seed_resize_from_h": height,
    "seed_resize_from_w": width,
    "sampler_index": "Euler a",
    "sampler_name": "Euler a",
    "batch_size": 1,
    "n_iter": 1,
    "steps": 30,
    "clip_skip": 1,
    "cfg_scale": 8,
    "width": width,
    "height": height,
    "restore_faces": True,
    "tiling": False,
    "do_not_save_samples": False,
    "do_not_save_grid": False,
    "eta": 0,
    "denoising_strength": 0.4,
    "s_min_uncond": 0,
    "s_churn": 0,
    "s_tmax": 0,
    "s_tmin": 0,
    "s_noise": 0,
    # "override_settings": {
    #     "sd_model_checkpoint": "chosenMix_chosenMix.ckpt [dd0aacadb6]",
    #     "sd_vae": "pastel-waifu-diffusion.vae.pt"
    # },
    "override_settings_restore_afterwards": True,
    "refiner_checkpoint": "",
    "refiner_switch_at": 0,
    "disable_extra_networks": False,
    "comments": {},
    "resize_mode": 0,
    "image_cfg_scale": 1.5,
    "mask": None,
    "mask_blur_x": 4,
    "mask_blur_y": 4,
    "mask_blur": 0,
    "inpainting_fill": 0,
    "inpaint_full_res": True,
    "inpaint_full_res_padding": 1,
    "inpainting_mask_invert": 1,
    "initial_noise_multiplier": 1.0,
    "latent_mask": None,
    "include_init_images": False,
    "script_name": "",
    "script_args": [],
    "send_images": True,
    "save_images": False,
    "alwayson_scripts": {
        "ControlNet": {
            "args": [
                {
                    "input_image": image_base64,
                    "enabled": True,
                    "lowvram": False,
                    "pixel_perfect": True,
                    "mask": None,
                    "module": "canny",
                    "model": "control_canny-fp16 [e3fe7712]",
                    "weight": 2,
                    "guidance_start": 0,
                    "guidance_end": 1,
                    "threshold_a": 100,
                    "threshold_b": 200,
                    "control_mode": 2,
                    "resize_mode": 1,
                    "processor_res": width
                }
            ]
        },
        "face editor ex": {
            "args": [
                {
                    "prompt_for_face": "best quality, highly detailed, extremely detailed, shiny skin, shiny clothes, perfect face, beautiful face, white face, anime, cute, manga, green eyes, red lips",
                    "affected_areas": ["Face", "Hair"],
                    "use_minimal_area": True,
                    "save_original_image": True,
                    "face_size": width,
                    "upscaler": "R-ESRGAN 4x+"
                }
            ]
        }
    }
}
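For reference, a minimal sketch of how such a payload is usually sent (assuming a local WebUI launched with --api on 127.0.0.1:7860; to actually invoke the Multi-frame rendering script through the API you would also have to fill "script_name" and "script_args", whose exact order depends on the script's UI and is not shown here):

    # Minimal sketch; "payload" is the dict built above.
    import base64
    import requests

    response = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
    response.raise_for_status()
    result = response.json()

    # The API returns the generated images as base64-encoded strings.
    for n, img_b64 in enumerate(result["images"]):
        with open(f"frame_{n:05d}.png", "wb") as f:
            f.write(base64.b64decode(img_b64))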

Struggling similarly to others: while it seems to be batch-outputting images, it's really just slightly redrawing the same frame over and over again. No matter what ControlNet or settings I use, it refuses to follow the guide frames at all.

Just a heads up for anyone using the MediaPipe ControlNet with this: make sure you set max faces to 3. If you don't, the animation will turn into a mess, because it will only produce a face map for one of the images it's using in the process.

Do you know of any edits to the script that would allow uploading more guide frames (e.g. 2000 frames)? If I upload too many guide frames, the script doesn't generate anything. I can only upload approximately 80 frames.

When you upload something to Gradio, it creates a temporary copy of the file and works with that (see https://www.gradio.app/docs/file#behavior), so maybe it's just a lack of disk space on your system partition?

However, I never tried the script with that many frames, and I also wouldn't recommend it because of the color degradation that WILL happen over time.


It takes a very, very long time to render with this script. If a usual generation with ControlNet takes 1 minute, then with the script the first frame takes 2 minutes, the second frame 5 minutes, the third frame already 25 minutes, and so on. I can't wait for it to finish (resolution is 576×1024, one ControlNet).

PS: RTX 3060, 12 GB.


Hey Xanthius! I made some changes to the script that I want to share with the community - are you okay with that?


Find my modified script here:
https://github.com/finefin/SD-scripts

  • Use every Nth frame: skip guide frames (for preview or ebsynth)
  • Render grid: enable to render the grid
  • Rows in grid: how many horizontal rows the grid should have
  • Fixed file upload

Thank you!


Will test it!

Error loading script: multi_frame_render-beta-fine-version.py
  File "D:\SDiff\webui\scripts\multi_frame_render-beta-fine-version.py", line 91
    <title>SD-scripts/multi_frame_render-beta-fine-version.py at main · finefin/SD-scripts · GitHub</title>
                                                                     ^
SyntaxError: invalid character '·' (U+00B7)


How to install it properly?

Error loading script: multi_frame_render-beta-fine-version.py
Traceback (most recent call last):
  File "D:\SDiff\webui\modules\scripts.py", line 257, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "D:\SDiff\webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1017, in get_code
  File "<frozen importlib._bootstrap_external>", line 947, in source_to_code
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\SDiff\webui\scripts\multi_frame_render-beta-fine-version.py", line 7
    <!DOCTYPE html>
    ^
SyntaxError: invalid syntax

You downloaded the wrong thing. You need to get the raw file: https://raw.githubusercontent.com/finefin/SD-scripts/main/multi_frame_render-bet...

Thanks bro, I re-downloaded it and that fixed it.

But when I start generating with the script, nothing happens:

AttributeError: module 'modules.shared' has no attribute 'log'

What am I doing wrong now? ;)

I did not change the log output at all. Maybe you need to update your SD install? I use SD.next by Vladmandic and don't have any problems.

You could try to comment out that function call on line 69: put a "#" in front of the line, like this:

# shared.log.info (....

If you do that, you won't see any progress in the UI, though.
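Alternatively (an untested sketch, assuming the script imports shared from modules in the usual way), you could guard the call instead of removing it, so progress still shows on builds that do have shared.log:

    # Hypothetical guard around the progress logging on line 69:
    # older builds don't expose shared.log, so fall back to print().
    if hasattr(shared, "log"):
        shared.log.info("Multi-frame render: processing frame")
    else:
        print("Multi-frame render: processing frame")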

Hello there! I'm having some issues with this tool.
The first time I used it, it worked perfectly and generated what I was looking for.
But after that, it started generating the first picture over and over again. No matter what settings I use and no matter how many pictures I select for the animation, I get the same result every time: the first generated picture, with only small differences between frames.
Has anybody dealt with this issue?

I think I fixed that by choosing "Balanced" as the Control Mode in ControlNet. Previously it was set to "ControlNet is more important".


Greetings, and before anything else, thanks for this much-appreciated script!
From what I've read in the comments, there is a limit to the number of images we can feed at once. I've tried the script update tips indicated by @sxy9699, but to no avail; I still get the same error when trying to upload longer sequences (above 1000 images).

Is there any way to get around this limitation and feed it thousands of pictures, much like in Deforum?

I'm enjoying the quality and coherence I get with your script and this technique, but I'd like to be able to throw huge sequences at it.

Thanks for your time, patience and, hopefully, help!

I used the script to try to generate a video; the first image is fine, but the second image takes forever to come out. Is there a way to solve this problem? Thank you!

There are many reasons why this can happen - take a look at the console output to narrow them down ;)

Could it be a problem with ControlNet?

Looks like you're running out of memory. You can try to lower the resolution, use fewer ControlNets, and enable low VRAM mode.

I have seen a number of folks use your script to great effect, and I am excited to use it; however, I am having some challenges. I found your script through a video by Geek At Play, and he uses different settings than you do. I have tried both and still don't have satisfactory results. When I follow your instructions, using 0 denoise for the first frame and any of the loopback settings, I get a great first frame, but the second and subsequent frames are completely different. I imagine it has something to do with ControlNet, but I'm not sure what is wrong. Any thoughts would be very helpful.

First of all, make sure you select "pixel perfect" in your ControlNet(s).

I use the A1111-fork by Vlad and in some cases I have to de- and reactivate ControlNets and LoRAs after sending my first frame from txt2img. Sometimes only a complete re-start of the GUI helps. I usually do a few test renders before I activate the video script in order to see if it's working.

You should set the initial denoise to 0 if you want to keep your 1st frame, otherwise it will be re-rendered.

The normal 'denoise' setting should be something between 0.9 and 1, though. If you set it below that you will generate garbage, and if you set it to 0 you get the same frame over and over again.

Sadly, I'm just not getting there. I have explored MANY settings! Although I have had varying and interesting results, I have yet to come close to anything I want. When I use denoise like you suggest, 0.9 to 1.0 (full on, baby), I get frames that look nothing like the original except in pose. I have tried all three loopbacks and the second frame is always dog poop. I have followed your instructions to the letter, and I have also followed Geek At Play's tutorial, and my results are very different from either. I am clearly missing something very basic, but I have no clue what it is.

I would think that what I am trying to accomplish should be doable: I have 489 frames of a man playing violin, and I need to make him into a Lycan (wolfman) playing violin. I can test with any frame in img2img and get the output that I desire, but when I run the script, I get all kinds of weirdness. The strangest was when it put glasses and a gas mask on him - nothing in the prompt or the guide frames would suggest that!

Is there anything you can think of that might solve my issues, or should I just give up and go through the longer and more arduous process of doing this in Deforum with hybrid video?

Thanks again for developing a tool that is working well for so many!     

Evidently I'm a glutton for punishment! I spent several hours that I just didn't have to spend trying to get this to work, mainly because my best alternative, SD-CN-Animation, is broken and therefore not an option. What happens every time, regardless of the settings I use, is that in the second generation the subject is moved to the left edge of the frame. Maybe you can help me understand this weirdness...

Oh yes, I had a "moved to the left by half the width" issue once or twice, and I don't really know why it happens. I think it was caused by the checkpoint or a LoRA that I used. Did you take a look at the console output? It sometimes gives you a hint about what is missing or not working correctly.

Your script is the best for SD at the moment, thank you very much!

Same problem. Can you make it so we can just specify a FOLDER name instead of uploading? Why upload 5000 files?? Please?

Getting "UnidentifiedImageError: cannot identify image file" error when I click the generate button. Guide frames are selected. I can see the temp image file was created but with 0kb in size. Thanks in advance!

You can fix this error by editing the Python script as described here: https://itch.io/post/7576730

Thanks, but it's not working. I found out that I can drop images, but that is also not working. I am using Google Drive / Colab.

reference_imgs = gr.File(file_count="directory", label="Upload Guide Frames", show_label=True)

Could you add a URL/path option in the code instead, so I can point it to a Google Drive directory and not have to upload?

Oh, you're trying to run it on Colab? Sorry, I can't help you with that; I only run this script locally.

@Xanthius. Good work on the script. I am working on a new AAA film and I have got a job for you. How do I get in touch?

Hey! I saw that Xanthius hasn't replied to your comment for a month now and wanted to offer my expertise in this field, if you are still interested. I experiment a lot with AI stylization, so if you want to get in touch, [aleksejfifa15@gmail.com] is my email address.


Here's another short animation that I made :)

Warning: loud noise! Turn down your audio volume!


You mentioned in this tutorial that you've done non-face subjects using this technique, and I'm really interested in locking in a decent rotoscoping method using Stable Diffusion, hopefully one with only slight alterations/flickering. Right now I'm trying the "Loving Vincent" style Van Gogh rotoscoping using this technique with some drone footage I took in Switzerland, but I ran into several issues. When I follow the exact steps in the tutorial, each frame gets darker and darker, the lines eventually become super dark and take over the image, all of the brush work gets flattened out, and it starts essentially glitching after about 10 frames.

So I tinkered around a bit and found that if I change the last setting (loopback source) to "firstgen" instead of "previous frame", the darkening issue is resolved; however, it seems like it's not really moving, but just kind of looping the first frame over and over. No more darkening, it gets additive for about 10 frames, and then it pretty much just stays still and doesn't move, with slight flickering.

My video's camera pans slightly, and that isn't reflected when I render this. Basically, I'm trying to batch the process so that Stable Diffusion paints over every frame of an image sequence, while using ControlNet in a way that locks in the style for subsequent frames in the sequence. Any thoughts or recommendations using your method?

Hello! Sent a few $ over yesterday and grabbed the script. Getting errors though... is it still being updated at all?

Dropping by just to cheer you on! I've tried the other methods (SD-CN-Animation and TemporalNet) and your script is still by far the most reliable and flexible method.

Could you make a video tutorial, please? I can't quite figure out how it's done from the text description alone.

You can check my video:

The most amazing script ever, thank you so much for this!

Any word on the update to upload custom masks for the frames?

This would allow higher-resolution workflows through tiling sections together, and much finer control.
Also, the batch idea for the guide image would be great for guiding the animation with ControlNet better.
Fantastic script!!
I'm figuring out some workflows and will update with results soon.

Still works fine with my updated a1111 :) though there seems to be a limit to how many init images I can upload.

Would you be able to update your script to work with the latest version of A1111? The version mentioned below that supports your script still didn't support AMD cards, so rendering is unbearably slow. Thanks in advance!

Works great, but you have to use an older webui version (git checkout a9eef1fbb1dcdce4f0eb0b8e0f79dcd4c96713e1), as described in MonsterMMORPG's video. Hope this can get updated to the latest version of webui!

What would be the minimum VRAM for this? I keep getting out-of-memory errors even when running ControlNet in low VRAM mode. It works fine normally for batch img2img with ControlNet, so I'm confused why.

The most up-to-date and detailed tutorial is here. Moreover, I use a file path instead of upload. Works on RunPod as well.

I'm not able to get this to work; I simply get the same image as the output. Does anyone know a solution to this?

Enable the "Allow other script to control this extension" option in the ControlNet settings.

I'm getting an error.  All my extensions are up to date and it's saving the images, but every image save I'm getting -

ERROR:sd_dynamic_prompts.callbacks:Error save prompt file                        | 20/18320 [01:21<22:37:33,  4.45s/it]
Traceback (most recent call last):
  File "F:\stable-diffusion-webui\extensions\sd-dynamic-prompts\sd_dynamic_prompts\callbacks.py", line 24, in on_save
    png_info = image_save_params.pnginfo["parameters"]
KeyError: 'parameters'
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 1 images in a total of 1 batches.
Loading model from cache: control_sd15_canny [fef5e48e]

---

I do have JPG as my default save format; I changed it to PNG and still have the same error. I'm going to let it render out - it's going to be a while. When it's done, I'm going to disable dynamic_prompts and see if that fixes it.

Disabling dynamic_prompts removed the error.

The most up-to-date and detailed tutorial has been released. Moreover, I use a file path instead of upload. Works on RunPod as well.

I see there is a new third-frame option called "GuideImg". What does it do? Can you please update the docs on this page about it? Thanks.

"GuideImg" just means that it uses the guide image for that frame, i.e. the same image passed to ControlNet.

thanks!


Getting "UnidentifiedImageError: cannot identify image file" error when I click the generate button. Guide frames are selected. I can see the temp image file was created but with 0kb in size. Thanks in advance!


sxy9699 said:

I found an easy way to solve this problem: simply change the code in multi_frame_render.py

        reference_imgs = gr.UploadButton(label="Upload Guide Frames", file_types = ['.png','.jpg','.jpeg'], live=True, file_count = "multiple") 

to

reference_imgs = gr.File(file_count="directory", label="Upload Guide Frames", show_label=True)

Thanks for this quick fix!
In case you want to upload single files instead of a whole folder, change the line to this:

reference_imgs = gr.File(file_count="multiple", file_types = ['.png','.jpg','.jpeg'], label="Upload Guide Frames", show_label=True, live=True)

The script doesn't work with the current version of the Automatic1111 web UI. After checking out an old version of the web UI, it works well. For more details, you can watch the video below posted by MonsterMMORPG, starting from 13:53.


Hi Xanthius! Love your work here - I'm the founder of https://kaiber.ai - a generative AI video platform. We recently worked on Linkin Park's LOST music video - was wondering if you were interested in consulting/contract work. Drop me a line at eric@kaiber.ai if so!


The most up-to-date and detailed tutorial has been released.

Thanks!


Ah yes, it finally works! Thanks again for the code tweak!
I would never have figured out how to fix this by myself!
I've been using the classic batch img2img and it already gave excellent results, but I can't wait to see and share what I can get from Xanthius's script!


Great tutorial! Do you know how this would work with Google Colab? I'm very new to this and haven't found a way to roll back to the earlier commit. I'm also getting the "UnidentifiedImageError" with the latest version. 

Thanks!


Dear developer, please add a folder path option instead of upload.
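For anyone comfortable editing the script themselves in the meantime, a hypothetical sketch of the idea (the helper name and the actual integration point in the script are assumptions):

    # Hypothetical helper: collect guide frames from a local folder instead of
    # uploading them; sorting by filename keeps the animation order.
    import glob
    import os

    def collect_guide_frames(folder):
        patterns = ("*.png", "*.jpg", "*.jpeg")
        files = []
        for pattern in patterns:
            files.extend(glob.glob(os.path.join(folder, pattern)))
        return sorted(files)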


With the new update the script doesn't work anymore; I keep receiving this error:


UnidentifiedImageError: cannot identify image file 'C:\\Users\\Username\\AppData\\Local\\Temp\\out-00008lqje9jfr.png'

Same error. I decided to build a €1500 PC because this script's results looked so good (when it works). I even tried to modify the line of code associated with the error, to remove the "8lqje9jfr" suffix, by replacing line 77 of the code with this:

p.control_net_input_image = Image.open(reference_imgs[i].name.replace("\\\\", "\\")[:-12] + reference_imgs[i].name.replace("\\\\", "\\")[-4:]).convert("RGB").resize((initial_width, p.height), Image.ANTIALIAS)

but that's not it.
I tried different image sequence renderers because I thought the images were faulty; I doubt they are.
I tried JPEG and PNG - not it.
I tried with and without ControlNet - still no.
Maybe it was the resolution, so I made sure it all matched - not it.

I will use the standard ControlNet for animation for now, until it's solved.

I am making a tutorial video for this script right now. I got it working easily, but not with the latest Automatic1111. The biggest problem with the script is that it uses upload instead of a directory; working on that too.

It will be on my channel: https://www.youtube.com/@SECourses


Keep us posted! Until then, I'm not giving up.
I'm still trying to understand what is happening. I naively thought that files like 'C:\\Users\\Username\\AppData\\Local\\Temp\\out-00008lqje9jfr.png' were basically our guide frames, but evidently not - it's not even in the same directory.
When I look at the properties of that file, it's actually empty; it says 0-byte file size... that might be why it's unidentified.

Tutorial released here:

The same happens to me.

Here's a tutorial:

Tutorial released here:


I found an easy way to solve this problem: simply change the code in multi_frame_render.py

        reference_imgs = gr.UploadButton(label="Upload Guide Frames", file_types = ['.png','.jpg','.jpeg'], live=True, file_count = "multiple") 


to

reference_imgs = gr.File(file_count="directory", label="Upload Guide Frames", show_label=True)

like this:

It works for me!


Thanks a lot! It worked.

Same same
