```
venv "D:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: <none>
Commit hash: ebf229bd1727a0f8f0d149829ce82e2012ba7318
Installing requirements
Launching Web UI with arguments: --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [fc2511737a] from D:\stable-diffusion-webui-directml\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: D:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
DiffusionWrapper has 859.52 M params.
Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).
Loading VAE weights found near the checkpoint: D:\stable-diffusion-webui-directml\models\VAE\chilloutmix_NiPrunedFp32Fix.vae.ckpt
Applying optimization: InvokeAI... done.
Textual inversion embeddings loaded(0):
Model loaded in 5.6s (load weights from disk: 1.1s, create model: 0.8s, apply weights to model: 0.9s, apply half(): 0.5s, load VAE: 0.6s, move model to device: 1.5s).
```

version: • python: 3.10.6 • torch: 2.0.0+cpu • xformers: N/A • gradio: 3.31.0 • checkpoint: fc2511737a
### Installation on Windows 10/11 with NVidia-GPUs using release package

1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111 ... ases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.

> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111 ... -Run-on-NVidia-GPUs)
### Automatic Installation on Windows

1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python are not supported by torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui-directml repository, for example by running `git clone https://github.com/lshqqytiger/s ... ebui-directml.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.
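Step 1's version requirement can be verified from a shell (e.g. Git Bash) before launching; a minimal sketch, assuming a `python3` launcher is on PATH (on Windows the command is usually just `python`):

```bash
# Check that the Python on PATH is a 3.10.x build; newer interpreters
# are not supported by the torch version this webui installs.
ver="$(python3 --version 2>&1 | awk '{print $2}')"
case "$ver" in
  3.10.*) echo "OK: Python $ver" ;;
  *)      echo "Warning: Python $ver found; the webui expects 3.10.x" ;;
esac
```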
### Automatic Installation on Linux

1. Install the dependencies:

```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed in and execute the following command:

```bash
bash <(wget -qO- https://raw.githubusercontent.co ... bui/master/webui.sh)
```

3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
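The options mentioned in step 4 live in `webui-user.sh` as commented-out variables. Below is a hedged sketch of that style of override; the variable names follow the upstream template, and the flag values are only illustrative examples, not recommendations:

```bash
#!/usr/bin/env bash
# Example webui-user.sh-style overrides (illustrative values, not defaults)
export COMMANDLINE_ARGS="--medvram --autolaunch"  # extra flags handed to launch.py
venv_dir="venv"                                   # where webui.sh creates the virtualenv
echo "launch args: $COMMANDLINE_ARGS (venv: $venv_dir)"
```

`webui.sh` sources this file at startup, so uncommented variables here override its defaults.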

### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111 ... on-on-Apple-Silicon).
## Contributing

Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111 ... i/wiki/Contributing)

## Documentation

The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits

Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.

- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based- ... sion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- Olive - https://github.com/microsoft/Olive
- (You)
https://cloud.189.cn/t/FvqQbeZfYrui (access code: 9ur0)

Install the runtime environment first (the installers are in the root directory of the network drive):

1. Git-2.41.0-64-bit
2. python-3.10.6-amd64, making sure to check "Add Python to PATH" during installation
3. The launcher's runtime dependency, .net-dotnet-6.0.11

Once the runtime environment above is installed, open `webui-user.bat` in the root directory and wait a few seconds for stable-diffusion-webui to open. If it opens in the system's built-in IE browser, manually open http://127.0.0.1:7860 in a newer browser such as Edge.
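If a browser does not open automatically, you can first confirm that the server is actually up before visiting http://127.0.0.1:7860. A small sketch using bash's built-in `/dev/tcp` pseudo-device, so it needs no curl or wget:

```bash
# Probe the local webui port; the subshell succeeds only if something is listening.
if (exec 3<>/dev/tcp/127.0.0.1/7860) 2>/dev/null; then
  echo "webui is listening on 127.0.0.1:7860"
else
  echo "nothing listening on 127.0.0.1:7860 yet"
fi
```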