```
venv "D:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: <none>
Commit hash: ebf229bd1727a0f8f0d149829ce82e2012ba7318
Installing requirements
Launching Web UI with arguments: --autolaunch
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [fc2511737a] from D:\stable-diffusion-webui-directml\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: D:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
DiffusionWrapper has 859.52 M params.
Startup time: 15.1s (import torch: 3.6s, import gradio: 2.8s, import ldm: 1.0s, other imports: 4.4s, setup codeformer: 0.1s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).
Loading VAE weights found near the checkpoint: D:\stable-diffusion-webui-directml\models\VAE\chilloutmix_NiPrunedFp32Fix.vae.ckpt
Applying optimization: InvokeAI... done.
Textual inversion embeddings loaded(0):
Model loaded in 5.6s (load weights from disk: 1.1s, create model: 0.8s, apply weights to model: 0.9s, apply half(): 0.5s, load VAE: 0.6s, move model to device: 1.5s).

version: • python: 3.10.6 • torch: 2.0.0+cpu • xformers: N/A • gradio: 3.31.0 • checkpoint: fc2511737a
```
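The `torch: 2.0.0+cpu` build and the "Torch not compiled with CUDA enabled" warning above are expected on this fork: GPU work goes through DirectML rather than CUDA. As a quick sanity check that a DirectML device is visible, something like this should work (a minimal sketch, assuming the venv above is active and the `torch-directml` package is installed):

```bash
# Query the DirectML backend from the webui's venv; device_count and
# device_name are part of the torch-directml package.
python -c "import torch_directml; print(torch_directml.device_count(), torch_directml.device_name(0))"
```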
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111 ... ases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.

> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111 ... -Run-on-NVidia-GPUs)
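For reference, the same three steps from a shell. This is a minimal sketch: the full asset URL is an assumption reconstructed from the truncated release link above, so verify it against the release page first.

```bash
# Fetch and unpack the v1.0.0-pre release package (URL assumed, see note above),
# then update to the latest code and launch.
curl -LO https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/download/v1.0.0-pre/sd.webui.zip
unzip sd.webui.zip -d sd.webui
cd sd.webui
./update.bat
./run.bat
```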
### Automatic Installation on Windows

1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui-directml repository, for example by running `git clone https://github.com/lshqqytiger/s ... ebui-directml.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user (see the sketch below for a command-line equivalent).
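A minimal command-line sketch of steps 3 and 4; the full clone URL is an assumption reconstructed from the truncated link in step 3, so verify it before running:

```bash
# Clone the DirectML fork (URL assumed from the truncated link above) and launch.
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml.git
cd stable-diffusion-webui-directml
# Launch options such as --autolaunch go in the "set COMMANDLINE_ARGS=" line
# of webui-user.bat before the first run.
./webui-user.bat
```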
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed in and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.co ... bui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options, as in the sketch below.
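For example, launch flags can be set in `webui-user.sh`. A minimal sketch: the variable names come from the stock `webui-user.sh` template, and the values shown are only illustrative.

```bash
# Commandline arguments passed on to webui.py, e.g. the --autolaunch flag
# seen in the log at the top of this post:
export COMMANDLINE_ARGS="--autolaunch"

# python3 executable to use (uncomment to pin a specific interpreter):
#python_cmd="python3.10"
```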
### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111 ... on-on-Apple-Silicon).
## Contributing

Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111 ... i/wiki/Contributing)

## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in the `Settings -> Licenses` screen, and also in the `html/licenses.html` file.

- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based- ... sion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (*), Aleksander Holynski (*), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- Olive - https://github.com/microsoft/Olive
- (You)

Download: https://cloud.189.cn/t/FvqQbeZfYrui (access code: 9ur0)

Install the runtime environment first; the installers are in the root directory of the cloud drive:

1. Git-2.41.0-64-bit
2. python-3.10.6-amd64 (be sure to check "Add Python to PATH" during installation)
3. The launcher's runtime dependency, .NET (net-dotnet-6.0.11 or newer)

Once the runtime environment is installed, run `webui-user.bat` in the root directory and wait a few seconds for stable-diffusion-webui to open. If the page opens in the system's built-in IE browser, manually open http://127.0.0.1:7860 in a newer browser such as Edge.
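To confirm the three prerequisites are installed and on PATH before launching, a quick check from cmd or PowerShell (exact version numbers will vary with the installers used):

```bash
# Each command should print version information; if any is not recognized,
# re-run the corresponding installer from the cloud-drive root.
git --version           # e.g. git version 2.41.0.windows.1
python --version        # should report Python 3.10.6
dotnet --list-runtimes  # should list a 6.0.x runtime for the launcher
```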