
Commit 7608faf: Update README.md
1 parent cb17413

File tree: 1 file changed, +7 additions, -19 deletions

README.md

An example app for running text-to-image or image-to-image models to generate images using [Apple's Core ML Stable Diffusion implementation](https://github.com/apple/ml-stable-diffusion).
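For context, Apple's package exposes a pipeline type for running these models from Swift. Below is a minimal sketch of driving it directly; this is not the example app's own code, the class and parameter names follow recent releases of `ml-stable-diffusion` (they may differ in yours), and the model path is purely illustrative:

```swift
import Foundation
import CoreML
import StableDiffusion  // from https://github.com/apple/ml-stable-diffusion

// A sketch, not the example app's actual code. `resourcesAt:` must point at a
// folder of compiled split_einsum models; the path below is an assumption.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // split_einsum targets the Neural Engine

let pipeline = try StableDiffusionPipeline(
    resourcesAt: URL(fileURLWithPath: "Documents/models/my-model"),
    configuration: config,
    reduceMemory: true
)

var params = StableDiffusionPipeline.Configuration(prompt: "a red bicycle")
params.stepCount = 25
params.seed = 42

let images = try pipeline.generateImages(configuration: params) { _ in true }
// `images` holds optional CGImages; nil entries were dropped by the safety checker.
```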
![The concept](https://github.com/The-Igor/coreml-stable-diffusion-swift-example/blob/main/img/img_08.gif)
## How to generate an image

1. Place at least one of your prepared ``split_einsum`` models into the ‘Local Models’ folder. Locate the ‘Documents’ folder through the app by tapping the ‘Local Models’ button. If the folder is empty, create a folder named ‘models’. Refer to the folder hierarchy in the image below for guidance. The example app supports only ``split_einsum`` models; in terms of performance, ``split_einsum`` is the fastest way to get a result.
2. Pick the model you placed in the local folder from the list. Click the update button if you added a model while the app was running.
3. Enter a prompt or pick an image and press "Generate" (you don't need to adjust the image size manually). It might take up to a minute or two to get the result.
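The folder layout from step 1 can be sketched from the command line. The model folder name below is only an example (it matches the archive in the ‘Model set example’ section), and the real path lives under the app's sandboxed Documents directory:

```shell
# Illustrative hierarchy: the app looks for compiled split_einsum model
# folders under Documents/models/ (the model name here is an example).
mkdir -p "Documents/models/coreml-stable-diffusion-2-base_split_einsum_compiled"
ls "Documents/models"
```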

![The concept](https://github.com/The-Igor/coreml-stable-diffusion-swift-example/blob/main/img/img_03.png)
## Model set example
[coreml-stable-diffusion-2-base](https://huggingface.co/pcuenq/coreml-stable-diffusion-2-base/blob/main/coreml-stable-diffusion-2-base_split_einsum_compiled.zip)

### Performance

The speed can be unpredictable. Sometimes a model will suddenly run a lot slower than before. It appears as if Core ML is trying to be smart about how it schedules work, but the result is not always optimal.

## SwiftUI example [for the package](https://github.com/The-Igor/coreml-stable-diffusion-swift)

## Case study: [Deploying Transformers on the Apple Neural Engine](https://machinelearning.apple.com/research/neural-engine-transformers)