# CoreML stable diffusion image generation example app
### Please star the repository if you find continued development of this package worthwhile. It helps me understand which packages deserve more effort.
An example app that runs text-to-image or image-to-image models to generate images using [Apple's Core ML Stable Diffusion implementation](https://github.com/apple/ml-stable-diffusion).
### Step 3
Enter a prompt or pick a picture and press "Generate". (You don't need to resize the image manually.) It may take up to a minute or two to get the result.
### Typical set of files for a model and the purpose of each file
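Under the hood, a generation step like this maps onto Apple's `StableDiffusion` Swift package. A minimal sketch, assuming the package from the linked repository and a hypothetical local path to the compiled model files:

```swift
import Foundation
import CoreML
import StableDiffusion

// Hypothetical path: the folder containing the compiled .mlmodelc files.
let resourceURL = URL(fileURLWithPath: "/path/to/model")

let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndNeuralEngine

// reduceMemory trades speed for a smaller footprint (useful on iOS).
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourceURL,
    configuration: mlConfig,
    reduceMemory: true
)
try pipeline.loadResources()

var config = StableDiffusionPipeline.Configuration(
    prompt: "a photo of an astronaut riding a horse"
)
config.stepCount = 25
config.seed = 42
// For image-to-image, also set config.startingImage and config.strength.

let images = try pipeline.generateImages(configuration: config) { progress in
    print("step \(progress.step) of \(progress.stepCount)")
    return true // return false to cancel generation
}
```

The progress handler is called after each denoising step, which is how the app can report progress during the minute or two a generation takes.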
The speed can be unpredictable; sometimes a model will suddenly run much slower than before. It appears that Core ML tries to be smart about how it schedules work, but it isn't always optimal.
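One way to make timings more repeatable is to restrict which compute units Core ML may schedule onto, rather than leaving it at the default `.all`. A sketch using `MLModelConfiguration`:

```swift
import CoreML

// Pinning the compute units limits Core ML's scheduling choices,
// which often gives more consistent (if not peak) performance.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine // or .cpuAndGPU, .cpuOnly
```

Which option is fastest depends on the device and the model variant (e.g. split-einsum models favor the Neural Engine), so it is worth benchmarking each.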
### The package [source](https://github.com/The-Igor/coreml-stable-diffusion-swift)
### Deploying Transformers on the Apple Neural Engine [Case study](https://machinelearning.apple.com/research/neural-engine-transformers)