An example app for running text-to-image or image-to-image models to generate images using [Apple's Core ML Stable Diffusion implementation](https://github.com/apple/ml-stable-diffusion).
## How to get a generated image
1. Place at least one of your prepared ``split_einsum`` models into the ‘Local Models’ folder. You can find the ‘Documents’ folder through the interface by tapping the ‘Local Models’ button. If the folder is empty, create a folder named ‘models’. Refer to the folder hierarchy in the image below for guidance.

The example app supports only ``split_einsum`` models; in terms of performance, ``split_einsum`` is the fastest way to get a result.

2. Select the model you placed in the local folder from the list. Tap the update button if you added a model while the app was running.

3. Enter a prompt or pick a picture and press "Generate". (You don't need to resize the image manually.) It might take a minute or two to get the result.
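For readers who want to drive the same generation steps programmatically rather than through the app's UI, the underlying `StableDiffusion` Swift package from Apple's repository can be called directly. A minimal sketch follows; note that the exact initializer and configuration signatures vary between package releases, and the resource path and prompt here are placeholders:

```swift
import CoreML
import StableDiffusion  // from apple/ml-stable-diffusion

// Placeholder path: point this at an unpacked split_einsum model folder.
let resourcesURL = URL(fileURLWithPath: "/path/to/models/coreml-stable-diffusion-2-base")

let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndNeuralEngine  // split_einsum is built for the Neural Engine

// Load the pipeline from the compiled Core ML resources.
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourcesURL,
    controlNet: [],
    configuration: mlConfig,
    reduceMemory: true
)
try pipeline.loadResources()

var config = StableDiffusionPipeline.Configuration(prompt: "a photo of a lighthouse at dawn")
config.stepCount = 25
config.seed = 42
// For image-to-image, recent releases also let you set a starting image and strength.

let images = try pipeline.generateImages(configuration: config)
```

Generation is compute-heavy, so in an app this work belongs off the main thread; the step count trades quality for speed.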
| 14 | + |
24 | 15 |
|
## Model set example

[coreml-stable-diffusion-2-base](https://huggingface.co/pcuenq/coreml-stable-diffusion-2-base/blob/main/coreml-stable-diffusion-2-base_split_einsum_compiled.zip)
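One way to fetch and place this model from the command line, assuming the app's Documents folder is reachable from a shell. The `DOCS` path is illustrative; locate the real folder via the ‘Local Models’ button:

```shell
# Illustrative path: substitute the app's actual Documents directory.
DOCS="${DOCS:-$HOME/AppDocuments}"
mkdir -p "$DOCS/models"

# Hugging Face serves raw file downloads from /resolve/ rather than /blob/:
# curl -L -o sd2-base.zip \
#   "https://huggingface.co/pcuenq/coreml-stable-diffusion-2-base/resolve/main/coreml-stable-diffusion-2-base_split_einsum_compiled.zip"
# unzip sd2-base.zip -d "$DOCS/models/"

ls -d "$DOCS/models"
```

After unpacking, the app should list the model once you tap the update button.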
### Performance

The speed can be unpredictable; sometimes a model will suddenly run much slower than before. It appears that Core ML schedules work adaptively, but its choices are not always optimal.

## SwiftUI example [for the package](https://github.com/The-Igor/coreml-stable-diffusion-swift)

## Case study [Deploying Transformers on the Apple Neural Engine](https://machinelearning.apple.com/research/neural-engine-transformers)