Salad offers developers an easy-to-use, fully managed Inference API for Stable Diffusion. With the Salad Inference Endpoints API, developers can scale inference up and down on demand without configuring or managing any infrastructure. In fact, our recent Stable Diffusion 2.1 benchmark shows that Salad is the most cost-effective solution among similar “one-click deployment” services.
Generate more than 1,000 images per dollar, and only pay for what you need. Scale from 10,000 images per hour down to zero, and back again!
The secret to our savings and elasticity lies in our unique recipe: community cloud. SaladCloud is a fully people-powered alternative to traditional cloud services. With 10,000+ consumer GPUs available at any moment, our infrastructure offers high availability, strong performance, and scalable container solutions for a fraction of the cost.
To ensure an accurate comparison, we kept our benchmark consistent across providers, using the same parameters for each test:
Furthermore, each of these services represents a true one-click deployment: fully managed and serverless. If you’re interested in launching your own models and saving even more on deployment, check out our pricing for Salad Container Engine instead. With SCE, you can achieve 3,000+ images per dollar!
The proof is in the pudding, and we welcome you to test these findings yourself. To use our Stable Diffusion 2.1 Endpoint, just follow these steps:
And that’s all, folks: easy as pie. For more detailed instructions on using our endpoint, check out our documentation.
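As an illustration, a call to a text-to-image inference endpoint typically looks something like the sketch below. The URL, header name, and payload fields here are hypothetical placeholders, not Salad’s actual API schema; the real endpoint URL, authentication header, and parameter names come from the documentation and the SaladCloud portal.

```python
import json
import urllib.request

# Hypothetical values: substitute the real endpoint URL and API key
# from the SaladCloud portal.
ENDPOINT_URL = "https://example.com/v1/stable-diffusion-2-1"
API_KEY = "your-api-key"

def build_request(prompt: str, steps: int = 25,
                  width: int = 512, height: int = 512) -> urllib.request.Request:
    """Assemble a JSON inference request (field names are illustrative)."""
    payload = json.dumps({
        "prompt": prompt,
        "num_inference_steps": steps,
        "width": width,
        "height": height,
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT_URL,
        data=payload,
        headers={"Content-Type": "application/json", "Api-Key": API_KEY},
        method="POST",
    )

req = build_request("a bowl of salad, studio lighting")
# urllib.request.urlopen(req) would send the request; the response body
# would carry the generated image (commonly base64-encoded in JSON).
```

The same request can be issued from any language or from `curl`; the only moving parts are the endpoint URL, the API key, and the JSON payload.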
Want to run your own custom models on Salad? Interested in scaling up to thousands of nodes for less than half the cost of other providers? Sign up for our Salad Container Engine service on the SaladCloud portal. And feel free to join our SaladCloud Discord server and talk to the team about your needs directly.