Save 80% on AI modeling by switching to Salad's distributed cloud. Our fully managed container engine delivers more inferences per dollar on dedicated GPUs and CPUs.
Cutting-edge AI models require expensive GPU processing. Training runs can cost thousands, leaving little room to iterate and optimize. Distributed data parallel (DDP) architectures can accelerate the training process, but networking GPU clusters and sourcing cloud resources can be prohibitively expensive.
Salad Container Engine (SCE) leverages otherwise idle GPU hardware to offer performant modeling at 20% of the cost of legacy cloud infrastructure. Our fully managed container platform features dedicated edge processing and accommodates standard Docker workflows and tooling.
One Salad customer trained specialized NLP models with PyTorch on public web data to perform various tasks, such as predicting price fluctuations on e-commerce platforms.
Before deploying on Salad, their team trained models on a single AWS p3.8xlarge EC2 instance with four NVIDIA V100 GPUs at an on-demand rate of $12.24/hr. Each training run took roughly 24 hours to complete, bringing the daily cost of compute to $293.76. The team averaged 12 runs and around $3,525 per month.
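The cost figures above can be reproduced with some quick arithmetic. The sketch below uses only the numbers stated in this case study (the $12.24/hr on-demand rate, ~24-hour runs, and ~12 runs per month); variable names are illustrative.

```python
# Back-of-the-envelope reproduction of the AWS training costs cited above.
HOURLY_RATE = 12.24    # USD/hr, p3.8xlarge on-demand (from the case study)
RUN_HOURS = 24         # approximate duration of one training run
RUNS_PER_MONTH = 12    # average monthly run count

cost_per_run = HOURLY_RATE * RUN_HOURS        # daily cost of one run
monthly_cost = cost_per_run * RUNS_PER_MONTH  # total monthly compute spend

print(f"Cost per run: ${cost_per_run:,.2f}")   # $293.76
print(f"Monthly cost: ${monthly_cost:,.2f}")   # $3,525.12
```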
The team was eager to improve performance by testing different model architectures in successive iterations, and to develop additional models for experimental applications. Unfortunately, the costs of their AWS configuration prevented them from increasing output or experimenting.
By switching to Salad Container Engine (SCE), they quickly realized an 80% reduction in overall training cost for a run of the same duration. These savings afforded new opportunities to experiment. In subsequent trials, the customer successfully conducted four times as many runs, improved overall model performance, and validated new specialized models for their product portfolio.
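To see what an 80% cost reduction implies, the sketch below extends the same arithmetic. It assumes the monthly AWS baseline computed from the figures above; the exact SCE per-run rate is not disclosed in this case study, so it is derived purely from the stated 80% reduction.

```python
# Illustrative savings math; assumes the 80% reduction applies per run.
aws_monthly = 12.24 * 24 * 12        # AWS baseline: $3,525.12 for 12 runs
sce_monthly = aws_monthly * 0.20     # same 12 runs at an 80% reduction
sce_cost_per_run = sce_monthly / 12  # implied per-run cost on SCE

# How many SCE runs fit inside the old AWS monthly budget?
runs_for_same_budget = aws_monthly / sce_cost_per_run

print(f"SCE cost per run: ${sce_cost_per_run:,.2f}")  # $58.75
print(f"Runs for the old budget: {runs_for_same_budget:.0f}")  # 60
```

At the implied per-run rate, the old monthly budget covers five times as many runs, which is consistent with the customer comfortably quadrupling their run count while still spending less than before.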