Replicate lets you run machine learning models with a few lines of code, without needing to understand how machine learning works.
Use our Python library:
import replicate
output = replicate.run(
    "stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
    input={"prompt": "an astronaut riding on a horse"},
)
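For an image model like this one, `output` is typically a list of URLs pointing at the generated files (the exact return type can vary by client version). As a purely illustrative sketch, not part of the `replicate` library, a stdlib-only helper can save them locally:

```python
import urllib.request


def local_name(url: str, index: int, prefix: str = "output") -> str:
    """Choose a local filename for an output URL, keeping its extension if it has one."""
    tail = url.rsplit("/", 1)[-1]
    ext = tail.rsplit(".", 1)[-1] if "." in tail else "png"
    return f"{prefix}-{index}.{ext}"


def download_outputs(urls: list[str], prefix: str = "output") -> list[str]:
    """Save each output URL to a numbered local file, returning the paths."""
    paths = []
    for i, url in enumerate(urls):
        path = local_name(url, i, prefix)
        urllib.request.urlretrieve(url, path)
        paths.append(path)
    return paths
```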
...or query the API directly with your tool of choice:
$ curl -s -X POST \
  -d '{"version": "db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", "input": {"prompt": "an astronaut riding on a horse"}}' \
  -H "Authorization: Token $REPLICATE_API_TOKEN" \
  -H 'Content-Type: application/json' \
  https://api.replicate.com/v1/predictions
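This POST returns immediately with a prediction object rather than the final output: its `status` moves from `starting` through `processing` to a terminal state, and you poll the prediction's GET endpoint until it gets there. A minimal stdlib-only polling sketch (the function names here are ours, not from an official client):

```python
import json
import time
import urllib.request

API = "https://api.replicate.com/v1/predictions"


def is_terminal(status: str) -> bool:
    """A prediction stops changing once it succeeds, fails, or is canceled."""
    return status in ("succeeded", "failed", "canceled")


def get_prediction(prediction_id: str, token: str) -> dict:
    """Fetch the current state of a prediction from the HTTP API."""
    request = urllib.request.Request(
        f"{API}/{prediction_id}",
        headers={"Authorization": f"Token {token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def wait_for(prediction_id: str, token: str, interval: float = 1.0) -> dict:
    """Poll until the prediction reaches a terminal status, then return it."""
    while True:
        prediction = get_prediction(prediction_id, token)
        if is_terminal(prediction["status"]):
            return prediction
        time.sleep(interval)
```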
Thousands of models, ready to use
Machine learning can do some extraordinary things. Replicate's community of machine learning hackers has shared thousands of models that you can run.
Language models: models that can understand and generate text
Video creation and editing: models that create and edit videos
Super resolution: upscaling models that create high-quality images from low-quality images
Diffusion models: image and video generation models trained with diffusion processes
Image to text
Explore models, or learn more about our API
Imagine what you can build
With Replicate and tools like Next.js and Vercel, you can wake up with an idea and watch it hit the front page of Hacker News by the time you go to bed.
Here are a few of our favorite projects built on Replicate. They're all open-source, so you can use them as a starting point for your own projects.
See how well you age with AI.
Paint by Text by Zeke Sikelianos: edit your photos using written instructions, with the help of an AI.
restorePhotos.io by Hassan El Mghari: restore old photos using AI.
02 Push
You're building new products with machine learning. You don't have time to fight Python dependency hell, get mired in GPU configuration, or cobble together a Dockerfile.
That's why we built Cog, an open-source tool that lets you package machine learning models in a standard, production-ready container.
First, define the environment your model runs in with cog.yaml:
build:
  gpu: true
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.10"
  python_packages:
    - "torch==1.13.1"
predict: "predict.py:Predictor"
Next, define how predictions are run on your model with predict.py:
from cog import BasePredictor, Input, Path
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.model = torch.load("./weights.pth")

    # The arguments and types the model takes as input
    def predict(self,
        image: Path = Input(description="Grayscale input image")
    ) -> Path:
        """Run a single prediction on the model"""
        processed_image = preprocess(image)
        output = self.model(processed_image)
        return postprocess(output)
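The `preprocess` and `postprocess` helpers are left undefined above; their exact form depends on your model. As a purely illustrative sketch for a grayscale pipeline, they might normalize 8-bit pixel values into the [0, 1] range on the way in and clamp model outputs back to valid bytes on the way out (plain-Python stand-ins here, with no tensor library dependency):

```python
def preprocess(pixels: list[list[int]]) -> list[list[float]]:
    """Map 8-bit grayscale pixel values into the [0, 1] range a model expects."""
    return [[value / 255.0 for value in row] for row in pixels]


def postprocess(activations: list[list[float]]) -> list[list[int]]:
    """Clamp model outputs back into valid 8-bit pixel values."""
    return [
        [max(0, min(255, round(value * 255))) for value in row]
        for row in activations
    ]
```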
Now, you can run predictions on this model locally:
$ cog predict -i image=@input.jpg
--> Building Docker image...
--> Running Prediction...
--> Output written to output.jpg
Or, build a Docker image for deployment:
$ cog build -t my-colorization-model
--> Building Docker image...
--> Built my-colorization-model:latest
Finally, push your model to Replicate, and you can run it in the cloud with a few lines of code:
$ cog push
Pushed model to replicate.com/your-username/my-colorization-model
import replicate
output = replicate.run(
    "your-username/my-colorization-model:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
    input={"image": open("input.jpg", "rb")},
)
Push a model, or learn more about Cog
03 Scale
Deploying machine learning models at scale is horrible. If you've tried, you know. API servers, weird dependencies, enormous model weights, CUDA, GPUs, batching. If you're building a product fast, you don't want to be dealing with this stuff.
Replicate makes it easy to deploy machine learning models. You can use open-source models off the shelf, or you can deploy your own custom, private models at scale.
Automatic API
Define your model with Cog, and we'll automatically generate a scalable, standards-based API server for it and deploy it on a big cluster of GPUs.
Automatic scale
If you get a ton of traffic, Replicate scales up automatically to handle the demand. If you don't get any traffic, we scale down to zero and don't charge you a thing.