🚀 Deploy an ML Model on Hugging Face Spaces in 10 Minutes!
Hey there, data nerd! 🤓
Ever felt like your machine learning models are hiding away on your local machine, waiting for their big moment? Well, today’s the day we give them a stage to shine – Hugging Face Spaces! In about 10 minutes, you’ll have your model deployed and ready to impress. Let’s dive right in!
💡 Prerequisites
- An ML model (duh!)
- A Hugging Face account (create one if you haven’t already)
- Some basic Python knowledge
- A touch of patience (but not too much, we’re fast!)
🛠️ The Tutorial
1. Prepare your model
First things first, make sure your model is saved in the right format. For this example, let’s use a simple text classification model trained with Hugging Face’s Transformers library.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load your tokenizer and fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = AutoModelForSequenceClassification.from_pretrained("path/to/your/model")

# Save both in the standard Transformers format (config.json + weights + tokenizer files)
model.save_pretrained("my_model")
tokenizer.save_pretrained("my_model")
```
💡 Tip: If you trained with a different framework, that’s fine too – Spaces can host pretty much anything, as long as your app.py knows how to load the files you upload.
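After running the snippet above, the my_model folder should contain something like the listing below (exact filenames vary with your transformers version – newer versions write model.safetensors instead of pytorch_model.bin, and fast tokenizers add a tokenizer.json):

```
my_model/
├── config.json
├── model.safetensors       # or pytorch_model.bin on older versions
├── tokenizer_config.json
├── vocab.txt
└── special_tokens_map.json
```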
2. Create your Space
Log into your Hugging Face account and go to Spaces (https://huggingface.co/spaces). Click “Create new Space”.
Name it something awesome (like “World-Dominating-Model”), then pick an SDK – Gradio is the easiest for a quick demo. Choose whether the Space is public or private, and click “Create Space”.
3. Upload your model
Under the hood, a Space is just a git repository. In your new Space, open the “Files” tab, click “Add file” → “Upload files”, and upload the my_model folder you saved earlier (or clone the Space locally and push the files with git).
🚨 Warning: Make sure your Space’s visibility matches your intentions! You can change it at any time in the “Settings” tab.
4. Create an inference script
Now, let’s write the script Spaces will actually run. With the Gradio SDK, Spaces looks for a file called app.py and serves whatever interface it launches. Create app.py and add the following:
```python
import torch
import gradio as gr
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the model files you uploaded to the Space
tokenizer = AutoTokenizer.from_pretrained("my_model")
model = AutoModelForSequenceClassification.from_pretrained("my_model")

def predict(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    label_id = torch.argmax(outputs.logits, dim=-1).item()
    # Map the winning class index to a human-readable label
    return model.config.id2label[label_id]

# Spaces serves whatever this interface launches
demo = gr.Interface(fn=predict, inputs="text", outputs="text")
demo.launch()
```
💡 Tip: You can customize the predict function to fit your specific use case.
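If you’re curious what that argmax over the logits is actually doing, here’s a dependency-free sketch. The numbers are made up, not real model output: softmax squashes the raw logits into probabilities, and we pick the label with the highest one.

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating, for numerical stability
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical [NEGATIVE, POSITIVE] scores from a sentiment model
logits = [-1.2, 3.4]
probs = softmax(logits)
labels = ["NEGATIVE", "POSITIVE"]
prediction = labels[probs.index(max(probs))]
print(prediction)  # POSITIVE
```

Returning probabilities instead of just the winning label is a nice upgrade if you want your demo to show how confident the model is.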
5. Upload and run your script
Back in the “Files” tab, upload your app.py the same way you uploaded the model. Also add a requirements.txt listing your Python dependencies (like transformers and torch) – Spaces installs them automatically.
There’s no “Run” button to press: every upload or push triggers a fresh build of your Space, and you can follow its progress in the “Logs” tab.
🚨 Warning: If your model needs API keys or other environment variables, set them in the “Settings” tab under “Variables and secrets” rather than hard-coding them in app.py.
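For the app.py above, a minimal requirements.txt could look like the sketch below. The unpinned names are just placeholders – in practice, pin the versions you actually tested with. Gradio itself comes preinstalled on Gradio SDK Spaces, so you shouldn’t need to list it:

```
transformers
torch
```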
6. Test your deployment
Now, let’s test if everything works! Once the build finishes, the “App” tab shows your live Gradio interface. Type some text into the input box and hit submit – your model should return a prediction based on what it was trained for.
🎉 Expected Result
You should now have a live, deployed machine learning model that’s ready to accept inputs and make predictions! High-five your monitor (or cat, if you’re feeling lonely) – you just went from local hero to global superstar in 10 minutes!
🌟 Going Further
Now that you’ve got the hang of it, why not try:
- 🤝 Deploying a model with custom data processing steps.
- 🎨 Creating an awesome UI for your Space using HTML, CSS, and JavaScript.
- ⚒️ Building a complete ML web app using Hugging Face’s Inference API.
Happy coding (and showing off)! ✌️🤘