
Integrating PyTorch models into Jaqpot with ONNX

Alex Arvanitidis (@alarv) · 21 days ago

Jaqpot already supports scikit-learn models as first-class citizens. Now we also support plain PyTorch models, including simple classification and regression networks, by converting them to ONNX and deploying them through our API.

This post explains how to train a standard PyTorch regression model (not a GNN or vision model) and upload it to Jaqpot using ONNX.


Why ONNX?

PyTorch models can be complex to serve due to dynamic graphs and tight library coupling. ONNX allows you to export a trained PyTorch model to a standard format, which Jaqpot can safely load and use for predictions.
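
In the full example below, jaqpotpy's TorchONNXModel takes care of this conversion for you, but the underlying idea is the standard torch.onnx export. Here is a rough, self-contained sketch with a placeholder model (not Jaqpot's actual internals):

import torch
import torch.nn as nn

# Placeholder model for illustration; in practice this is your trained network
net = nn.Sequential(nn.Linear(5, 1))
net.eval()

# An example input is needed so the exporter can trace the graph
dummy_input = torch.randn(1, 5)

# Export to a portable .onnx file; dynamic_axes keeps the batch dimension flexible
torch.onnx.export(
    net,
    dummy_input,
    "regression_net.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

Jaqpot's backend can then load that file for predictions without needing your Python training code.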

It also helps ensure your models remain compatible with Jaqpot’s backend even after updates.


Supported Use Cases

  • Any tensor input/output
  • Images as input/output
  • Any torch model + preprocessor that can be exported to ONNX format

Jaqpot already supports image models (e.g., torchvision-style CNNs) via ONNX. We'll soon publish a separate example showing how to upload and deploy a convolutional neural network with image preprocessing.
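
The pattern for image models is the same as for tabular ones; only the shape of the example input changes. A minimal sketch (placeholder model and a 224×224 RGB input, not the upcoming tutorial itself):

import torch
import torchvision

# Placeholder CNN for illustration; any torchvision-style model that exports cleanly works
cnn = torchvision.models.resnet18(weights=None)
cnn.eval()

# Images are just 4D tensors: (batch, channels, height, width)
dummy_image = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    cnn,
    dummy_image,
    "cnn.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},
)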


Example: Tabular Regression Model

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import numpy as np
from sklearn.preprocessing import StandardScaler
from jaqpot_api_client import ModelTask, Feature, FeatureType, ModelVisibility
from jaqpotpy.models.torch_models.torch_onnx import TorchONNXModel
from jaqpotpy import Jaqpot


# Create a simple dataset class for regression
class RegressionDataset(Dataset):
    def __init__(self, X, y):
        self.X = X
        self.y = y

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]


# Define the neural network model
class RegressionNet(nn.Module):
    def __init__(self):
        super(RegressionNet, self).__init__()
        self.layers = nn.Sequential(
            nn.Linear(5, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1)
        )

    def forward(self, x):
        return self.layers(x)


# Generate synthetic data
np.random.seed(42)
X = np.random.randn(1000, 5)
y = 2 * X[:, 0] + 3 * X[:, 1] - X[:, 2] + 0.5 * X[:, 3] + np.random.randn(1000) * 0.1

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_tensor = torch.FloatTensor(X_scaled)
y_tensor = torch.FloatTensor(y).reshape(-1, 1)

# Create dataset and dataloader
dataset = RegressionDataset(X_tensor, y_tensor)
train_loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Train the model
model = RegressionNet()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    model.train()
    total_loss = 0
    for batch_X, batch_y in train_loader:
        optimizer.zero_grad()
        outputs = model(batch_X)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    if (epoch + 1) % 10 == 0:
        print(f"Epoch [{epoch+1}/100], Loss: {total_loss/len(train_loader):.4f}")

# Prepare for Jaqpot deployment
input_tensor = torch.randn(1, 5)

independent_features = [
    Feature(key=f"feature_{i}", name=f"Feature {i}", feature_type=FeatureType.FLOAT)
    for i in range(5)
]
dependent_features = [Feature(key="target", name="Target", feature_type=FeatureType.FLOAT)]

jaqpot_model = TorchONNXModel(
    model=model,
    input_example=input_tensor,
    task=ModelTask.REGRESSION,
    independent_features=independent_features,
    dependent_features=dependent_features,
    onnx_preprocessor=None,
)

jaqpot = Jaqpot()
jaqpot.login()
jaqpot_model.deploy_on_jaqpot(
    jaqpot=jaqpot,
    name="PyTorch Regression Model",
    description="A simple neural network for regression with 5 input features",
    visibility=ModelVisibility.PUBLIC,
)
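
Before deploying, a quick local sanity check that the exported graph matches the PyTorch model can be reassuring. This step is optional and not part of the Jaqpot workflow; the sketch below reuses model, input_tensor and X_tensor from the example above and assumes onnxruntime is installed:

import numpy as np
import onnxruntime as ort

# Export the trained model manually, just for the comparison
model.eval()
torch.onnx.export(
    model,
    input_tensor,
    "regression_net.onnx",
    input_names=["input"],
    output_names=["output"],
)

session = ort.InferenceSession("regression_net.onnx")
sample = X_tensor[:1]  # one scaled row, shape (1, 5)

with torch.no_grad():
    torch_out = model(sample).numpy()
onnx_out = session.run(None, {"input": sample.numpy()})[0]

# The two outputs should agree up to small floating-point differences
print(np.allclose(torch_out, onnx_out, atol=1e-5))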

What's Next?

We're working on:

  • Example for convolutional networks (images via torchvision)

If you're training PyTorch models for standard ML tasks, you can now put them on Jaqpot without much boilerplate.

Check out more examples on GitHub. This exact model is already deployed and can be tested live on app.jaqpot.org.

📚 For more, check out the docs