Model Serving Made Easy

Sep 23, 2021

BentoML is a flexible, high-performance framework for serving, managing, and deploying machine learning models. BentoML bridges the gap between Data Science and DevOps. By providing a standard interface for describing a prediction service, BentoML abstracts away how to run model inference efficiently and how model serving workloads can integrate with cloud infrastructures. See how it works!



import bentoml
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor

@bentoml.artifacts([SklearnModelArtifact('model')])
class SklearnModelService(bentoml.BentoService):
    @bentoml.api(input=DataframeInput(), batch=True)
    def predict(self, df):
        return self.artifacts.model.predict(df)

def main():
    x, y = load_boston(return_X_y=True)
    estimator = RandomForestRegressor(), y)

    service = SklearnModelService()
    service.pack('model', estimator)  # writes the bundle to the local BentoML repository

if __name__ == '__main__':

Running the script saves a BentoService bundle:
[2021-09-23 10:21:38,391] INFO - BentoService bundle 'SklearnModelService:20210923102137_1C3285' saved to: /home/mark/bentoml/repository/SklearnModelService/20210923102137_1C3285

Go to /home/mark/bentoml/repository/SklearnModelService/20210923102137_1C3285, build a Docker image from the bundle, and run it:

docker run -p 5000:5000 $(docker build -q /home/mark/bentoml/repository/SklearnModelService/20210923102137_1C3285)

Try fetching a prediction over REST:

import requests

N = 13  # number of features per row (the Boston dataset has 13)

if __name__ == '__main__':
    # the /predict endpoint is named after the service's API method
    response ='http://localhost:5000/predict', json=[[1] * N])
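With `DataframeInput` and `batch=True`, the request body is a JSON list of rows, each row becoming one line of the DataFrame passed to `predict()`. Since the Boston housing data has 13 feature columns, every row needs 13 values. A minimal sketch of building such a payload (the two-row batch here is purely illustrative):

```python
import json

N_FEATURES = 13  # the Boston housing dataset has 13 feature columns

# Each inner list is one DataFrame row; the service returns one
# prediction per row.
batch = [[1.0] * N_FEATURES, [0.5] * N_FEATURES]
payload = json.dumps(batch)
```

Sending `batch` as the `json=` argument of `` produces exactly this payload.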
A Swagger UI for the service is available at http://localhost:5000.
