Solving the Model Serving Component of the MLOps Stack with Chaoyu Yang
October 26, 2022
In this episode of MLOps Live, Sabine and Stephen are joined by Chaoyu Yang, Co-Founder & CEO of BentoML. They discuss model deployment setups and trade-offs for reasonable-scale teams, the prevalence of open-source platforms, and how early-stage and established teams can adopt model serving best practices.
Since MLOps is still in its infancy, established best practices and model deployment examples for operationalizing machine learning solutions are hard to find; the right approach can vary with factors such as the type of business use case, the size and structure of the organization, and the availability of resources. Whatever the deployment pipeline, Chaoyu shares insights into the model deployment struggles that ML engineers and their teams face and the solutions they have implemented to address the model serving component.
Subscribe to our YouTube channel to watch this episode!