End to End MLOps Basics // Raviraja Ganta // MLOps Meetup #82

MLOps Community Meetup #82! Last Wednesday we talked to Raviraja Ganta, Founding Engineer - NLP at Enterpret.

// Abstract
MLOps, or DevOps for machine learning, enables data science and IT teams to collaborate and increases the pace of model development and deployment through monitoring, validation, and governance of machine learning models. To understand MLOps, we must first understand the ML system lifecycle, from developing ML models to deploying and monitoring them.

// Bio
Raviraja is currently working at Enterpret as a Founding Engineer - NLP. His interests are in unsupervised algorithms, semantic similarity, and productionising NLP models. Raviraja follows the latest research in the NLP domain. Besides work, he likes cooking 🥘, cycling 🚴‍♀️, and K-dramas 🎥.

// Related links
https://github.com/graviraja/MLOps-Ba...
https://gravirajag.dev/

---------- ✌️ Connect With Us ✌️ ----------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, Feature Store, Machine Learning Monitoring, and blogs: https://mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Raviraja on LinkedIn: /ravirajag

Timestamps:
[00:00] Introduction to Raviraja Ganta
[03:05] Basics of End to End MLOps
[03:20] Raviraja's background
[03:46] Agenda
[04:04] Why MLOps?
[04:33] Survey by ML practitioners at Full Stack Deep Learning
[05:16] What is MLOps?
[05:49] MLOps Lifecycle
[06:17] ML Development
[06:41] Training Operationalization
[07:03] Continuous Training
[07:34] Model Deployment
[07:53] Prediction Serving
[08:28] Continuous Monitoring
[08:48] Data and Model Management
[09:37] Disclaimer!!
[10:12] ML Development
[10:33] ML Development - PyTorch Lightning
[12:00] Model Monitoring
[12:46] Model Monitoring - Weights and Biases
[13:32] Configuration Management
[14:42] Configuration Management - Hydra
[15:28] Data and Model Management
[15:56] Data and Model Management - DVC
[16:56] Model Packaging
[18:07] Model Packaging - ONNX
[18:46] Code Packaging
[19:10] Code Packaging - Docker
[19:40] Continuous Integration and Continuous Delivery (Deployment) - CI/CD
[20:17] CI/CD - GitHub Actions
[20:49] Container Registry
[21:30] Container Registry - ECR
[24:53] Model Deployment
[24:07] Model Deployment - Serverless
[26:45] Choosing the best tool among those with the same functionality
[29:15] DVC's sister tool (CML) working with GitHub Actions
[32:07] Updating test data
[34:11] AWS Lambda vs containers for a serverless inference service
[38:00] AWS Lambda vs ECS/EC2/EKS on saving money
[39:26] Model Monitoring
[40:19] Model Monitoring - Problems
[42:16] Model Monitoring - Kibana
[43:43] Summary
[45:50] Next Steps
[49:11] Complete Code
[49:42] Q&A
[50:16] Amazon SageMaker as an option
[50:57] Managing Dependencies
[51:45] Tool that solves the same problem in a different way vs optimizing efficiency with a different approach
[54:27] A single provider that offers an end-to-end solution
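The timestamps above outline a concrete toolchain (PyTorch Lightning, Weights and Biases, Hydra, DVC, ONNX, Docker, GitHub Actions, ECR, serverless deployment). As a rough illustration of the "ML Development - PyTorch Lightning" step, here is a minimal LightningModule and Trainer loop; the class name, model, and dummy data are assumptions made for illustration and are not taken from the talk or the MLOps-Basics repository.

# Illustrative sketch only: a minimal LightningModule + Trainer loop, standing in for
# the "ML Development - PyTorch Lightning" step. Names and data are placeholders.
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class TextClassifier(pl.LightningModule):
    def __init__(self, input_dim: int = 768, num_classes: int = 2, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()           # records hparams for checkpoints/loggers
        self.head = nn.Linear(input_dim, num_classes)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.head(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        self.log("train_loss", loss)          # picked up by whatever logger is attached (e.g. W&B)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)


if __name__ == "__main__":
    # Random tensors just to make the sketch runnable end to end.
    x = torch.randn(64, 768)
    y = torch.randint(0, 2, (64,))
    loader = DataLoader(TensorDataset(x, y), batch_size=16)
    trainer = pl.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
    trainer.fit(TextClassifier(), loader)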

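For the "Model Packaging - ONNX" step, the general idea is to export the trained PyTorch model to an ONNX graph so the serving image only needs a lightweight runtime rather than the full training stack. A minimal sketch under that assumption; the stand-in model, file name, and tensor names are hypothetical, not the talk's actual artifacts.

# Hypothetical packaging step: export a trained PyTorch module to ONNX and sanity-check
# it with onnxruntime. The tiny linear model, file name, and tensor names are placeholders;
# in practice you would load the checkpoint produced by the training step.
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Linear(768, 2)          # stand-in for the trained classifier
model.eval()

dummy_input = torch.randn(1, 768)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},
)

# Quick check that the exported graph loads and produces the expected output shape.
session = ort.InferenceSession("model.onnx")
logits = session.run(None, {"features": dummy_input.numpy()})[0]
print(logits.shape)                # -> (1, 2)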