Soft Margin Support Vector Machine SVM || Lesson 84 || Machine Learning || Learning Monkey || (4 years ago)






#machinelearning #learningmonkey

In this class, we discuss the Soft Margin Support Vector Machine (SVM). In the soft-margin SVM we no longer assume the data are linearly separable, so positive and negative points can fall on either side of the separating line. We therefore need a new optimization problem, one that considers not only maximizing the margin but also a loss function.

First, we define the loss function. Assume a sample dataset for which our support vector machine has identified a line separating the positive and negative data. If a positive point falls on the side where its functional margin is negative, the point is misclassified and incurs a loss. The line passing through the nearest positive point is called the positive plane, and the line passing through the nearest negative point is called the negative plane. We measure the loss of positive points from the positive plane, and likewise the loss of negative points from the negative plane in the opposite direction.

A point with yi(wT xi + b) positive and above one is correctly classified; otherwise it is not. How do we calculate the loss? Let zi = yi(wT xi + b). If zi is greater than one, the loss is zero; if it is less than one, the loss is 1 - zi. Compactly, the loss function is max(0, 1 - zi): when zi is greater than one, max(0, 1 - zi) is zero, and when it is less than one, max(0, 1 - zi) equals 1 - zi. This loss is called the hinge loss, so the loss function of the support vector machine is the hinge loss.

The optimization problem is to minimize the sum of the norm of w and the total hinge loss over all points.

Link for playlists: / @learningmonkey
Link for our website: https://learningmonkey.in
Follow us on Facebook @ / learningmonkey
Follow us on Instagram @ / learningmonkey1
Follow us on Twitter @ / _learningmonkey
Mail us @ [email protected]
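The hinge loss and the soft-margin objective described above can be sketched in a few lines of NumPy. The names here (hinge_loss, soft_margin_objective) and the regularization weight C are illustrative choices, not anything defined in the lesson; the 0.5 factor on the norm term is the conventional form of the objective:

```python
import numpy as np

def hinge_loss(w, b, X, y):
    # z_i = y_i * (w . x_i + b); loss_i = max(0, 1 - z_i)
    z = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - z)

def soft_margin_objective(w, b, X, y, C=1.0):
    # Minimize 0.5 * ||w||^2 plus C times the summed hinge losses.
    # C (an assumed hyperparameter) trades margin width against loss.
    return 0.5 * np.dot(w, w) + C * hinge_loss(w, b, X, y).sum()

# Toy example: separating line x1 = 0, i.e. w = [1, 0], b = 0.
w = np.array([1.0, 0.0])
b = 0.0
X = np.array([[2.0, 0.0],    # positive point beyond the positive plane
              [0.5, 0.0],    # positive point inside the margin
              [-2.0, 0.0]])  # negative point beyond the negative plane
y = np.array([1.0, 1.0, -1.0])

print(hinge_loss(w, b, X, y))             # [0.  0.5 0. ]
print(soft_margin_objective(w, b, X, y))  # 0.5 * 1 + 1.0 * 0.5 = 1.0
```

Note how the point inside the margin (zi = 0.5 < 1) contributes loss 1 - 0.5 = 0.5, while the two points beyond their planes (zi = 2 > 1) contribute zero, exactly as the lesson describes.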
