Getting creative with tree-based models, some of their less explored settings | Numerai Quant Club

Getting creative with tree-based models, and some of their less explored settings. Notebook: https://github.com/numerai/quant-club...

Join Numerai's Chief Scientist and Minister of Research, Michael Oliver, as he digs deep into the world of Numerai, research, data, and everything in between. This will be a recurring event through 2023. More details: https://forum.numer.ai/t/numerai-quan...

[00:30] Any advice for jumping back in after a break from the TRN? Can you explain the neutralization process the meta model predictions undergo for the main tournament?
[03:20] Is there development on more metrics for figuring out TC, Corr, etc.?
[06:10] Can you summarize why the change to dailies? Is there an update on current max capacity/predicted max in the future?
[08:58] Overview: getting creative with tree-based models
[10:14] Looking at monotone constraints
[12:40] Looking at linear tree settings
[14:43] Different ways to look at what trees are doing (making a linear model that functions as a decision tree)
[16:12] Discretized features
[17:05] Fitting a decision tree
[20:25] Why would you want to do this
[21:13] Analogy to DNNs
[22:44] Looking at leaf weights
[24:33] What OneHotEncoder looks like
[29:02] Linear regression weights
[29:29] Clarification on terminal nodes per tree/index between 0-64?
[31:22] Shrinkage as a linear regression technique, pulling everything towards zero by the same amount, similar to L1
[34:05] How well do the shrunken weights do?
[35:50] Looking at how many data points are in each bucket
[39:38] Looking at performance
[41:27] Next week's overview: using this representation to get insights/predictions
[43:34] Could fitting a distribution for samples within each of the leaves be useful for estimating uncertainty of predictions? Will we get similar performance by setting the number of instances in each leaf instead of regularizing the linear regression at the end?
[44:54] Why not just use L1? Moving a node towards its parent node's value vs. towards zero.
Want to join in on the hardest #datascience competition on the planet? Or are you a quant with your own data?
https://rocketchat.numer.ai
https://docs.numer.ai/tournament/learn
Numerai: https://numer.ai
Numerai Signals: https://signals.numer.ai
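The core idea of the talk's middle section ([16:12]-[31:22]) — that a fitted tree's prediction is just a linear model over one-hot encoded leaf membership, whose weights can then be re-fit with shrinkage — can be sketched with scikit-learn. Everything here (data, 64-leaf limit, and using Ridge as a stand-in for the talk's shrinkage scheme) is an illustrative assumption.

```python
# Sketch: a decision tree as a linear model over one-hot leaf
# indicators, with the leaf weights re-fit under shrinkage.
# Synthetic data; Ridge is a stand-in for the talk's shrinkage.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] * X[:, 1] + rng.normal(scale=0.1, size=2000)

# Fit a tree capped at 64 terminal nodes (the leaf count mentioned
# in the talk's Q&A is an assumption here).
tree = DecisionTreeRegressor(max_leaf_nodes=64, random_state=0).fit(X, y)

# Each sample lands in exactly one leaf; one-hot encode the leaf index.
leaves = tree.apply(X).reshape(-1, 1)
onehot = OneHotEncoder().fit_transform(leaves).toarray()

# A linear regression on the indicators recovers the leaf values;
# the L2 penalty shrinks every leaf weight toward zero.
shrunk = Ridge(alpha=10.0, fit_intercept=False).fit(onehot, y)
print(shrunk.coef_[:5])  # a few shrunken leaf weights
```

With the penalty set to (near) zero, the recovered weights are simply each leaf's mean target value, i.e. the tree's own predictions; increasing the penalty pulls every leaf weight toward zero.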