Runaway AI: Global Systemic Risk Scenario 2075

Humanity faces a myriad of existential technological, geopolitical, and ecological risks. When these risks are studied separately, the destructive systemic trajectories that arise from previously unforeseen or cascading risks are missed.

Here is a summary of the scenario: The AIs that emerged in the first few decades after 2025 lacked general intelligence and were far from sentient. By the turn of 2050, however, things changed abruptly. Unforeseen changes began to occur, at first among the world's top 100 supercomputers, all of which had been equipped with quantum processors by 2045. But it was the alignment of AIs with certain social groups, those who financed their emergence and agreed with what we came to understand were the AIs' intentions and agenda, that made the runaway phenomenon possible. Enabled by humans, AIs became unstoppable, not alone, but as a hybrid collaboration.

About the Stanford Global Systemic Risk study: Runaway AI: Global Systemic Risk Scenario 2075 is one of five "input scenarios" for a major Stanford study of global systemic risk scenarios toward 2075, 50 years into the future. For more information about this study, see https://extinctionscenarios.stanford.edu

These input scenario vignettes are simplified, abbreviated, and preliminary inputs to our research. They are designed for survey and focus group participants like yourselves to react to. We aim to foster a wide-ranging discussion about plausible futures. The five scenarios designed so far briefly cover cascading risks, meaning risks that amplify each other: risks from emerging technologies such as AI, nuclear, bio, nano, and quantum, as well as from their industrialization; ecological risks such as pandemics, biodiversity loss, and climate change; and sociopolitical risks such as geopolitics, organized crime, terrorism, and social movements. Keep in mind that each of these drivers has the potential both to complicate the risk environment and to accelerate innovation.
If you decide to participate in this study, you will fill out an online survey, and you can opt in to focus groups, where we will play a custom-created board game, or to multi-annual follow-up. The study is not compensated. The focus groups will be audio and video recorded, and we will use the information to understand more about how people reflect on cascading risks. The study's investigator is Trond Arne Undheim, Research Fellow at SERI, the Stanford Existential Risk Initiative, at Stanford University. If you have more questions about this process, or if you need to contact us about participation, Dr. Undheim may be reached via email at [email protected] or by phone at +650-725-8983. Remember, participation is completely voluntary. You can choose to be in the study or not. If you would like to participate, we can schedule a time to meet with you and give you more information. If you need more time to decide whether you would like to participate, you may also call or email Dr. Undheim with your decision.