NEW TextGrad by Stanford: Better than DSPy

As the Stanford authors describe it: in the TEXTGRAD framework, each AI system is transformed into a computation graph, where variables are the inputs and outputs of complex (not necessarily differentiable) function calls. Feedback to the variables, dubbed "textual gradients", is provided in the form of informative, interpretable natural-language criticism describing how a variable should be changed to improve the system. These gradients are propagated through arbitrary functions, such as LLM API calls, simulators, or external numerical solvers.

Stanford University's latest research, TextGrad, builds on ideas from DSPy, an earlier self-improving machine learning framework, to enable automatic "differentiation" via text. Unlike traditional automatic differentiation (e.g., AutoGrad), which has access to the tensors inside neural network layers, TextGrad operates without such access. Instead, it offers a PyTorch-style API that works with proprietary large language models (LLMs) such as GPT-4o (GPT-4 Omni), focusing on prompt engineering to optimize specific LLM tasks. By orchestrating API calls between LLMs, TextGrad automates the search for the best prompts, improving logical reasoning and pushing performance beyond DSPy.

TextGrad's functionality mirrors AutoGrad, adapted for text. In a traditional neural network, AutoGrad records operations during the forward pass and computes gradients during the backward pass, optimizing parameters via the chain rule of calculus. TextGrad applies the same pattern to text: in a feedback loop, a more capable LLM critiques and refines the prompts and outputs produced by a less capable LLM (a minimal sketch of this loop follows at the end of this description). The framework is open source, ships as a PyTorch-like Python package, and includes several Jupyter notebooks showing how to apply the method to different tasks, with notable performance gains over DSPy.

The practical implications are significant. In prompt optimization, for instance, an initial prompt achieving 77% accuracy can be refined with TextGrad to reach 92% accuracy. The system is versatile: beyond prompt optimization, it applies to domains such as code generation (code LLMs) and molecular design. By harnessing LLMs' ability to evaluate and improve their own outputs, TextGrad boosts performance on complex tasks, although the capability gap between the interacting LLMs must be managed carefully to avoid failures. TextGrad represents a significant step forward in AI research, promising more efficient and effective optimization of multi-agent AI systems.

Paper (all rights with the authors): "TextGrad: Automatic 'Differentiation' via Text", https://arxiv.org/pdf/2406.07496
Recommended: 4 Colab notebooks for TextGrad by Stanford (Python, PyTorch): https://github.com/zou-group/textgrad

#airesearch #promptengineering #newtechnology
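To make the AutoGrad analogy concrete, here is a minimal sketch of a TextGrad test-time optimization loop in Python. It closely follows the quick-start example in the zou-group/textgrad repository linked above; it assumes the `textgrad` package is installed and an OpenAI API key is configured, and exact class names may differ between library versions.

```python
import textgrad as tg

# The backward engine is the stronger LLM that writes the textual gradients
# (the natural-language critiques) during loss.backward().
tg.set_backward_engine("gpt-4o", override=True)

# The forward model whose output we want to improve.
model = tg.BlackboxLLM("gpt-4o")

# Variables are nodes in the computation graph. requires_grad=False means
# we optimize the answer, not the question itself.
question = tg.Variable(
    "If it takes 1 hour to dry 25 shirts under the sun, "
    "how long will it take to dry 30 shirts under the sun?",
    role_description="question to the LLM",
    requires_grad=False,
)

# Forward pass: the model produces an answer variable.
answer = model(question)
answer.set_role_description("concise and accurate answer to the question")

# Textual "loss": an instruction telling the critic LLM how to judge the answer.
loss_fn = tg.TextLoss(
    "Evaluate the answer to the question. Be smart, logical, and very "
    "critical. Provide concise feedback."
)

# Textual Gradient Descent over the answer variable.
optimizer = tg.TGD(parameters=[answer])

loss = loss_fn(answer)   # forward: critique the current answer
loss.backward()          # backward: propagate textual gradients to `answer`
optimizer.step()         # update: rewrite `answer` using the feedback

print(answer.value)
```

Here `loss.backward()` asks the backward engine for natural-language feedback on the answer, and `optimizer.step()` rewrites the `answer` variable according to that feedback: the textual analogue of a gradient step. The same pattern, run over a training set with a system prompt as the parameter instead of a single answer, is what the prompt-optimization notebooks use to lift accuracy as described above.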
