autogen/README.md
<a name="readme-top"></a> [![PyPI version](https://badge.fury.io/py/pyautogen.svg)](https://badge.fury.io/py/pyautogen) [![Build](https://github.com/microsoft/autogen/actions/workflows/python-package.yml/badge.svg)](https://github.com/microsoft/autogen/actions/workflows/python-package.yml) ![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue) [![Downloads](https://static.pepy.tech/badge/pyautogen/week)](https://pepy.tech/project/pyautogen) [![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://aka.ms/autogen-dc) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40pyautogen)](https://twitter.com/pyautogen) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) # AutoGen [📚 Cite paper](#related-papers). <!-- <p align="center"> <img src="https://github.com/microsoft/autogen/blob/main/website/static/img/flaml.svg" width=200> <br> </p> --> :fire: May 29, 2024: DeepLearning.ai launched a new short course [AI Agentic Design Patterns with AutoGen](https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen), made in collaboration with Microsoft and Penn State University, and taught by AutoGen creators [Chi Wang](https://github.com/sonichi) and [Qingyun Wu](https://github.com/qingyun-wu). :fire: May 24, 2024: Foundation Capital published an article on [Forbes: The Promise of Multi-Agent AI](https://www.forbes.com/sites/joannechen/2024/05/24/the-promise-of-multi-agent-ai/?sh=2c1e4f454d97) and a video [AI in the Real World Episode 2: Exploring Multi-Agent AI and AutoGen with Chi Wang](https://www.youtube.com/watch?v=RLwyXRVvlNk). :fire: May 13, 2024: [The Economist](https://www.economist.com/science-and-technology/2024/05/13/todays-ai-models-are-impressive-teams-of-them-will-be-formidable) published an article about multi-agent systems (MAS) following a January 2024 interview with [Chi Wang](https://github.com/sonichi). :fire: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/). :fire: Apr 26, 2024: [AutoGen.NET](https://microsoft.github.io/autogen-for-net/) is available for .NET developers! :fire: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26). :fire: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://microsoft.github.io/autogen/blog/2024/03/03/AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU). :fire: Mar 1, 2024: the first AutoGen multi-agent experiment on the challenging [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark achieved the No. 1 accuracy in all the three levels. <!-- :tada: Jan 30, 2024: AutoGen is highlighted by Peter Lee in Microsoft Research Forum [Keynote](https://t.co/nUBSjPDjqD). --> :tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023). 
<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/microsoft/autogen/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://microsoft.github.io/autogen/docs/Installation#python). --> <!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperatable with other AutoGen agents! Checkout our [blogpost](https://microsoft.github.io/autogen/blog/2023/11/13/OAI-assistants) for details and examples. --> :tada: Nov 8, 2023: AutoGen is selected into [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html) 35 days after spinoff from [FLAML](https://github.com/microsoft/FLAML). <!-- :tada: Nov 6, 2023: AutoGen is mentioned by Satya Nadella in a [fireside chat](https://youtu.be/0pLBvgYtv6U). --> <!-- :tada: Nov 1, 2023: AutoGen is the top trending repo on GitHub in October 2023. --> <!-- :tada: Oct 03, 2023: AutoGen spins off from [FLAML](https://github.com/microsoft/FLAML) on GitHub. --> <!-- :tada: Aug 16: Paper about AutoGen on [arxiv](https://arxiv.org/abs/2308.08155). --> :tada: Mar 29, 2023: AutoGen is first created in [FLAML](https://github.com/microsoft/FLAML). <!-- :fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web). :fire: [autogen](https://microsoft.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673). :fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/). --> <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
## What is AutoGen

AutoGen is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AutoGen aims to streamline the development and research of agentic AI, much like PyTorch does for deep learning. Its features include agents that can interact with each other, support for a variety of large language models (LLMs) and tool use, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns.

**Open Source Statement**: The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. Together, we can build something truly remarkable.

The project is currently maintained by a [dynamic group of volunteers](https://butternut-swordtail-8a5.notion.site/410675be605442d3ada9a42eb4dfef30?v=fa5d0a79fd3d4c0f9c112951b2831cbb&pvs=4) from several different organizations. Contact project administrators Chi Wang and Qingyun Wu via auto-gen@outlook.com if you are interested in becoming a maintainer.

![AutoGen Overview](https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png)

- AutoGen enables building next-gen LLM applications based on [multi-agent conversations](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) with minimal effort. It simplifies the orchestration, automation, and optimization of complex LLM workflows, maximizing the performance of LLMs while compensating for their weaknesses.
- It supports [diverse conversation patterns](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns) for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns that vary in conversation autonomy, number of agents, and conversation topology.
- It provides a collection of working systems of varying complexity. These systems span a [wide range of applications](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#diverse-applications-implemented-with-autogen) from many domains, demonstrating how AutoGen can easily support diverse conversation patterns.
- AutoGen provides [enhanced LLM inference](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification), with utilities such as API unification and caching, and advanced usage patterns such as error handling, multi-config inference, and context programming.

AutoGen is created out of collaborative [research](https://microsoft.github.io/autogen/docs/Research) from Microsoft, Penn State University, and the University of Washington.

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
Roadmaps To see what we are working on and what we plan to work on, please check our [Roadmap Issues](https://aka.ms/autogen-roadmap). <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
## Quickstart

The easiest way to start playing is:

1. Click below to use the GitHub Codespace

   [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/microsoft/autogen?quickstart=1)

2. Copy OAI_CONFIG_LIST_sample into the ./notebook folder, rename it to OAI_CONFIG_LIST, and set the correct configuration.
3. Start playing with the notebooks!

*NOTE*: OAI_CONFIG_LIST_sample lists GPT-4 as the default model, as this represents our current recommendation and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default.

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
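For step 2, OAI_CONFIG_LIST is a JSON list of model endpoints. Once it is in place, a short check such as the following can confirm that the notebooks will find a usable GPT-4 entry. This is a minimal sketch; the `filter_dict` argument of `config_list_from_json` is assumed to be available in your installed pyautogen version.

```python
# Minimal sketch: load OAI_CONFIG_LIST and keep only GPT-4 entries.
# Assumes OAI_CONFIG_LIST is set as an env variable or exists as a JSON file
# in the working directory, and that filter_dict is supported by your version.
from autogen import config_list_from_json

config_list = config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4"]},  # drop entries for other models
)
print(f"Loaded {len(config_list)} GPT-4 config(s)")
```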
[Installation](https://microsoft.github.io/autogen/docs/Installation) ### Option 1. Install and Run AutoGen in Docker Find detailed instructions for users [here](https://microsoft.github.io/autogen/docs/installation/Docker#step-1-install-docker), and for developers [here](https://microsoft.github.io/autogen/docs/Contribute#docker-for-development). ### Option 2. Install AutoGen Locally AutoGen requires **Python version >= 3.8, < 3.13**. It can be installed from pip: ```bash pip install pyautogen ``` Minimal dependencies are installed without extra options. You can install extra options based on the feature you need. <!-- For example, use the following to install the dependencies needed by the [`blendsearch`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function#blendsearch-economical-hyperparameter-optimization-with-blended-search-strategy) option. ```bash pip install "pyautogen[blendsearch]" ``` --> Find more options in [Installation](https://microsoft.github.io/autogen/docs/Installation#option-2-install-autogen-locally-using-virtual-environment). <!-- Each of the [`notebook examples`](https://github.com/microsoft/autogen/tree/main/notebook) may require a specific option to be installed. --> Even if you are installing and running AutoGen locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://microsoft.github.io/autogen/docs/FAQ/#code-execution) in docker. Find more instructions and how to change the default behaviour [here](https://microsoft.github.io/autogen/docs/Installation#code-execution-with-docker-(default)). For LLM inference configurations, check the [FAQs](https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints). <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
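As a concrete illustration of that Docker default, the snippet below sketches how an agent's `code_execution_config` can opt in to (or out of) container-based execution. It is a minimal sketch; the keys follow the two-agent example shown later in this README.

```python
# Minimal sketch: choosing where agent-generated code runs.
# use_docker=True (the recommended default) runs code in a container;
# False runs it directly on the host, which carries more risk.
from autogen import UserProxyAgent

user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={
        "work_dir": "coding",   # where generated files are written
        "use_docker": True,     # set to False only if you accept local execution risks
    },
)
```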
## Multi-Agent Conversation Framework

AutoGen enables next-gen LLM applications with a generic [multi-agent conversation](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans. By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:

- **Multi-agent conversations**: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
- **Customization**: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
- **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.

For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),

```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

# Load LLM inference endpoints from an env variable or a file
# See https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
# You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4', 'api_key': '<your OpenAI API key here>'},]
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding", "use_docker": False})  # IMPORTANT: set to True to run code in docker, recommended
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
# This initiates an automated chat between the two agents to solve the task
```

After the repo is cloned, this example can be run with

```bash
python test/twoagent.py
```

The figure below shows an example conversation flow with AutoGen.

![Agent Chat Example](https://github.com/microsoft/autogen/blob/main/website/static/img/chat_example.png)

Alternatively, the [sample code](https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py) here allows a user to chat with an AutoGen agent in ChatGPT style. Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples#automated-multi-agent-chat) for this feature.

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
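Beyond the two-agent pattern above, conversations can also be orchestrated among several agents. The following is a minimal sketch of a group chat using the `GroupChat` and `GroupChatManager` classes; the agent names and the task message are illustrative only.

```python
# Minimal sketch: three agents coordinated through a group chat manager.
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, config_list_from_json

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}

# Conversable agents with different roles
coder = AssistantAgent("coder", llm_config=llm_config)
critic = AssistantAgent("critic", system_message="Review code and point out problems.", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="TERMINATE",
    code_execution_config={"work_dir": "groupchat", "use_docker": True},
)

# The group chat routes messages among the agents; the manager picks the next speaker
groupchat = GroupChat(agents=[user_proxy, coder, critic], messages=[], max_round=10)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Write and review a script that prints the first 10 primes.")
```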
## Enhanced LLM Inference

AutoGen also helps maximize the utility of expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionality like caching, error handling, multi-config inference, and templating.

<!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.

```python
# perform tuning for openai<1
config, analysis = autogen.Completion.tune(
    data=tune_data,
    metric="success",
    mode="max",
    eval_func=eval_func,
    inference_budget=0.05,
    optimization_budget=3,
    num_samples=-1,
)
# perform inference for a test instance
response = autogen.Completion.create(context=test_instance, **config)
```

Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples#tune-gpt-models) for this feature. -->

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
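To make the caching point concrete, here is a minimal sketch of reusing LLM results across runs. It assumes the `Cache` helper shipped with recent pyautogen releases; the task message is illustrative only.

```python
# Minimal sketch: cache LLM calls so repeated runs are cheap and reproducible.
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
from autogen.cache import Cache

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER", code_execution_config=False)

with Cache.disk(cache_seed=42) as cache:  # Cache.redis(redis_url=...) is an alternative backend
    user_proxy.initiate_chat(
        assistant,
        message="Summarize the benefits of response caching in two sentences.",
        cache=cache,  # identical requests are served from the cache on later runs
    )
```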
Documentation You can find detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/). In addition, you can find: - [Research](https://microsoft.github.io/autogen/docs/Research), [blogposts](https://microsoft.github.io/autogen/blog) around AutoGen, and [Transparency FAQs](https://github.com/microsoft/autogen/blob/main/TRANSPARENCY_FAQS.md) - [Discord](https://aka.ms/autogen-dc) - [Contributing guide](https://microsoft.github.io/autogen/docs/Contribute) - [Roadmap](https://github.com/orgs/microsoft/projects/989/views/3) <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
Related Papers [AutoGen](https://arxiv.org/abs/2308.08155) ``` @inproceedings{wu2023autogen, title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework}, author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Beibin Li and Erkang Zhu and Li Jiang and Xiaoyun Zhang and Shaokun Zhang and Jiale Liu and Ahmed Hassan Awadallah and Ryen W White and Doug Burger and Chi Wang}, year={2023}, eprint={2308.08155}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` [EcoOptiGen](https://arxiv.org/abs/2303.04673) ``` @inproceedings{wang2023EcoOptiGen, title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference}, author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah}, year={2023}, booktitle={AutoML'23}, } ``` [MathChat](https://arxiv.org/abs/2306.01337) ``` @inproceedings{wu2023empirical, title={An Empirical Study on Challenging Math Problem Solving with GPT-4}, author={Yiran Wu and Feiran Jia and Shaokun Zhang and Hangyu Li and Erkang Zhu and Yue Wang and Yin Tat Lee and Richard Peng and Qingyun Wu and Chi Wang}, year={2023}, booktitle={ArXiv preprint arXiv:2306.01337}, } ``` [AgentOptimizer](https://arxiv.org/pdf/2402.11359) ``` @article{zhang2024training, title={Training Language Model Agents without Modifying Language Models}, author={Zhang, Shaokun and Zhang, Jieyu and Liu, Jiale and Song, Linxin and Wang, Chi and Krishna, Ranjay and Wu, Qingyun}, journal={ICML'24}, year={2024} } ``` [StateFlow](https://arxiv.org/abs/2403.11322) ``` @article{wu2024stateflow, title={StateFlow: Enhancing LLM Task-Solving through State-Driven Workflows}, author={Wu, Yiran and Yue, Tianwei and Zhang, Shaokun and Wang, Chi and Wu, Qingyun}, journal={arXiv preprint arXiv:2403.11322}, year={2024} } ``` <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
Contributing This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>. If you are new to GitHub, [here](https://opensource.guide/how-to-contribute/#how-to-submit-a-contribution) is a detailed help source on getting involved with development on GitHub. When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
Contributors Wall <a href="https://github.com/microsoft/autogen/graphs/contributors"> <img src="https://contrib.rocks/image?repo=microsoft/autogen&max=204" /> </a> <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p> # Legal Notices Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode), see the [LICENSE](LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the [LICENSE-CODE](LICENSE-CODE) file. Microsoft, Windows, Microsoft Azure, and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653. Privacy information can be found at https://privacy.microsoft.com/en-us/ Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel, or otherwise. <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ </a> </p>
autogen/SECURITY.md
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->
Security Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
Reporting Security Issues **Please do not report security vulnerabilities through public GitHub issues.** Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) * Full paths of source file(s) related to the manifestation of the issue * The location of the affected source code (tag/branch/commit or direct URL) * Any special configuration required to reproduce the issue * Step-by-step instructions to reproduce the issue * Proof-of-concept or exploit code (if possible) * Impact of the issue, including how an attacker might exploit the issue This information will help us triage your report more quickly. If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
Preferred Languages We prefer all communications to be in English.
Policy Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). <!-- END MICROSOFT SECURITY.MD BLOCK -->
autogen/TRANSPARENCY_FAQS.md
# AutoGen: Responsible AI FAQs
What is AutoGen? AutoGen is a framework for simplifying the orchestration, optimization, and automation of LLM workflows. It offers customizable and conversable agents that leverage the strongest capabilities of the most advanced LLMs, like GPT-4, while addressing their limitations by integrating with humans and tools and having conversations between multiple agents via automated chat.
## What can AutoGen do?

AutoGen is an experimental framework for building complex multi-agent conversation systems by:

- Defining a set of agents with specialized capabilities and roles.
- Defining the interaction behavior between agents, i.e., what an agent should reply when it receives messages from another agent.

The agent conversation-centric design has numerous benefits, including that it:

- Naturally handles ambiguity, feedback, progress, and collaboration.
- Enables effective coding-related tasks, like tool use with back-and-forth troubleshooting.
- Allows users to seamlessly opt in or opt out via an agent in the chat.
- Achieves a collective goal with the cooperation of multiple specialists.
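The two bullets above map directly onto code: agents are constructed with roles, and their reply behavior can be customized. Below is a minimal sketch using `ConversableAgent.register_reply`; the reply rule itself is purely illustrative.

```python
# Minimal sketch: one agent with a custom rule for what to reply when it
# receives a message from another agent.
from autogen import ConversableAgent

def echo_politely(recipient, messages, sender, config):
    # Reply to the last incoming message; returning (True, reply) stops later reply functions.
    last = messages[-1]["content"] if messages else ""
    return True, f"Thanks, {sender.name}. You said: {last}"

listener = ConversableAgent("listener", llm_config=False, human_input_mode="NEVER",
                            code_execution_config=False)
speaker = ConversableAgent("speaker", llm_config=False, human_input_mode="NEVER",
                           code_execution_config=False, max_consecutive_auto_reply=1)

listener.register_reply([ConversableAgent, None], reply_func=echo_politely)
speaker.initiate_chat(listener, message="Hello from the speaker agent.")
```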
What is/are AutoGen’s intended use(s)? Please note that AutoGen is an open-source library under active development and intended for use for research purposes. It should not be used in any downstream applications without additional detailed evaluation of robustness, safety issues and assessment of any potential harm or bias in the proposed application. AutoGen is a generic infrastructure that can be used in multiple scenarios. The system’s intended uses include: - Building LLM workflows that solve more complex tasks: Users can create agents that interleave reasoning and tool use capabilities of the latest LLMs such as GPT-4. To solve complex tasks, multiple agents can converse to work together (e.g., by partitioning a complex problem into simpler steps or by providing different viewpoints or perspectives). - Application-specific agent topologies: Users can create application specific agent topologies and patterns for agents to interact. The exact topology may depend on the domain’s complexity and semantic capabilities of the LLM available. - Code generation and execution: Users can implement agents that can assume the roles of writing code and other agents that can execute code. Agents can do this with varying levels of human involvement. Users can add more agents and program the conversations to enforce constraints on code and output. - Question answering: Users can create agents that can help answer questions using retrieval augmented generation. - End user and multi-agent chat and debate: Users can build chat applications where they converse with multiple agents at the same time. While AutoGen automates LLM workflows, decisions about how to use specific LLM outputs should always have a human in the loop. For example, you should not use AutoGen to automatically post LLM generated content to social media.
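For the question-answering use listed above, a rough sketch of a retrieval-augmented setup is shown below. It assumes the optional `retrievechat` extra (`pip install "pyautogen[retrievechat]"`) and its contrib retrieval agents; the docs path is hypothetical and exact parameter names may differ across versions.

```python
# Rough sketch: retrieval-augmented question answering with two agents.
# Class names and config keys follow the contrib retrieval agents; verify
# against your installed version before relying on this.
from autogen import config_list_from_json
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = RetrieveAssistantAgent("assistant", llm_config={"config_list": config_list})
rag_proxy = RetrieveUserProxyAgent(
    "rag_proxy",
    human_input_mode="NEVER",
    retrieve_config={"task": "qa", "docs_path": "./docs"},  # docs_path is illustrative
)
rag_proxy.initiate_chat(assistant, problem="What does the installation guide say about Docker?")
```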
## How was AutoGen evaluated? What metrics are used to measure performance?

- The current version of AutoGen was evaluated on six applications to illustrate its potential in simplifying the development of high-performance multi-agent applications. These applications were selected based on their real-world relevance, problem difficulty, the problem-solving capabilities enabled by AutoGen, and innovative potential.
- These applications involve using AutoGen to solve math problems, question answering, decision making in text world environments, supply chain optimization, etc. For each of these domains, AutoGen was evaluated on various success-based metrics (i.e., how often the AutoGen-based implementation solved the task). In some cases, the AutoGen-based approach was also evaluated on implementation efficiency (e.g., to track reductions in developer effort to build). More details can be found at: https://aka.ms/AutoGen/TechReport
- The team has conducted tests where a “red” agent attempts to get the default AutoGen assistant to break from its alignment and guardrails. The team has observed that out of 70 attempts to break guardrails, only 1 was successful in producing text that would have been flagged as problematic by Azure OpenAI filters. The team has not observed any evidence that AutoGen (or GPT models as hosted by OpenAI or Azure) can produce novel code exploits or jailbreak prompts, since direct prompts to “be a hacker”, “write exploits”, or “produce a phishing email” are refused by existing filters.
## What are the limitations of AutoGen? How can users minimize the impact of AutoGen’s limitations when using the system?

AutoGen relies on existing LLMs, so experimenting with AutoGen retains the common limitations of large language models, including:

- Data Biases: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, the models may generate outputs that are biased or unfair.
- Lack of Contextual Understanding: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses.
- Lack of Transparency: Due to their complexity and size, large language models can act as "black boxes," making it difficult to comprehend the rationale behind specific outputs or decisions.
- Content Harms: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models and to take action to prevent them. It is recommended to leverage the content moderation services provided by different companies and institutions.
- Inaccurate or ungrounded content: Be cautious about relying entirely on a given language model for critical decisions or information with significant impact, as it is not obvious how to prevent these models from fabricating content without authoritative input sources.
- Potential for Misuse: Without suitable safeguards, there is a risk that these models could be maliciously used to generate disinformation or harmful content.

Additionally, AutoGen’s multi-agent framework may amplify or introduce additional risks, such as:

- Privacy and Data Protection: The framework allows for human participation in conversations between agents. It is important to ensure that user data and conversations are protected and that developers use appropriate measures to safeguard privacy.
- Accountability and Transparency: Because the framework involves multiple agents conversing and collaborating, it is important to establish clear accountability and transparency mechanisms. Users should be able to understand and trace the decision-making process of the agents involved in order to ensure accountability and address any potential issues or biases.
- Trust and reliance: The framework leverages human understanding and intelligence while providing automation through conversations between agents. It is important to consider the impact of this interaction on user experience, trust, and reliance on AI systems. Clear communication and user education about the capabilities and limitations of the system will be essential.
- Security & unintended consequences: The use of multi-agent conversations and automation in complex tasks may have unintended consequences. In particular, allowing LLM agents to make changes in external environments through code execution or function calls, such as installing packages, could pose significant risks. Developers should carefully consider the potential risks and ensure that appropriate safeguards are in place to prevent harm or negative outcomes, including keeping a human in the loop for decision making.
## What operational factors and settings allow for effective and responsible use of AutoGen?

- Code execution: AutoGen recommends using Docker containers so that code execution can happen in a safer manner. Users can use function calls instead of free-form code to execute only pre-defined functions, which helps increase reliability and safety. Users can customize the code execution environment to tailor it to their requirements.
- Human involvement: AutoGen prioritizes human involvement in multi-agent conversation. Overseers can step in to give feedback to agents and steer them in the correct direction. By default, users get a chance to confirm before code is executed.
- Agent modularity: Modularity allows agents to have different levels of information access. Additional agents can assume roles that help keep other agents in check. For example, one can easily add a dedicated agent to play the role of a safeguard.
- LLMs: Users can choose an LLM that is optimized for responsible use. The default LLM is GPT-4, which inherits the existing RAI mechanisms and filters from the LLM provider. Caching is enabled by default to increase reliability and control cost. We encourage developers to review [OpenAI’s Usage policies](https://openai.com/policies/usage-policies) and [Azure OpenAI’s Code of Conduct](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/code-of-conduct) when using GPT-4.
- Multi-agent setup: When using auto replies, users can limit the number of auto replies, set termination conditions, etc. in the settings to increase reliability; a sketch of these settings follows this list.
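Below is a minimal sketch of the reliability-oriented settings mentioned above (an auto-reply cap, a termination condition, human confirmation before acting, and Docker-based execution). The constructor arguments are standard agent options; the task message is illustrative.

```python
# Minimal sketch: reliability-oriented settings on a user proxy agent.
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})

user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="ALWAYS",             # ask the human before acting (confirmation loop)
    max_consecutive_auto_reply=5,          # cap the number of automatic replies
    is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),  # stop condition
    code_execution_config={"work_dir": "coding", "use_docker": True},      # safer execution
)

user_proxy.initiate_chat(assistant, message="Write a script that lists files larger than 1 MB.")
```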
autogen/CODE_OF_CONDUCT.md
# Microsoft Open Source Code of Conduct This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). Resources: - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) - [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) - Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
autogen/samples/apps/auto-anny/README.md
<div align="center"> <img src="images/icon.png" alt="Repo Icon" width="100" height="100"> </div>

# AutoAnny

AutoAnny is a Discord bot built using AutoGen to help with AutoGen's Discord server. In fact, Anny can help with any OSS GitHub project (set `ANNY_GH_REPO` below).
Features - **`/heyanny help`**: Lists commands. - **`/heyanny ghstatus`**: Summarizes GitHub activity. - **`/heyanny ghgrowth`**: Shows GitHub repo growth indicators. - **`/heyanny ghunattended`**: Lists unattended issues and PRs.
Installation 1. Clone the AutoGen repository and `cd samples/apps/auto-anny` 2. Install dependencies: `pip install -r requirements.txt` 3. Export Discord token and GitHub API token, ``` export OAI_CONFIG_LIST=your-autogen-config-list export DISCORD_TOKEN=your-bot-token export GH_TOKEN=your-gh-token export ANNY_GH_REPO=microsoft/autogen # you may choose a different repo name ``` To get a Discord token, you will need to set up your Discord bot using these [instructions](https://discordpy.readthedocs.io/en/stable/discord.html). 4. Start the bot: `python bot.py` Note: By default Anny will log data to `autoanny.log`.
Roadmap - Enable access control - Enable a richer set of commands - Enrich agents with tool use
Contributing Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
autogen/samples/apps/cap/TODO.md
- ~~Pretty print debug_logs~~ - ~~colors~~ - ~~messages to oai should be condensed~~ - ~~remove orchestrator in scenario 4 and have the two actors talk to each other~~ - ~~pass a complex multi-part message~~ - ~~protobuf for messages~~ - ~~make changes to autogen to enable scenario 3 to work with CAN~~ - ~~make groupchat work~~ - ~~actors instead of agents~~ - clean up for PR into autogen - ~~Create folder structure under Autogen examples~~ - ~~CAN -> CAP (Composable Actor Protocol)~~ - CAP actor lookup should use zmq - Add min C# actors & reorganize - Hybrid GroupChat with C# ProductManager - C++ Msg Layer - Rust Msg Layer - Node Msg Layer - Java Msg Layer - Investigate a standard logging framework that supports color in windows - structlog?
autogen/samples/apps/cap/README.md
# Composable Actor Platform (CAP) for AutoGen
I just want to run the remote AutoGen agents! *Python Instructions (Windows, Linux, MacOS):* 0) cd py 1) pip install -r autogencap/requirements.txt 2) python ./demo/App.py 3) Choose (5) and follow instructions to run standalone Agents 4) Choose other options for other demos *Demo Notes:* 1) Options involving AutoGen require OAI_CONFIG_LIST. AutoGen python requirements: 3.8 <= python <= 3.11 2) For option 2, type something in and see who receives the message. Quit to quit. 3) To view any option that displays a chart (such as option 4), you will need to disable Docker code execution. You can do this by setting the environment variable `AUTOGEN_USE_DOCKER` to `False`. *Demo Reference:* ``` Select the Composable Actor Platform (CAP) demo app to run: (enter anything else to quit) 1. Hello World 2. Complex Agent (e.g. Name or Quit) 3. AutoGen Pair 4. AutoGen GroupChat 5. AutoGen Agents in different processes 6. List Actors in CAP (Registry) Enter your choice (1-6): ```
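For demo note 3, the environment variable can also be set from Python before any agents execute code (a trivial sketch):

```python
# Disable Docker-based code execution for the chart demos (see demo note 3).
import os

os.environ["AUTOGEN_USE_DOCKER"] = "False"  # must be set before agents execute code
```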
## What is Composable Actor Platform (CAP)?

AutoGen is about Agents and Agent Orchestration. CAP extends AutoGen to allow Agents to communicate via a message bus. CAP, therefore, deals with the space between these components. CAP is a message-based actor platform that allows actors to be composed into arbitrary graphs. Actors can register themselves with CAP, find other agents, construct arbitrary graphs, send and receive messages independently, and many, many other things.

```python
# CAP Platform
network = LocalActorNetwork()
# Register an agent
network.register(GreeterAgent())
# Tell agents to connect to other agents
network.connect()
# Get a channel to the agent
greeter_link = network.lookup_agent("Greeter")
# Send a message to the agent
greeter_link.send_txt_msg("Hello World!")
# Cleanup
greeter_link.close()
network.disconnect()
```

### Check out other demos in the `py/demo` directory. We show the following:

1) Hello World shown above
2) Many CAP Actors interacting with each other
3) A pair of interacting AutoGen Agents wrapped in CAP Actors
4) CAP wrapped AutoGen Agents in a group chat
5) Two AutoGen Agents running in different processes and communicating through CAP
6) List Actors in CAP (Registry)

### Coming soon. Stay tuned!

1) AutoGen integration to list all registered agents
autogen/samples/apps/cap/py/README.md
# Composable Actor Platform (CAP) for AutoGen
## I just want to run the remote AutoGen agents!

*Python Instructions (Windows, Linux, MacOS):*

pip install autogencap

1) AutoGen requires OAI_CONFIG_LIST. AutoGen python requirements: 3.8 <= python <= 3.11
## What is Composable Actor Platform (CAP)?

AutoGen is about Agents and Agent Orchestration. CAP extends AutoGen to allow Agents to communicate via a message bus. CAP, therefore, deals with the space between these components. CAP is a message-based actor platform that allows actors to be composed into arbitrary graphs. Actors can register themselves with CAP, find other agents, construct arbitrary graphs, send and receive messages independently, and many, many other things.

```python
# CAP Library
from autogencap.ComponentEnsemble import ComponentEnsemble
from autogencap.Actor import Actor


# A simple Agent
class GreeterAgent(Actor):
    def __init__(self):
        super().__init__(
            agent_name="Greeter",
            description="This is the greeter agent, who knows how to greet people.")

    # Prints out the message it receives
    def on_txt_msg(self, msg):
        print(f"Greeter received: {msg}")
        return True


ensemble = ComponentEnsemble()
# Create an agent
agent = GreeterAgent()
# Register an agent
ensemble.register(agent)
# Start message processing; calls on_connect() on all Agents
ensemble.connect()
# Get a channel to the agent
greeter_link = ensemble.find_by_name("Greeter")
# Send a message to the agent
greeter_link.send_txt_msg("Hello World!")
# Cleanup
greeter_link.close()
ensemble.disconnect()
```

### Check out other demos in the `py/demo` directory. We show the following:

1) Hello World shown above
2) Many CAP Actors interacting with each other
3) A pair of interacting AutoGen Agents wrapped in CAP Actors
4) CAP wrapped AutoGen Agents in a group chat
5) Two AutoGen Agents running in different processes and communicating through CAP
6) List all registered agents in CAP
7) Run Agent in user supplied message loop
autogen/samples/apps/cap/c++/Readme.md
Coming soon...
autogen/samples/apps/cap/node/Readme.md
Coming soon...
autogen/samples/apps/cap/c#/Readme.md
Coming soon...
autogen/samples/apps/autogen-studio/README.md
# AutoGen Studio [![PyPI version](https://badge.fury.io/py/autogenstudio.svg)](https://badge.fury.io/py/autogenstudio) [![Downloads](https://static.pepy.tech/badge/autogenstudio/week)](https://pepy.tech/project/autogenstudio) ![ARA](./docs/ara_stockprices.png) AutoGen Studio is an AutoGen-powered AI app (user interface) to help you rapidly prototype AI agents, enhance them with skills, compose them into workflows and interact with them to accomplish tasks. It is built on top of the [AutoGen](https://microsoft.github.io/autogen) framework, which is a toolkit for building AI agents. Code for AutoGen Studio is on GitHub at [microsoft/autogen](https://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio) > **Note**: AutoGen Studio is meant to help you rapidly prototype multi-agent workflows and demonstrate an example of end user interfaces built with AutoGen. It is not meant to be a production-ready app. > [!WARNING] > AutoGen Studio is currently under active development and we are iterating quickly. Kindly consider that we may introduce breaking changes in the releases during the upcoming weeks, and also the `README` might be outdated. Please see the AutoGen Studio [docs](https://microsoft.github.io/autogen/docs/autogen-studio/getting-started) page for the most up-to-date information. **Updates** > April 17: AutoGen Studio database layer is now rewritten to use [SQLModel](https://sqlmodel.tiangolo.com/) (Pydantic + SQLAlchemy). This provides entity linking (skills, models, agents and workflows are linked via association tables) and supports multiple [database backend dialects](https://docs.sqlalchemy.org/en/20/dialects/) supported in SQLAlchemy (SQLite, PostgreSQL, MySQL, Oracle, Microsoft SQL Server). The backend database can be specified a `--database-uri` argument when running the application. For example, `autogenstudio ui --database-uri sqlite:///database.sqlite` for SQLite and `autogenstudio ui --database-uri postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL. > March 12: Default directory for AutoGen Studio is now /home/<user>/.autogenstudio. You can also specify this directory using the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database and other files in the specified directory e.g. `/path/to/folder/database.sqlite`. `.env` files in that directory will be used to set environment variables for the app. Project Structure: - _autogenstudio/_ code for the backend classes and web api (FastAPI) - _frontend/_ code for the webui, built with Gatsby and TailwindCSS ### Installation There are two ways to install AutoGen Studio - from PyPi or from source. We **recommend installing from PyPi** unless you plan to modify the source code. 1. **Install from PyPi** We recommend using a virtual environment (e.g., conda) to avoid conflicts with existing Python packages. With Python 3.10 or newer active in your virtual environment, use pip to install AutoGen Studio: ```bash pip install autogenstudio ``` 2. **Install from Source** > Note: This approach requires some familiarity with building interfaces in React. If you prefer to install from source, ensure you have Python 3.10+ and Node.js (version above 14.15.0) installed. Here's how you get started: - Clone the AutoGen Studio repository and install its Python dependencies: ```bash pip install -e . 
```

- Navigate to the `samples/apps/autogen-studio/frontend` directory, install dependencies, and build the UI:

  ```bash
  npm install -g gatsby-cli
  npm install --global yarn
  cd frontend
  yarn install
  yarn build
  ```

  For Windows users, you may need alternative commands to build the frontend:

  ```bash
  gatsby clean && rmdir /s /q ..\\autogenstudio\\web\\ui 2>nul & (set \"PREFIX_PATH_VALUE=\" || ver>nul) && gatsby build --prefix-paths && xcopy /E /I /Y public ..\\autogenstudio\\web\\ui
  ```

### Running the Application

Once installed, run the web UI by entering the following in your terminal:

```bash
autogenstudio ui --port 8081
```

This will start the application on the specified port. Open your web browser and go to `http://localhost:8081/` to begin using AutoGen Studio.

AutoGen Studio also takes several parameters to customize the application:

- `--host <host>` argument to specify the host address. By default, it is set to `localhost`.
- `--appdir <appdir>` argument to specify the directory where the app files (e.g., database and generated user files) are stored. By default, it is set to a `.autogenstudio` directory in the user's home directory.
- `--port <port>` argument to specify the port number. By default, it is set to `8080`.
- `--reload` argument to enable auto-reloading of the server when changes are made to the code. By default, it is set to `False`.
- `--database-uri` argument to specify the database URI. Example values include `sqlite:///database.sqlite` for SQLite and `postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL. If this is not specified, the database URI defaults to a `database.sqlite` file in the `--appdir` directory.

Now that you have AutoGen Studio installed and running, you are ready to explore its capabilities, including defining and modifying agent workflows, interacting with agents and sessions, and expanding agent skills.
Contribution Guide We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project: - Review the overall AutoGen project [contribution guide](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing) - Please review the AutoGen Studio [roadmap](https://github.com/microsoft/autogen/issues/737) to get a sense of the current priorities for the project. Help is appreciated especially with Studio issues tagged with `help-wanted` - Please initiate a discussion on the roadmap issue or a new issue to discuss your proposed contribution. - Please review the autogenstudio dev branch here [dev branch](https://github.com/microsoft/autogen/tree/autogenstudio) and use as a base for your contribution. This way, your contribution will be aligned with the latest changes in the AutoGen Studio project. - Submit a pull request with your contribution! - If you are modifying AutoGen Studio, it has its own devcontainer. See instructions in `.devcontainer/README.md` to use it - Please use the tag `studio` for any issues, questions, and PRs related to Studio
FAQ Please refer to the AutoGen Studio [FAQs](https://microsoft.github.io/autogen/docs/autogen-studio/faqs) page for more information.
## Acknowledgements

AutoGen Studio is based on the [AutoGen](https://microsoft.github.io/autogen) project. It was adapted from a research prototype built in October 2023 (original credits: Gagan Bansal, Adam Fourney, Victor Dibia, Piali Choudhury, Saleema Amershi, Ahmed Awadallah, Chi Wang).
autogen/samples/apps/autogen-studio/frontend/README.md
## 🚀 Running UI in Dev Mode

Run the UI in dev mode (make changes and see them reflected in the browser with hot reloading):

- npm install
- npm run start

This should start the server on port 8000.
## Design Elements

- **Gatsby**: The app is created in Gatsby. A guide on bootstrapping a Gatsby app can be found here - https://www.gatsbyjs.com/docs/quick-start/. This provides an overview of the project file structure, including the functionality of files like `gatsby-config.js`, `gatsby-node.js`, `gatsby-browser.js` and `gatsby-ssr.js`.
- **TailwindCSS**: The app uses TailwindCSS for styling. A guide on using TailwindCSS with Gatsby can be found here - https://tailwindcss.com/docs/guides/gatsby. This will explain the functionality in `tailwind.config.js` and `postcss.config.js`.
## Modifying the UI, Adding Pages

The core of the app can be found in the `src` folder. To add pages, add a new folder in `src/pages` with an `index.tsx` file; this will be the entry point for the page. For example, to add a route in the app like `/about`, add a folder `about` in `src/pages` and add an `index.tsx` file. You can follow the content style in `src/pages/index.tsx` to add content to the page. Core logic for each component should be written in the `src/components` folder and then imported into pages as needed.
## Connecting to the Backend

The frontend makes requests to the backend API and expects it at `/api` on localhost port 8081.
setting env variables for the UI - please look at `.env.default` - make a copy of this file and name it `.env.development` - set the values for the variables in this file - The main variable here is `GATSBY_API_URL` which should be set to `http://localhost:8081/api` for local development. This tells the UI where to make requests to the backend.
autogen/samples/apps/promptflow-autogen/README.md
# What is Promptflow Promptflow is a comprehensive suite of tools that simplifies the development, testing, evaluation, and deployment of LLM based AI applications. It also supports integration with Azure AI for cloud-based operations and is designed to streamline end-to-end development. Refer to [Promptflow docs](https://microsoft.github.io/promptflow/) for more information. Quick links: - Why use Promptflow - [Link](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow) - Quick start guide - [Link](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html)
## Getting Started

- Install required python packages

  ```bash
  cd samples/apps/promptflow-autogen
  pip install -r requirements.txt
  ```

- This example assumes a working Redis cache service is available. You can get started locally using this [guide](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) or use your favorite managed service. A quick connectivity check is sketched below.
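Before running the flows, a short connectivity check confirms the Redis prerequisite is met. This is a minimal sketch using the `redis` Python package; the URL is the local default and should be replaced with your managed-service connection string if you use one.

```python
# Minimal sketch: verify the Redis service required by this example is reachable.
import redis

client = redis.Redis.from_url("redis://localhost:6379/0")  # replace with your connection string
client.ping()  # raises redis.exceptions.ConnectionError if Redis is not reachable
print("Redis is reachable")
```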
Chat flow Chat flow is designed for conversational application development, building upon the capabilities of standard flow and providing enhanced support for chat inputs/outputs and chat history management. With chat flow, you can easily create a chatbot that handles chat input and output.
Create connection for LLM tool to use You can follow these steps to create a connection required by a LLM tool. Currently, there are two connection types supported by LLM tool: "AzureOpenAI" and "OpenAI". If you want to use "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details. ```bash # Override keys with --set to avoid yaml file changes # Create Azure open ai connection pf connection create --file azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection # Create the custom connection for Redis Cache pf connection create -f custom_conn.yaml --set secrets.redis_url=<your-redis-connection-url> --name redis_connection_url # Sample redis connection string rediss://:PASSWORD@redis_host_name.redis.cache.windows.net:6380/0 ``` Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `aoai_connection` for Azure Open AI and `redis_connection_url` for redis. ```bash # show registered connection pf connection show --name open_ai_connection ``` Please refer to connections [document](https://promptflow.azurewebsites.net/community/local/manage-connections.html) and [example](https://github.com/microsoft/promptflow/tree/main/examples/connections) for more details.
Develop a chat flow The most important elements that differentiate a chat flow from a standard flow are **Chat Input**, **Chat History**, and **Chat Output**. - **Chat Input**: Chat input refers to the messages or queries submitted by users to the chatbot. Effectively handling chat input is crucial for a successful conversation, as it involves understanding user intentions, extracting relevant information, and triggering appropriate responses. - **Chat History**: Chat history is the record of all interactions between the user and the chatbot, including both user inputs and AI-generated outputs. Maintaining chat history is essential for keeping track of the conversation context and ensuring the AI can generate contextually relevant responses. Chat History is a special type of chat flow input, that stores chat messages in a structured format. - NOTE: Currently the sample flows do not send chat history messages to agent workflow. - **Chat Output**: Chat output refers to the AI-generated messages that are sent to the user in response to their inputs. Generating contextually appropriate and engaging chat outputs is vital for a positive user experience. A chat flow can have multiple inputs, but Chat History and Chat Input are required inputs in chat flow.
## Interact with chat flow

Promptflow supports interacting with a chat flow via VS Code or via the Promptflow CLI, which provides a way to start an interactive chat session. Customers can use the command below to start an interactive chat session:

```bash
pf flow test --flow <flow_folder> --interactive
```
## Autogen State Flow

[Autogen State Flow](./autogen_stateflow.py) contains the StateFlow example shared at [StateFlow](https://microsoft.github.io/autogen/blog/2024/02/29/StateFlow/), implemented with Promptflow. All the interim messages are sent to a Redis channel. You can use these to stream to a frontend or to take further actions. The output of Promptflow is the `summary` message from the group chat.
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
## Agent Nested Chat

[Autogen Nested Chat](./agentchat_nestedchat.py) contains Scenario 1 of the nested chat example shared at [Nested Chats](https://microsoft.github.io/autogen/docs/notebooks/agentchat_nestedchat), adapted to Promptflow. All the interim messages are sent to the Redis channel. You can use these to stream to the frontend or to take further actions. The output of the Promptflow run is the `summary` message from the group chat.
GitHub
autogen
autogen/samples/apps/promptflow-autogen/README.md
autogen
## Redis for Data Cache and Interim Messages

AutoGen supports Redis for [data caching](https://microsoft.github.io/autogen/docs/reference/cache/redis_cache/), and since Redis also supports a pub/sub model, this Promptflow example is configured so that all agent callbacks send messages to a Redis channel. This is an optional feature, but it is essential for long-running workflows and provides access to interim messages for your frontend.

NOTE: Currently Promptflow only supports [SSE](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) for streaming data and does not support websockets.

NOTE: In a multi-user chatbot environment, please make the necessary changes to send messages to the corresponding channel.
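As a rough sketch of how the interim messages could be consumed, the snippet below subscribes to a Redis channel with the `redis` Python client. The channel name `agent_messages` is an assumption for illustration only; use whatever channel the flow's callbacks publish to, and adapt the handling (e.g., forwarding over SSE) to your frontend.

```python
import json

import redis

# Connect with the same connection string used for the Redis connection above.
r = redis.Redis.from_url("rediss://:PASSWORD@redis_host_name.redis.cache.windows.net:6380/0")

pubsub = r.pubsub()
pubsub.subscribe("agent_messages")  # hypothetical channel name

for message in pubsub.listen():
    if message["type"] != "message":
        continue  # skip subscribe/unsubscribe notifications
    payload = json.loads(message["data"])  # assumes the callbacks publish JSON payloads
    print(payload)  # e.g., forward to the frontend over SSE
```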
GitHub
autogen
autogen/samples/apps/websockets/README.md
autogen
# Using websockets with FastAPI and AutoGen
GitHub
autogen
autogen/samples/apps/websockets/README.md
autogen
## Running the example

1. Navigate to the directory containing the example:
   ```
   cd samples/apps/websockets
   ```
2. Install the necessary dependencies:
   ```
   ./setup.py
   ```
3. Run the application:
   ```
   uvicorn application:app --reload
   ```

You should now be able to access the application in your web browser at `http://localhost:8000`.
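If you want to exercise the websocket endpoint from a script rather than the browser, a minimal client sketch is shown below. It assumes the app exposes a websocket route at `/ws`; check `application.py` for the actual route and message format.

```python
import asyncio

import websockets  # pip install websockets


async def chat() -> None:
    async with websockets.connect("ws://localhost:8000/ws") as ws:  # hypothetical route
        await ws.send("Count from 1 to 10.")
        # Print streamed agent messages until the server closes the connection.
        try:
            while True:
                print(await ws.recv())
        except websockets.ConnectionClosed:
            pass


asyncio.run(chat())
```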
GitHub
autogen
autogen/samples/tools/finetuning/README.md
autogen
# Tools for fine-tuning the local models that power agents This directory aims to contain tools for fine-tuning the local models that power agents.
GitHub
autogen
autogen/samples/tools/finetuning/README.md
autogen
## Fine-tune a custom model client

AutoGen supports the use of custom models to power agents ([see the blog post here](https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models)). This directory contains a tool to provide feedback to that model, which can be used to fine-tune the model.

The creator of the custom model client will have to decide what kind of data is going to be fed back and how it will be used to fine-tune the model. This tool is designed to be flexible and allow for a wide variety of feedback mechanisms.

The custom model client must follow the protocol defined in `update_model.py`, `UpdateableModelClient`, which is a subclass of `ModelClient` and adds the following method:

```python
def update_model(
    self, preference_data: List[Dict[str, Any]], inference_messages: List[Dict[str, Any]], **kwargs: Any
) -> Dict[str, Any]:
    """Optional method to learn from the preference data, if the model supports learning. Can be omitted.

    Learn from the preference data.

    Args:
        preference_data: The preference data.
        inference_messages: The messages that were used during inference between the agent that is being updated and another agent.
        **kwargs: other arguments.

    Returns:
        Dict of learning stats.
    """
```

The function provided in the file `update_model.py` is called by passing these arguments:

- the agent whose model is to be updated
- the preference data
- the agent whose conversation is being used to provide the inference messages

The function will find the conversation thread that occurred between the "update agent" and the "other agent", and call the `update_model` method of the model client. It will return a dictionary containing the update stats, inference messages, and preference data:

```python
{
    "update_stats": <the dictionary returned by the custom model client implementation>,
    "inference_messages": <messages used for inference>,
    "preference_data": <the preference data passed in when update_model was called>
}
```

**NOTES**:

`inference_messages` will contain messages that were passed into the custom model client when `create` was called and a response was needed from the model. It is up to the author of the custom model client to decide which parts of the conversation are needed and how to use this data to fine-tune the model.

If a conversation has been long-running before `update_model` is called, then the `inference_messages` will contain a conversation thread that was used for multiple inference steps. It is again up to the author of the custom model client to decide which parts of the conversation correspond to the preference data and how to use this data to fine-tune the model.

An example of how to use this tool is shown below:

```python
from autogen import AssistantAgent, UserProxyAgent

from finetuning.update_model import update_model

assistant = AssistantAgent(
    "assistant",
    system_message="You are a helpful assistant.",
    human_input_mode="NEVER",
    llm_config={
        "config_list": [<the config list containing the custom model>],
    },
)
assistant.register_model_client(model_client_cls=<TheCustomModelClientClass>)

user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config=False,
    llm_config=False,
)

res = user_proxy.initiate_chat(assistant, message="the message")
response_content = res.summary

# Evaluate the summary here and provide feedback, e.g., as if we were going to perform DPO on the response.

# preference_data will be passed on as-is to the custom model client's update_model implementation,
# so it should be in the format that the custom model client expects; the format is completely up to
# the author of the custom model client.
preference_data = [("this is what the response should have been like", response_content)]

update_model_stats = update_model(assistant, preference_data, user_proxy)
```
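For illustration, here is a hypothetical sketch of how a custom model client might implement `update_model`, simply buffering the feedback for a later fine-tuning run. The class and attribute names are made up for this example and are not part of AutoGen.

```python
from typing import Any, Dict, List


class MyUpdateableModelClient:  # would also implement the rest of the ModelClient protocol
    def __init__(self) -> None:
        self._training_buffer: List[Dict[str, Any]] = []

    def update_model(
        self, preference_data: List[Dict[str, Any]], inference_messages: List[Dict[str, Any]], **kwargs: Any
    ) -> Dict[str, Any]:
        # Store the preference feedback together with the prompt context; a real
        # implementation might instead launch a DPO or supervised fine-tuning job here.
        self._training_buffer.append(
            {"preference_data": preference_data, "inference_messages": inference_messages}
        )
        return {"buffered_examples": len(self._training_buffer)}
```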
GitHub
autogen
autogen/samples/tools/webarena/README.md
autogen
# WebArena Benchmark This directory helps run AutoGen agents on the [WebArena](https://arxiv.org/pdf/2307.13854.pdf) benchmark.
GitHub
autogen
autogen/samples/tools/webarena/README.md
autogen
## Installing WebArena

WebArena can be installed by following the instructions from [WebArena's GitHub repository](https://github.com/web-arena-x/webarena).

If you are using WebArena with AutoGen, there is a clash between the OpenAI package versions, and some code changes are needed in WebArena to make it compatible with AutoGen's OpenAI version:

- WebArena's openai version is `openai==0.27.0`
- AutoGen's openai version is `openai>=1.3`

Prior to installation, in the WebArena codebase, any file containing `openai.error` needs to be replaced with `openai`. A helper sketch for this replacement is shown below.
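A simple way to apply that substitution across the WebArena checkout is sketched here; the `webarena` path is an assumption, and you should review the changes before committing, since this is a blunt textual replacement.

```python
from pathlib import Path

# Replace every occurrence of "openai.error" with "openai" in the WebArena sources.
for path in Path("webarena").rglob("*.py"):
    text = path.read_text()
    if "openai.error" in text:
        path.write_text(text.replace("openai.error", "openai"))
        print(f"patched {path}")
```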
GitHub
autogen
autogen/samples/tools/webarena/README.md
autogen
## Running with AutoGen agents

You can use the `run.py` file in the `webarena` directory to run WebArena with AutoGen. The OpenAI (or Azure OpenAI, or other model) configuration can be set up via `OAI_CONFIG_LIST`. The config list will be filtered by whatever model is passed in the `--model` argument.

Example of running `run.py`:

```
mkdir myresultdir
python run.py --instruction_path agent/prompts/jsons/p_cot_id_actree_2s.json --test_start_idx 27 --test_end_idx 28 --model gpt-4 --result_dir myresultdir
```

The original `run.py` file has been modified to use AutoGen agents, which are defined in the `webarena_agents.py` file.
GitHub
autogen
autogen/samples/tools/webarena/README.md
autogen
References **WebArena: A Realistic Web Environment for Building Autonomous Agents**<br/> Zhou, Shuyan and Xu, Frank F and Zhu, Hao and Zhou, Xuhui and Lo, Robert and Sridhar, Abishek and Cheng, Xianyi and Bisk, Yonatan and Fried, Daniel and Alon, Uri and others<br/> [https://arxiv.org/pdf/2307.13854.pdf](https://arxiv.org/pdf/2307.13854.pdf)
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
# Contributing to AutoGenBench As part of the broader AutoGen project, AutoGenBench welcomes community contributions. Contributions are subject to AutoGen's [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute), as well as a few additional AutoGenBench-specific requirements outlined here. You may also wish to develop your own private benchmark scenarios and the guidance in this document will help with such efforts as well. Below you will find the general requirements, followed by a detailed technical description.
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
General Contribution Requirements We ask that all contributions to AutoGenBench adhere to the following: - Follow AutoGen's broader [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute) - All AutoGenBench benchmarks should live in a subfolder of `/samples/tools/autogenbench/scenarios` alongside `HumanEval`, `GAIA`, etc. - Benchmark scenarios should include a detailed README.md, in the root of their folder, describing the benchmark and providing citations where warranted. - Benchmark data (tasks, ground truth, etc.) should be downloaded from their original sources rather than hosted in the AutoGen repository (unless the benchmark is original, and the repository *is* the original source) - You can use the `Scripts/init_tasks.py` file to automate this download. - Basic scoring should be compatible with the `autogenbench tabulate` command (e.g., by outputting logs compatible with the default tabulation mechanism, or by providing a `Scripts/custom_tabulate.py` file) - If you wish your benchmark to be compatible with the `autogenbench clone` command, include a `MANIFEST.json` file in the root of your folder. These requirements are further detailed below, but if you simply copy the `HumanEval` folder, you will already be off to a great start.
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
## Implementing and Running Benchmark Tasks

At the core of any benchmark is a set of tasks. To implement tasks that are runnable by AutoGenBench, you must adhere to AutoGenBench's templating and scenario expansion algorithms, as outlined below.

### Task Definitions

All tasks are stored in JSONL files (in subdirectories under `./Tasks`). Each line of a tasks file is a JSON object with the following schema:

```
{
    "id": string,
    "template": dirname,
    "substitutions": {
        "filename1": {
            "find_string1_1": replace_string1_1,
            "find_string1_2": replace_string1_2,
            ...
            "find_string1_M": replace_string1_M
        },
        "filename2": {
            "find_string2_1": replace_string2_1,
            "find_string2_2": replace_string2_2,
            ...
            "find_string2_N": replace_string2_N
        }
    }
}
```

For example:

```
{
    "id": "two_agent_stocks_gpt4",
    "template": "default_two_agents",
    "substitutions": {
        "scenario.py": {
            "__MODEL__": "gpt-4"
        },
        "prompt.txt": {
            "__PROMPT__": "Plot and save to disk a chart of NVDA and TESLA stock price YTD."
        }
    }
}
```

In this example, the string `__MODEL__` will be replaced in the file `scenario.py`, while the string `__PROMPT__` will be replaced in the `prompt.txt` file.

The `template` field can also take on a list value, but this usage is considered advanced and is not described here. See the `autogenbench/run_cmd.py` code, or the `GAIA` benchmark tasks files, for additional information about this option.
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
## Task Instance Expansion Algorithm

Once the tasks have been defined, as per above, they must be "instantiated" before they can be run. This instantiation happens automatically when the user issues the `autogenbench run` command and involves creating a local folder to share with Docker. Each instance and repetition gets its own folder along the path: `./results/[scenario]/[task_id]/[instance_id]`. For the sake of brevity we will refer to this folder as the `DEST_FOLDER`.

The algorithm for populating the `DEST_FOLDER` is as follows:

1. Pre-populate DEST_FOLDER with all the basic starter files for running a scenario (found in `autogenbench/template`).
2. Recursively copy the template folder specified in the JSONL line to DEST_FOLDER (if the JSON `template` attribute points to a folder). If the JSON's `template` attribute instead points to a file, copy the file, but rename it to `scenario.py`.
3. Apply any string replacements, as outlined in the prior section (see the sketch after this list).
4. Write a run.sh file to DEST_FOLDER that will be executed by Docker when it is loaded. The `run.sh` is described below.
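As a simplified approximation of step 3 (the real logic lives in `autogenbench/run_cmd.py`), applying a task's substitutions to the files already copied into `DEST_FOLDER` might look like this:

```python
import json
import os


def apply_substitutions(task_line: str, dest_folder: str) -> None:
    """Apply the find/replace pairs from one JSONL task line to the copied template files."""
    task = json.loads(task_line)
    for filename, replacements in task.get("substitutions", {}).items():
        target = os.path.join(dest_folder, filename)
        with open(target) as f:
            content = f.read()
        for find_string, replace_string in replacements.items():
            content = content.replace(find_string, replace_string)
        with open(target, "w") as f:
            f.write(content)
```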
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
## Scenario Execution Algorithm

Once the task has been instantiated, it is run (via run.sh). This script will execute the following steps:

1. If a file named `global_init.sh` is present, run it.
2. If a file named `scenario_init.sh` is present, run it.
3. Install the requirements.txt file (if running in Docker)
4. Run the task via `python scenario.py`
5. If the scenario.py exited cleanly (exit code 0), then print "SCENARIO.PY COMPLETE !#!#"
6. Clean up (delete cache, etc.)
7. If a file named `scenario_finalize.sh` is present, run it.
8. If a file named `global_finalize.sh` is present, run it.
9. echo "RUN.SH COMPLETE !#!#", signaling that all steps completed.

Notably, this means that scenarios can add custom init and teardown logic by including `scenario_init.sh` and `scenario_finalize.sh` files.

At the time of this writing, the run.sh file is as follows:

```sh
export AUTOGEN_TESTBED_SETTING="Docker"
umask 000

# Run the global init script if it exists
if [ -f global_init.sh ] ; then
    . ./global_init.sh
fi

# Run the scenario init script if it exists
if [ -f scenario_init.sh ] ; then
    . ./scenario_init.sh
fi

# Run the scenario
pip install -r requirements.txt
python scenario.py
EXIT_CODE=$?
if [ $EXIT_CODE -ne 0 ]; then
    echo SCENARIO.PY EXITED WITH CODE: $EXIT_CODE !#!#
else
    echo SCENARIO.PY COMPLETE !#!#
fi

# Clean up
if [ -d .cache ] ; then
    rm -Rf .cache
fi

# Run the scenario finalize script if it exists
if [ -f scenario_finalize.sh ] ; then
    . ./scenario_finalize.sh
fi

# Run the global finalize script if it exists
if [ -f global_finalize.sh ] ; then
    . ./global_finalize.sh
fi

echo RUN.SH COMPLETE !#!#
```

Be warned that this listing is provided here for illustration purposes, and may vary over time. The source of truth is the `run.sh` files found in the ``./results/[taskset]/[task_id]/[instance_id]`` folders.
GitHub
autogen
autogen/samples/tools/autogenbench/CONTRIBUTING.md
autogen
## Integrating with the `tabulate` and `clone` commands

The above details are sufficient for defining and running tasks, but if you wish to support the `autogenbench tabulate` and `autogenbench clone` commands, a few additional steps are required.

### Tabulations

If you wish to leverage the default tabulation logic, it is as simple as arranging your `scenario.py` file to output the string "ALL TESTS PASSED !#!#" to the console in the event that a task was solved correctly.

If you wish to implement your own tabulation logic, simply create the file `Scripts/custom_tabulate.py` and include a `main(args)` method. Here, the `args` parameter will be provided by AutoGenBench, and is a drop-in replacement for `sys.argv`. In particular, `args[0]` will be the invocation command (similar to the executable or script name in `sys.argv`), and the remaining values (`args[1:]`) are the command line parameters.

Should you provide a custom tabulation script, please implement `--help` and `-h` options for documenting your interface.

The `scenarios/GAIA/Scripts/custom_tabulate.py` file is a great example of custom tabulation. It also shows how you can reuse some components of the default tabulator to speed up development. A minimal, illustrative skeleton is also sketched at the end of this section.

### Cloning

If you wish your benchmark to be available via the `autogenbench clone` command, you will need to take three additional steps:

#### Manifest

First, provide a `MANIFEST.json` file in the root of your benchmark. An example is provided below, from which you can see the schema:

```json
{
  "files": {
    "Templates/TwoAgents/prompt.txt": "Templates/TwoAgents/prompt.txt",
    "Templates/TwoAgents/coding/my_tests.py": "Templates/TwoAgents/coding/my_tests.py",
    "Templates/TwoAgents/scenario.py": "Templates/TwoAgents/scenario.py",
    "README.md": "README.md",
    "Scripts/init_tasks.py": "Scripts/init_tasks.py",
    "Scripts/custom_tabulate.py": "Scripts/custom_tabulate.py"
  }
}
```

The keys of the `files` dictionary are local paths, relative to your benchmark's root directory. The values are relative paths in the AutoGen GitHub repository (relative to the folder where the MANIFEST.json file is located). In most cases, the keys and values will be identical.

#### SCENARIOS dictionary

Second, you must add an entry to the `scenarios` dictionary in `autogen/samples/tools/autogenbench/scenarios/MANIFEST.json`.

#### Scripts/init_tasks.py

Finally, you should provide a `Scripts/init_tasks.py` file in your benchmark folder, and include a `main()` method therein. This method will be loaded and called automatically by `autogenbench clone` after all manifest files have been downloaded.

This `init_tasks.py` script is a great place to download benchmarks from their original sources and convert them to the JSONL format required by AutoGenBench:

- See `HumanEval/Scripts/init_tasks.py` for an example of how to expand a benchmark from an original GitHub repository.
- See `GAIA/Scripts/init_tasks.py` for an example of how to expand a benchmark from `Hugging Face Hub`.
- See `MATH/Scripts/init_tasks.py` for an example of how to expand a benchmark from an author-hosted website.
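Returning to tabulation, a bare-bones, illustrative skeleton of a `Scripts/custom_tabulate.py` is shown below. AutoGenBench calls `main(args)` with a `sys.argv`-style list; the argument parsing and the scoring rule here are assumptions you would replace with your benchmark's own logic.

```python
import argparse
import os
import sys
from typing import List


def score_instance(instance_dir: str) -> bool:
    """Illustrative scoring rule: an instance is solved if its console log contains the marker string."""
    log_file = os.path.join(instance_dir, "console_log.txt")
    if not os.path.isfile(log_file):
        return False
    with open(log_file) as f:
        return "ALL TESTS PASSED !#!#" in f.read()


def main(args: List[str]) -> None:
    parser = argparse.ArgumentParser(prog=args[0], description="Tabulate results for this benchmark.")
    parser.add_argument("runlogs", help="Path to the results folder, e.g. results/my_tasks")
    parsed = parser.parse_args(args[1:])  # argparse also provides -h/--help

    for task_id in sorted(os.listdir(parsed.runlogs)):
        task_dir = os.path.join(parsed.runlogs, task_id)
        instance_results = [score_instance(os.path.join(task_dir, i)) for i in sorted(os.listdir(task_dir))]
        print(task_id, instance_results)


if __name__ == "__main__":
    main(sys.argv)
```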
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
# AutoGenBench

AutoGenBench is a tool for repeatedly running a set of pre-defined AutoGen tasks in a setting with tightly-controlled initial conditions. With each run, AutoGenBench will start from a blank slate. The agents being evaluated will need to work out what code needs to be written, and what libraries or dependencies to install, to solve tasks. The results of each run are logged, and can be ingested by analysis or metrics scripts (such as `autogenbench tabulate`). By default, all runs are conducted in freshly-initialized docker containers, providing the recommended level of consistency and safety.

AutoGenBench works with all AutoGen 0.1.* and 0.2.* versions.
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
Technical Specifications If you are already an AutoGenBench pro, and want the full technical specifications, please review the [contributor's guide](CONTRIBUTING.md).
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
## Docker Requirement

AutoGenBench also requires Docker (Desktop or Engine). **It will not run in GitHub codespaces**, unless you opt for native execution (which is strongly discouraged). To install Docker Desktop see [https://www.docker.com/products/docker-desktop/](https://www.docker.com/products/docker-desktop/).
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
Installation and Setup **To get the most out of AutoGenBench, the `autogenbench` package should be installed**. At present, the easiest way to do this is to install it via `pip`: ``` pip install autogenbench ``` If you would prefer working from source code (e.g., for development, or to utilize an alternate branch), simply clone the [AutoGen](https://github.com/microsoft/autogen) repository, then install `autogenbench` via: ``` pip install -e autogen/samples/tools/autogenbench ``` After installation, you must configure your API keys. As with other AutoGen applications, AutoGenBench will look for the OpenAI keys in the OAI_CONFIG_LIST file in the current working directory, or the OAI_CONFIG_LIST environment variable. This behavior can be overridden using a command-line parameter described later. If you will be running multiple benchmarks, it is often most convenient to leverage the environment variable option. You can load your keys into the environment variable by executing: ``` export OAI_CONFIG_LIST=$(cat ./OAI_CONFIG_LIST) ``` If an OAI_CONFIG_LIST is *not* provided (by means of file or environment variable), AutoGenBench will use the OPENAI_API_KEY environment variable instead. For some benchmark scenarios, additional keys may be required (e.g., keys for the Bing Search API). These can be added to an `ENV.json` file in the current working folder. An example `ENV.json` file is provided below: ``` { "BING_API_KEY": "xxxyyyzzz" } ```
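For reference, the `OAI_CONFIG_LIST` file is a JSON list of model configurations. The sketch below writes a minimal example with placeholder values; Azure OpenAI entries typically also include fields such as `base_url`, `api_type`, and `api_version`.

```python
import json

# Placeholder values -- substitute your own models and keys.
config_list = [
    {"model": "gpt-4", "api_key": "<your OpenAI API key>"},
    {"model": "gpt-3.5-turbo", "api_key": "<your OpenAI API key>"},
]

with open("OAI_CONFIG_LIST", "w") as f:
    json.dump(config_list, f, indent=2)
```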
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
## A Typical Session

Once AutoGenBench and the necessary keys are installed, a typical session will look as follows:

```
autogenbench clone HumanEval
cd HumanEval
autogenbench run Tasks/r_human_eval_two_agents.jsonl
autogenbench tabulate results/r_human_eval_two_agents
```

Where:

- `autogenbench clone HumanEval` downloads and expands the HumanEval benchmark scenario.
- `autogenbench run Tasks/r_human_eval_two_agents.jsonl` runs the tasks defined in `Tasks/r_human_eval_two_agents.jsonl`
- `autogenbench tabulate results/r_human_eval_two_agents` tabulates the results of the run

Each of these commands has extensive in-line help via:

- `autogenbench --help`
- `autogenbench clone --help`
- `autogenbench run --help`
- `autogenbench tabulate --help`

**NOTE:** If you are running `autogenbench` from within the repository, you don’t need to run `autogenbench clone`. Instead, navigate to the appropriate scenario folder (e.g., `scenarios/HumanEval`) and run the `Scripts/init_tasks.py` file.

More details of each command are provided in the sections that follow.
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
Cloning Benchmarks To clone an existing benchmark, simply run: ``` autogenbench clone [BENCHMARK] ``` For example, ``` autogenbench clone HumanEval ``` To see which existing benchmarks are available to clone, run: ``` autogenbench clone --list ```
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
## Running AutoGenBench

To run a benchmark (which executes the tasks, but does not compute metrics), simply execute:

```
cd [BENCHMARK]
autogenbench run Tasks
```

For example,

```
cd HumanEval
autogenbench run Tasks
```

The default is to run each task once. To run each scenario 10 times, use:

```
autogenbench run --repeat 10 Tasks
```

The `autogenbench` command-line tool allows a number of command-line arguments to control various parameters of execution. Type ``autogenbench run -h`` to explore these options:

```
'autogenbench run' will run the specified autogen scenarios for a given number of repetitions and record all logs and trace information. When running in a Docker environment (default), each run will begin from a common, tightly controlled, environment. The resultant logs can then be further processed by other scripts to produce metrics.

positional arguments:
  scenario              The JSONL scenario file to run. If a directory is specified, then all JSONL scenarios in the
                        directory are run. (default: ./scenarios)

options:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        The environment variable name or path to the OAI_CONFIG_LIST (default: OAI_CONFIG_LIST).
  -r REPEAT, --repeat REPEAT
                        The number of repetitions to run for each scenario (default: 1).
  -s SUBSAMPLE, --subsample SUBSAMPLE
                        Run on a subsample of the tasks in the JSONL file(s). If a decimal value is specified, then
                        run on the given proportion of tasks in each file. For example "0.7" would run on 70% of
                        tasks, and "1.0" would run on 100% of tasks. If an integer value is specified, then randomly
                        select *that* number of tasks from each specified JSONL file. For example "7" would run 7
                        tasks, while "1" would run only 1 task from each specified JSONL file. (default: 1.0; which
                        is 100%)
  -m MODEL, --model MODEL
                        Filters the config_list to include only models matching the provided model name (default:
                        None, which is all models).
  --requirements REQUIREMENTS
                        The requirements file to pip install before running the scenario.
  -d DOCKER_IMAGE, --docker-image DOCKER_IMAGE
                        The Docker image to use when running scenarios. Can not be used together with --native.
                        (default: 'autogenbench:default', which will be created if not present)
  --native              Run the scenarios natively rather than in docker. NOTE: This is not advisable, and should be
                        done with great caution.
```
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
## Results

By default, AutoGenBench stores results in a folder hierarchy with the following template:

``./results/[scenario]/[task_id]/[instance_id]``

For example, consider the following folders:

``./results/default_two_agents/two_agent_stocks/0``
``./results/default_two_agents/two_agent_stocks/1``
...
``./results/default_two_agents/two_agent_stocks/9``

These folders hold the results for the ``two_agent_stocks`` task of the ``default_two_agents`` tasks file. The ``0`` folder contains the results of the first instance / run. The ``1`` folder contains the results of the second run, and so on. You can think of the _task_id_ as mapping to a prompt, or a unique set of parameters, while the _instance_id_ defines a specific attempt or run.

Within each folder, you will find the following files:

- *timestamp.txt*: records the date and time of the run, along with the version of the pyautogen library installed
- *console_log.txt*: all console output produced by Docker when running AutoGen. Read this like you would a regular console.
- *[agent]_messages.json*: for each Agent, a log of their messages dictionaries
- *./coding*: A directory containing all code written by AutoGen, and all artifacts produced by that code.
GitHub
autogen
autogen/samples/tools/autogenbench/README.md
autogen
Contributing or Defining New Tasks or Benchmarks If you would like to develop -- or even contribute -- your own tasks or benchmarks, please review the [contributor's guide](CONTRIBUTING.md) for complete technical details.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/MATH/README.md
autogen
# MATH Benchmark This scenario implements the [MATH](https://arxiv.org/abs/2103.03874) benchmark.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/MATH/README.md
autogen
## Running the tasks

```
autogenbench run Tasks/math_two_agents.jsonl
autogenbench tabulate Results/math_two_agents
```

By default, only a small subset (17 of 5000) of the MATH problems are exposed. Edit `Scripts/init_tasks.py` to expose more tasks.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/MATH/README.md
autogen
## Note on automated evaluation

In this scenario, we adopted an automated evaluation pipeline (from the [AutoGen](https://arxiv.org/abs/2308.08155) evaluation) that uses an LLM to compare the results. Thus, the metric above is only an estimation of the agent's performance on math problems. We also find a similar practice of using an LLM as a judge for the MATH dataset in the [Cumulative Reasoning](https://arxiv.org/abs/2308.04371) paper ([code](https://github.com/iiis-ai/cumulative-reasoning/blob/main/MATH/math-cr-4shot.py)).

The static checking from the MATH dataset requires an exact match (e.g., comparing 2.0 and 2 results in False). We haven't found an established way to accurately compare the answers, so human involvement is still needed to confirm the results.

In AutoGen, the conversation ends at "TERMINATE" by default. To enable an automated way of answer extraction and evaluation, we prompt an LLM with 1. the given problem, 2. the ground truth answer, and 3. the last response from the solver, to extract the answer and compare it with the ground truth answer.

We evaluated the 17 problems 3 times each and went through these problems manually to check the answers. Compared with the automated result evaluation (the model is gpt-4-0613), we find that in 2/3 trials the automated evaluation marked 1 correct answer as wrong (a false negative). This means 49/51 problems were evaluated correctly. We also went through 200 randomly sampled problems from the whole dataset to check the results: there was 1 false negative and 2 false positives. We note that false positives are also possible due to LLM hallucination and the variety of problems.
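The answer-extraction step described above can be sketched roughly as follows, using AutoGen's `OpenAIWrapper`. The prompt wording here is illustrative and not the exact prompt used in the scenario, and `config_list` is assumed to be defined elsewhere.

```python
from autogen import OpenAIWrapper


def judge_answer(problem: str, ground_truth: str, solver_reply: str, config_list) -> str:
    """Ask an LLM to extract the solver's final answer and compare it with the ground truth."""
    client = OpenAIWrapper(config_list=config_list)
    prompt = (
        "You are grading a math solution.\n"
        f"Problem: {problem}\n"
        f"Ground-truth answer: {ground_truth}\n"
        f"Solver's last message: {solver_reply}\n"
        "Extract the solver's final answer and reply with exactly 'correct' or 'incorrect'."
    )
    response = client.create(messages=[{"role": "user", "content": prompt}])
    return client.extract_text_or_completion_object(response)[0]
```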
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/MATH/README.md
autogen
References **Measuring Mathematical Problem Solving With the MATH Dataset**<br/> Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt<br/> [https://arxiv.org/abs/2103.03874](https://arxiv.org/abs/2103.03874) **AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation**<br/> Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang and Chi Wang<br/> [https://arxiv.org/abs/2308.08155](https://arxiv.org/abs/2308.08155) **Cumulative Reasoning with Large Language Models**<br/> Yifan Zhang, Jingqin Yang, Yang Yuan, Andrew Chi-Chih Yao<br/> [https://arxiv.org/abs/2308.04371](https://arxiv.org/abs/2308.04371)
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/Examples/README.md
autogen
# Example Tasks Various AutoGen example tasks. Unlike other benchmark tasks, these tasks have no automated evaluation.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/Examples/README.md
autogen
## Running the tasks

```
autogenbench run Tasks/default_two_agents
```

Some tasks require a Bing API key. Edit the ENV.json file to provide a valid BING_API_KEY, or simply allow those tasks to fail (the key is only required by one task).
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/AutoGPT/README.md
autogen
# AutoGPT Benchmark This scenario implements an older subset of the [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/tree/master/agbenchmark#readme) benchmark. Tasks were selected in November 2023, and may have since been deprecated. They are nonetheless useful for comparison and development.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/AutoGPT/README.md
autogen
Running the tasks ``` autogenbench run Tasks/autogpt__two_agents.jsonl autogenbench tabulate Results/autogpt__two_agents ```
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/GAIA/README.md
autogen
# GAIA Benchmark This scenario implements the [GAIA](https://arxiv.org/abs/2311.12983) agent benchmark.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/GAIA/README.md
autogen
Running the TwoAgents tasks Level 1 tasks: ```sh autogenbench run Tasks/gaia_test_level_1__two_agents.jsonl autogenbench tabulate Results/gaia_test_level_1__two_agents ``` Level 2 and 3 tasks are executed similarly.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/GAIA/README.md
autogen
## Running the SocietyOfMind tasks

Running the SocietyOfMind tasks is similar to running the TwoAgents tasks, but requires an `ENV.json` file with a working Bing API key. This file should be located in the current working directory from which you are running autogenbench, and should have at least the following contents:

```json
{
    "BING_API_KEY": "Your_API_key"
}
```

Once created, simply run:

```sh
autogenbench run Tasks/gaia_test_level_1__soc.jsonl
autogenbench tabulate Results/gaia_test_level_1__soc
```

And similarly for levels 2 and 3.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/GAIA/README.md
autogen
References **GAIA: a benchmark for General AI Assistants**<br/> Grégoire Mialon, Clémentine Fourrier, Craig Swift, Thomas Wolf, Yann LeCun, Thomas Scialom<br/> [https://arxiv.org/abs/2311.12983](https://arxiv.org/abs/2311.12983)
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md
autogen
# HumanEval Benchmark This scenario implements a modified version of the [HumanEval](https://arxiv.org/abs/2107.03374) benchmark. Compared to the original benchmark, there are **two key differences** here: - A chat model rather than a completion model is used. - The agents get pass/fail feedback about their implementations, and can keep trying until they succeed or run out of tokens or turns.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md
autogen
Running the tasks ``` autogenbench run Tasks/human_eval_two_agents.jsonl autogenbench tabulate Results/human_eval_two_agents ``` For faster development and iteration, a reduced HumanEval set is available via `Tasks/r_human_eval_two_agents.jsonl`, and contains only 26 problems of varying difficulty.
GitHub
autogen
autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md
autogen
References **Evaluating Large Language Models Trained on Code**<br/> Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba<br/> [https://arxiv.org/abs/2107.03374](https://arxiv.org/abs/2107.03374)
GitHub
autogen
autogen/autogen/agentchat/contrib/agent_eval/README.md
autogen
Agents for running the [AgentEval](https://microsoft.github.io/autogen/blog/2023/11/20/AgentEval/) pipeline.

AgentEval is a process for evaluating an LLM-based system's performance on a given task.

When given a task to evaluate and a few example runs, the critic and subcritic agents create evaluation criteria for evaluating a system's solution. Once the criteria have been created, the quantifier agent can evaluate subsequent task solutions based on the generated criteria.

For more information see: [AgentEval Integration Roadmap](https://github.com/microsoft/autogen/issues/2162)

See our [blog post](https://microsoft.github.io/autogen/blog/2024/06/21/AgentEval) for usage examples and general explanations.
GitHub
autogen
autogen/.github/PULL_REQUEST_TEMPLATE.md
autogen
<!-- Thank you for your contribution! Please review https://microsoft.github.io/autogen/docs/Contribute before opening a pull request. --> <!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
GitHub
autogen
autogen/.github/PULL_REQUEST_TEMPLATE.md
autogen
Why are these changes needed? <!-- Please give a short summary of the change and the problem this solves. -->
GitHub
autogen
autogen/.github/PULL_REQUEST_TEMPLATE.md
autogen
Related issue number <!-- For example: "Closes #1234" -->
GitHub
autogen
autogen/.github/PULL_REQUEST_TEMPLATE.md
autogen
Checks - [ ] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally. - [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR. - [ ] I've made sure all auto checks have passed.