GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
IPython Code Executor To use the IPython code executor, you need to install the `jupyter-client` and `ipykernel` packages: ```bash pip install "pyautogen[ipython]" ``` To use the IPython code executor: ```python from autogen import UserProxyAgent proxy = UserProxyAgent(name="proxy", code_execution_config={"executor": "ipython-embedded"}) ```
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
blendsearch `pyautogen<0.2` offers a cost-effective hyperparameter optimization technique [EcoOptiGen](https://arxiv.org/abs/2303.04673) for tuning Large Language Models. Please install with the [blendsearch] option to use it. ```bash pip install "pyautogen[blendsearch]<0.2" ``` Example notebooks: [Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) [Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb)
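As a rough sketch of what a tuning run looks like (loosely following the linked notebooks for `pyautogen<0.2`; the toy tuning data, the metric value, and the exact parameter names below are assumptions that may differ between releases):

```python
import autogen
from autogen.math_utils import eval_math_responses  # scores responses against the ground truth

# A toy stand-in for the real tuning split (the notebooks use ~20 MATH training problems).
tune_data = [
    {"problem": "What is 2 + 2?", "solution": "The answer is $\\boxed{4}$."},
]

config, analysis = autogen.Completion.tune(
    data=tune_data,
    metric="success",               # metric reported by eval_func
    mode="max",
    eval_func=eval_math_responses,
    inference_budget=0.02,          # average $ budget per instance at inference time
    optimization_budget=1,          # total $ budget for the tuning run
    num_samples=-1,                 # try as many configurations as the budget allows
)
print(config)  # the best model and inference parameters found
```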
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
retrievechat `pyautogen` supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it with ChromaDB. ```bash pip install "pyautogen[retrievechat]" ``` Alternatively `pyautogen` also supports PGVector and Qdrant which can be installed in place of ChromaDB, or alongside it. ```bash pip install "pyautogen[retrievechat-pgvector]" ``` ```bash pip install "pyautogen[retrievechat-qdrant]" ``` RetrieveChat can handle various types of documents. By default, it can process plain text and PDF files, including formats such as 'txt', 'json', 'csv', 'tsv', 'md', 'html', 'htm', 'rtf', 'rst', 'jsonl', 'log', 'xml', 'yaml', 'yml' and 'pdf'. If you install [unstructured](https://unstructured-io.github.io/unstructured/installation/full_installation.html) (`pip install "unstructured[all-docs]"`), additional document types such as 'docx', 'doc', 'odt', 'pptx', 'ppt', 'xlsx', 'eml', 'msg', 'epub' will also be supported. You can find a list of all supported document types by using `autogen.retrieve_utils.TEXT_FORMATS`. Example notebooks: [Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb) [Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb) [Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb)
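To check which document formats your installation accepts, a quick snippet like the following works:

```python
from autogen.retrieve_utils import TEXT_FORMATS

# Prints the document extensions RetrieveChat can ingest (the list grows if `unstructured` is installed).
print("Accepted file formats for `docs_path`:", TEXT_FORMATS)
```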
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
Teachability To use Teachability, please install AutoGen with the [teachable] option. ```bash pip install "pyautogen[teachable]" ``` Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb)
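A minimal sketch of attaching teachability to an agent, following the pattern in the linked notebook (the agent name, database path, and constructor arguments here are illustrative assumptions):

```python
import autogen
from autogen import ConversableAgent
from autogen.agentchat.contrib.capabilities.teachability import Teachability

config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# The agent that should remember facts and user preferences across chats.
teachable_agent = ConversableAgent(name="teachable_agent", llm_config={"config_list": config_list})

# Persist learned memos in a local database (example path) and attach the capability to the agent.
teachability = Teachability(reset_db=False, path_to_db_dir="./tmp/teachability_db")
teachability.add_to_agent(teachable_agent)
```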
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
Large Multimodal Model (LMM) Agents We offer the Multimodal Conversable Agent and the LLaVA Agent. Please install with the [lmm] option to use them. ```bash pip install "pyautogen[lmm]" ``` Example notebook: [LLaVA Agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb)
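As a quick orientation before the notebook, here is a minimal sketch of creating a multimodal agent (the agent name, the vision model filter, and the temperature are illustrative assumptions):

```python
import autogen
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent

# Assumes OAI_CONFIG_LIST contains an entry for a vision-capable model.
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4-vision-preview"]},
)

image_agent = MultimodalConversableAgent(
    name="image-explainer",
    llm_config={"config_list": config_list, "temperature": 0.5},
)
```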
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
mathchat `pyautogen<0.2` offers an experimental agent for math problem solving. Please install with the [mathchat] option to use it. ```bash pip install "pyautogen[mathchat]<0.2" ``` Example notebooks: [Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb)
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
Graph To use a graph in `GroupChat`, particularly for graph visualization, please install AutoGen with the [graph] option. ```bash pip install "pyautogen[graph]" ``` Example notebook: [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine)
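A minimal sketch of constraining speaker transitions with a graph, in the spirit of the linked FSM notebook (the placeholder agents and the keyword names are assumptions based on recent `GroupChat` releases):

```python
from autogen import ConversableAgent, GroupChat

# Minimal placeholder agents; in practice each would have an llm_config and a system message.
planner = ConversableAgent(name="planner", llm_config=False)
engineer = ConversableAgent(name="engineer", llm_config=False)
executor = ConversableAgent(name="executor", llm_config=False)

# Each key maps a speaker to the agents allowed to speak right after it.
allowed_transitions = {
    planner: [engineer],
    engineer: [executor, planner],
    executor: [planner],
}

group_chat = GroupChat(
    agents=[planner, engineer, executor],
    messages=[],
    max_round=12,
    allowed_or_disallowed_speaker_transitions=allowed_transitions,
    speaker_transitions_type="allowed",
)
```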
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
Long Context Handling AutoGen includes support for handling long textual contexts by leveraging the LLMLingua library for text compression. To enable this functionality, please install AutoGen with the `[long-context]` option: ```bash pip install "pyautogen[long-context]" ```
GitHub
autogen
autogen/website/docs/contributor-guide/tests.md
autogen
# Tests Tests are automatically run via GitHub Actions. There are two workflows: 1. [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml) 1. [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml) The first workflow is required to pass for all PRs (and it doesn't make any OpenAI calls). The second workflow is required for changes that affect the OpenAI tests (and it actually calls an LLM). The second workflow requires approval to run. When writing tests that require OpenAI calls, please use [`pytest.mark.skipif`](https://github.com/microsoft/autogen/blob/b1adac515931bf236ac59224269eeec683a162ba/test/oai/test_client.py#L19) to make them run only when the `openai` package is installed. If an additional dependency is required for a test, install the dependency for the corresponding Python version in [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml). Make sure all tests pass; this is required for the [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml) checks to pass.
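For the `pytest.mark.skipif` guard mentioned above, a minimal sketch of the pattern looks like this (the flag name and reason text are illustrative, not the repository's exact wording):

```python
import pytest

try:
    import openai  # noqa: F401
    skip_openai = False
except ImportError:
    skip_openai = True

# The test body runs only when the openai package is importable.
@pytest.mark.skipif(skip_openai, reason="openai is not installed")
def test_chat_completion():
    ...
```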
GitHub
autogen
autogen/website/docs/contributor-guide/tests.md
autogen
Running tests locally To run tests, install the [test] option: ```bash pip install -e ."[test]" ``` Then you can run the tests from the `test` folder using the following command: ```bash pytest test ``` Tests for the `autogen.agentchat.contrib` module may be skipped automatically if the required dependencies are not installed. Please consult the documentation for each contrib module to see what dependencies are required. See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#testing) for how to run notebook tests.
GitHub
autogen
autogen/website/docs/contributor-guide/tests.md
autogen
Skip flags for tests - `--skip-openai` for skipping tests that require access to OpenAI services. - `--skip-docker` for skipping tests that explicitly use docker - `--skip-redis` for skipping tests that require a Redis server For example, the following command will skip tests that require access to OpenAI and docker services: ```bash pytest test --skip-openai --skip-docker ```
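These flags are custom pytest options defined in the test suite's `conftest.py`. A generic sketch of how such a flag is typically wired up (illustrative only, not the repository's actual conftest):

```python
# conftest.py (illustrative sketch)
import pytest


def pytest_addoption(parser):
    parser.addoption("--skip-openai", action="store_true", help="skip tests that need OpenAI access")


@pytest.fixture
def skip_openai(request):
    return request.config.getoption("--skip-openai")


# In a test module:
# def test_completion(skip_openai):
#     if skip_openai:
#         pytest.skip("requested via --skip-openai")
```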
GitHub
autogen
autogen/website/docs/contributor-guide/tests.md
autogen
Coverage Any code you commit should not decrease coverage. To ensure your code maintains or increases coverage, use the following commands after installing the required test dependencies: ```bash pip install -e ."[test]" pytest test --cov-report=html ``` Pytest generates a code coverage report and creates an `htmlcov` directory containing an `index.html` file and other related files. Open `index.html` in any web browser to visualize and navigate through the coverage data interactively. This interactive visualization allows you to identify uncovered lines and review coverage statistics for individual files.
GitHub
autogen
autogen/website/docs/contributor-guide/pre-commit.md
autogen
# Pre-commit Run `pre-commit install` to install pre-commit into your git hooks. Before you commit, run `pre-commit run` to check if you meet the pre-commit requirements. If you use Windows (without WSL) and can't commit after installing pre-commit, you can run `pre-commit uninstall` to uninstall the hook. In WSL or Linux this is supposed to work.
GitHub
autogen
autogen/website/docs/contributor-guide/maintainer.md
autogen
# Guidance for Maintainers
GitHub
autogen
autogen/website/docs/contributor-guide/maintainer.md
autogen
General - Be a member of the community and treat everyone as a member. Be inclusive. - Help each other and encourage mutual help. - Actively post and respond. - Keep open communication. - Identify good maintainer candidates from active contributors.
GitHub
autogen
autogen/website/docs/contributor-guide/maintainer.md
autogen
Pull Requests - For a new PR, decide whether to close it without review. If not, find the right reviewers. One source to refer to is the roles on Discord. Another consideration is to ask users who can benefit from the PR to review it. - For an old PR, check the blocker: reviewer or PR creator. Try to unblock. Get additional help when needed. - When requesting changes, make sure you can check back in time because it blocks merging. - Make sure all the checks pass. - For changes that require running OpenAI tests, make sure the OpenAI tests pass too. Running these tests requires approval. - In general, suggest small PRs instead of one giant PR. - For a documentation change, request a snapshot of the compiled website, or compile it yourself to verify the format. - For new contributors who have not signed the contributing agreement, remind them to sign before reviewing. - For multiple PRs which may conflict with each other, coordinate them to figure out the right order. - Pay special attention to: - Breaking changes. Don’t make breaking changes unless necessary. Don’t merge to main until enough heads-up is provided and a new release is ready. - Test coverage decrease. - Changes that may cause performance degradation. Do a regression test when test suites are available. - Discourage **changes to the core library** when there is an alternative.
GitHub
autogen
autogen/website/docs/contributor-guide/maintainer.md
autogen
Issues and Discussions - For new issues, write a reply and apply a label if relevant. Ask on Discord when necessary. For roadmap issues, apply the roadmap label and encourage community discussion. Mention relevant experts when necessary. - For old issues, provide an update or close them. Ask on Discord when necessary. Encourage PR creation when relevant. - Use “good first issue” for easy fixes suitable for first-time contributors. - Use “task list” for issues that require multiple PRs. - For discussions, create an issue when relevant. Discuss on Discord when appropriate.
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
# Docker for Development For developers contributing to the AutoGen project, we offer a specialized Docker environment. This setup is designed to streamline the development process, ensuring that all contributors work within a consistent and well-equipped environment.
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Autogen Developer Image (autogen_dev_img) - **Purpose**: The `autogen_dev_img` is tailored for contributors to the AutoGen project. It includes a suite of tools and configurations that aid in the development and testing of new features or fixes. - **Usage**: This image is recommended for developers who intend to contribute code or documentation to AutoGen. - **Forking the Project**: It's advisable to fork the AutoGen GitHub project to your own repository. This allows you to make changes in a separate environment without affecting the main project. - **Updating Dockerfile**: Modify your copy of `Dockerfile` in the `dev` folder as needed for your development work. - **Submitting Pull Requests**: Once your changes are ready, submit a pull request from your branch to the upstream AutoGen GitHub project for review and integration. For more details on contributing, see the [AutoGen Contributing](https://microsoft.github.io/autogen/docs/Contribute) page.
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Building the Developer Docker Image - To build the developer Docker image (`autogen_dev_img`), use the following command: ```bash docker build -f .devcontainer/dev/Dockerfile -t autogen_dev_img https://github.com/microsoft/autogen.git#main ``` - To build the developer image from a specific Dockerfile in a branch other than main/master: ```bash # clone the branch you want to work out of git clone --branch {branch-name} https://github.com/microsoft/autogen.git # cd to your new directory cd autogen # build your Docker image docker build -f .devcontainer/dev/Dockerfile -t autogen_dev-srv_img . ```
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Using the Developer Docker Image Once you have built the `autogen_dev_img`, you can run it using the standard Docker commands. This will place you inside the containerized development environment where you can run tests, develop code, and ensure everything is functioning as expected before submitting your contributions. ```bash docker run -it -p 8081:3000 -v `pwd`/autogen-newcode:newstuff/ autogen_dev_img bash ``` - Note that `pwd` is shorthand for the present working directory, so any path after it is relative to that. If you want a more verbose method, you can replace "`pwd`/autogen-newcode" with the full path to your directory: ```bash docker run -it -p 8081:3000 -v /home/AutoGenDeveloper/autogen-newcode:newstuff/ autogen_dev_img bash ```
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Develop in Remote Container If you use VS Code, you can open the autogen folder in a [Container](https://code.visualstudio.com/docs/remote/containers). We have provided the configuration in [devcontainer](https://github.com/microsoft/autogen/blob/main/.devcontainer). It can be used in GitHub Codespaces too. Developing AutoGen in dev containers is recommended.
GitHub
autogen
autogen/website/docs/contributor-guide/documentation.md
autogen
# Documentation
GitHub
autogen
autogen/website/docs/contributor-guide/documentation.md
autogen
How to get a notebook rendered on the website See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#how-to-get-a-notebook-displayed-on-the-website) for instructions on how to get a notebook in the `notebook` directory rendered on the website.
GitHub
autogen
autogen/website/docs/contributor-guide/documentation.md
autogen
Build documentation locally 1\. To build and test documentation locally, first install [Node.js](https://nodejs.org/en/download/). For example, ```bash nvm install --lts ``` Then, install `yarn` and other required packages: ```bash npm install --global yarn pip install pydoc-markdown pyyaml termcolor ``` 2\. You also need to install quarto. Please click on the `Pre-release` tab from [this website](https://quarto.org/docs/download/) to download the latest version of `quarto` and install it. Ensure that the `quarto` version is `1.5.23` or higher. 3\. Finally, run the following commands to build: ```console cd website yarn install --frozen-lockfile --ignore-engines pydoc-markdown python process_notebooks.py render yarn start ``` The last command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
GitHub
autogen
autogen/website/docs/contributor-guide/documentation.md
autogen
Build with Docker To build and test documentation within a Docker container, use the Dockerfile in the `dev` folder as described above to build your image: ```bash docker build -f .devcontainer/dev/Dockerfile -t autogen_dev_img https://github.com/microsoft/autogen.git#main ``` Then start the container as shown below; this will log you in and ensure that container port 3000 is mapped to port 8081 on your local machine: ```bash docker run -it -p 8081:3000 -v `pwd`/autogen-newcode:newstuff/ autogen_dev_img bash ``` Once at the CLI inside Docker, run the following commands: ```bash cd website yarn install --frozen-lockfile --ignore-engines pydoc-markdown python process_notebooks.py render yarn start --host 0.0.0.0 --port 3000 ``` Once done, you should be able to access the documentation at `http://127.0.0.1:8081/autogen`.
GitHub
autogen
autogen/website/docs/contributor-guide/contributing.md
autogen
# Contributing to AutoGen The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. Together, we can build something truly remarkable. Possible contributions include but are not limited to: - Pushing patches. - Code review of pull requests. - Documentation, examples and test cases. - Readability improvements, e.g., improvements to docstrings and comments. - Community participation in [issues](https://github.com/microsoft/autogen/issues), [discussions](https://github.com/microsoft/autogen/discussions), [discord](https://aka.ms/autogen-dc), and [twitter](https://twitter.com/pyautogen). - Tutorials, blog posts, and talks that promote the project. - Sharing application scenarios and/or related research. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>. If you are new to GitHub, [here](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/) is a detailed help source on getting involved with development on GitHub. When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
GitHub
autogen
autogen/website/docs/contributor-guide/contributing.md
autogen
Roadmaps To see what we are working on and what we plan to work on, please check our [Roadmap Issues](https://aka.ms/autogen-roadmap).
GitHub
autogen
autogen/website/docs/contributor-guide/contributing.md
autogen
Becoming a Reviewer There is currently no formal reviewer solicitation process. Current reviewers identify reviewers from active contributors. If you are willing to become a reviewer, you are welcome to let us know on discord.
GitHub
autogen
autogen/website/docs/contributor-guide/contributing.md
autogen
Contact Maintainers The project is currently maintained by a [dynamic group of volunteers](https://butternut-swordtail-8a5.notion.site/410675be605442d3ada9a42eb4dfef30?v=fa5d0a79fd3d4c0f9c112951b2831cbb&pvs=4) from several different organizations. Contact project administrators Chi Wang and Qingyun Wu via auto-gen@outlook.com if you are interested in becoming a maintainer.
GitHub
autogen
autogen/website/docs/contributor-guide/file-bug-report.md
autogen
# File A Bug Report When you submit an issue to [GitHub](https://github.com/microsoft/autogen/issues), please do your best to follow these guidelines! This will make it a lot easier to provide you with good feedback: - The ideal bug report contains a short reproducible code snippet. This way anyone can try to reproduce the bug easily (see [this](https://stackoverflow.com/help/mcve) for more details). If your snippet is longer than around 50 lines, please link to a [gist](https://gist.github.com) or a GitHub repo. - If an exception is raised, please **provide the full traceback**. - Please include your **operating system type and version number**, as well as your **Python, autogen, scikit-learn versions**. The version of autogen can be found by running the following code snippet: ```python import autogen print(autogen.__version__) ``` - Please ensure all **code snippets and error messages are formatted in appropriate code blocks**. See [Creating and highlighting code blocks](https://help.github.com/articles/creating-and-highlighting-code-blocks) for more details.
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
# What Next? Now that you have learned the basics of AutoGen, you can start to build your own agents. Here are some ideas to get you started without going into the advanced topics: 1. **Chat with LLMs**: In [Human in the Loop](./human-in-the-loop) we covered the basic human-in-the-loop usage. You can try to hook up different LLMs using local model servers like [Ollama](https://github.com/ollama/ollama) and [LM Studio](https://lmstudio.ai/), and chat with them using the human-in-the-loop component of your human proxy agent (see the configuration sketch after this list). 2. **Prompt Engineering**: In [Code Executors](./code-executors) we covered the simple two-agent scenario using GPT-4 and a Python code executor. To make this scenario work for different LLMs and programming languages, you probably need to tune the system message of the code writer agent. The same applies to the other scenarios covered in this tutorial: you can also try to tune system messages for different LLMs. 3. **Complex Tasks**: In [Conversation Patterns](./conversation-patterns) we covered the basic conversation patterns. You can try to find other tasks that can be decomposed into these patterns, and leverage the code executors and tools to make the agents more powerful.
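A minimal configuration sketch for idea 1, assuming a local server that exposes an OpenAI-compatible endpoint (the model name, port, and endpoint path are illustrative; Ollama, for example, serves one at `http://localhost:11434/v1`):

```python
from autogen import ConversableAgent, UserProxyAgent

local_llm_config = {
    "config_list": [
        {
            "model": "llama3",                        # whichever model your local server is serving
            "base_url": "http://localhost:11434/v1",  # the server's OpenAI-compatible endpoint
            "api_key": "NULL",                        # placeholder; most local servers ignore it
        }
    ]
}

assistant = ConversableAgent(name="assistant", llm_config=local_llm_config)
user_proxy = UserProxyAgent(name="user", human_input_mode="ALWAYS", code_execution_config=False)

# You type the user proxy's replies, so this is a human-in-the-loop chat with the local LLM.
user_proxy.initiate_chat(assistant, message="Hello! What can you do?")
```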
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Dig Deeper - Read the [user guide](/docs/topics) to learn more - Read the examples and guides in the [notebooks section](/docs/notebooks) - Check [research](/docs/Research) and [blog](/blog)
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Get Help If you have any questions, you can ask in our [GitHub Discussions](https://github.com/microsoft/autogen/discussions), or join our [Discord Server](https://aka.ms/autogen-dc). [![](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat.png)](https://aka.ms/autogen-dc)
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Get Involved - Check out [Roadmap Issues](https://aka.ms/autogen-roadmap) to see what we are working on. - Contribute your work to our [gallery](/docs/Gallery) - Follow our [contribution guide](/docs/contributor-guide/contributing) to make a pull request to AutoGen - You can also share your work with the community on the Discord server.
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
--- title: Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH authors: sonichi tags: [LLM, GPT, research] --- ![level 2 algebra](img/level2algebra.png) **TL;DR:** * **Just by tuning the inference parameters like model, number of responses, temperature etc. without changing any model weights or prompt, the baseline accuracy of untuned gpt-4 can be improved by 20% in high school math competition problems.** * **For easy problems, the tuned gpt-3.5-turbo model vastly outperformed untuned gpt-4 in accuracy (e.g., 90% vs. 70%) and cost efficiency. For hard problems, the tuned gpt-4 is much more accurate (e.g., 35% vs. 20%) and less expensive than untuned gpt-4.** * **AutoGen can help with model selection, parameter tuning, and cost-saving in LLM applications.** Large language models (LLMs) are powerful tools that can generate natural language texts for various applications, such as chatbots, summarization, translation, and more. GPT-4 is currently the state of the art LLM in the world. Is model selection irrelevant? What about inference parameters? In this blog post, we will explore how model and inference parameter matter in LLM applications, using a case study for [MATH](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html), a benchmark for evaluating LLMs on advanced mathematical problem solving. MATH consists of 12K math competition problems from AMC-10, AMC-12 and AIME. Each problem is accompanied by a step-by-step solution. We will use AutoGen to automatically find the best model and inference parameter for LLMs on a given task and dataset given an inference budget, using a novel low-cost search & pruning strategy. AutoGen currently supports all the LLMs from OpenAI, such as GPT-3.5 and GPT-4. We will use AutoGen to perform model selection and inference parameter tuning. Then we compare the performance and inference cost on solving algebra problems with the untuned gpt-4. We will also analyze how different difficulty levels affect the results.
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
Experiment Setup We use AutoGen to select between the following models with a target inference budget of $0.02 per instance: - gpt-3.5-turbo, a relatively cheap model that powers the popular ChatGPT app - gpt-4, the state-of-the-art LLM that costs more than 10 times as much as gpt-3.5-turbo We adapt the models using 20 examples in the train set, using the problem statement as the input and generating the solution as the output. We use the following inference parameters: - temperature: The parameter that controls the randomness of the output text. A higher temperature means more diversity but less coherence. We search for the optimal temperature in the range of [0, 1]. - top_p: The parameter that controls the probability mass of the output tokens. Only tokens with a cumulative probability less than or equal to top_p are considered. A higher top_p means more diversity but less coherence. We search for the optimal top_p in the range of [0, 1]. - max_tokens: The maximum number of tokens that can be generated for each output. We search for the optimal max length in the range of [50, 1000]. - n: The number of responses to generate. We search for the optimal n in the range of [1, 100]. - prompt: We use the template: "{problem} Solve the problem carefully. Simplify your answer as much as possible. Put the final answer in \\boxed{{}}." where {problem} will be replaced by the math problem instance. In this experiment, when n > 1, we find the answer with the highest number of votes among all the responses and then select it as the final answer to compare with the ground truth (see the voting sketch below). For example, if n = 5 and 3 of the responses contain a final answer of 301 while 2 of the responses contain a final answer of 159, we choose 301 as the final answer. This can help with resolving potential errors due to randomness. We use the average accuracy and average inference cost as the metrics to evaluate performance over a dataset. The inference cost of a particular instance is measured by the price per 1K tokens and the number of tokens consumed.
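A self-contained sketch of that majority-vote step (the regex-based answer extraction is an illustrative assumption, not the exact code used in the experiments):

```python
import re
from collections import Counter


def extract_boxed_answer(text: str):
    """Return the content of the last \\boxed{...} in a response, if any."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None


def majority_answer(responses):
    """Pick the most frequent extracted answer among the n responses."""
    answers = [a for a in (extract_boxed_answer(r) for r in responses) if a is not None]
    return Counter(answers).most_common(1)[0][0] if answers else None


# With three responses ending in \boxed{301} and two ending in \boxed{159}, 301 wins the vote.
```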
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
Experiment Results The first figure in this blog post shows the average accuracy and average inference cost of each configuration on the level 2 Algebra test set. Surprisingly, the tuned gpt-3.5-turbo model is selected as a better model and it vastly outperforms untuned gpt-4 in accuracy (92% vs. 70%) with equal or 2.5 times higher inference budget. The same observation can be obtained on the level 3 Algebra test set. ![level 3 algebra](img/level3algebra.png) However, the selected model changes on level 4 Algebra. ![level 4 algebra](img/level4algebra.png) This time gpt-4 is selected as the best model. The tuned gpt-4 achieves much higher accuracy (56% vs. 44%) and lower cost than the untuned gpt-4. On level 5 the result is similar. ![level 5 algebra](img/level5algebra.png) We can see that AutoGen has found different optimal model and inference parameters for each subset of a particular level, which shows that these parameters matter in cost-sensitive LLM applications and need to be carefully tuned or adapted. An example notebook to run these experiments can be found at: https://github.com/microsoft/FLAML/blob/v1.2.1/notebook/autogen_chatgpt.ipynb. The experiments were run when AutoGen was a subpackage in FLAML.
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
Analysis and Discussion While gpt-3.5-turbo demonstrates competitive accuracy with voted answers in relatively easy algebra problems under the same inference budget, gpt-4 is a better choice for the most difficult problems. In general, through parameter tuning and model selection, we can identify opportunities to save the expensive model for the more challenging tasks and improve the overall effectiveness of a budget-constrained system. There are many other alternative ways of solving math problems, which we have not covered in this blog post. When there are choices beyond the inference parameters, they can generally be tuned via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function). The need for model selection, parameter tuning and cost saving is not specific to math problems. The [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) project is an example where high cost can easily prevent a generic complex task from being accomplished, as it needs many LLM inference calls.
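For the `flaml.tune` route mentioned above, a rough sketch looks like the following (the search space and the `run_my_solver` stand-in are hypothetical placeholders):

```python
from flaml import tune


def run_my_solver(num_agents: int, temperature: float) -> float:
    """Hypothetical stand-in: run your math-solving pipeline and return its accuracy."""
    return 0.5 + 0.1 * num_agents - abs(temperature - 0.7)  # dummy score for illustration


def evaluate(config):
    score = run_my_solver(num_agents=config["num_agents"], temperature=config["temperature"])
    return {"score": score}


analysis = tune.run(
    evaluate,
    config={
        "num_agents": tune.randint(1, 4),  # search over 1-3 agents
        "temperature": tune.uniform(0, 1),
    },
    metric="score",
    mode="max",
    num_samples=20,
)
print(analysis.best_config)
```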
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
For Further Reading * [Research paper about the tuning technique](https://arxiv.org/abs/2303.04673) * [Documentation about inference tuning](/docs/Use-Cases/enhanced_inference) *Do you have any experience to share about LLM applications? Would you like to see more support for or research on LLM optimization or automation? Please join our [Discord](https://aka.ms/autogen-dc) server for discussion.*
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
--- title: Use AutoGen for Local LLMs authors: jialeliu tags: [LLM] --- **TL;DR:** We demonstrate how to use autogen for local LLM applications. As an example, we will initiate an endpoint using [FastChat](https://github.com/lm-sys/FastChat) and perform inference on [ChatGLMv2-6b](https://github.com/THUDM/ChatGLM2-6B).
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
Preparations ### Clone FastChat FastChat provides OpenAI-compatible APIs for its supported models, so you can use FastChat as a local drop-in replacement for OpenAI APIs. However, its code needs minor modifications in order to function properly. ```bash git clone https://github.com/lm-sys/FastChat.git cd FastChat ``` ### Download checkpoint ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters. ChatGLM2-6B is its second-generation version. Before downloading from the HuggingFace Hub, you need to have Git LFS [installed](https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage). ```bash git clone https://huggingface.co/THUDM/chatglm2-6b ```
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
Initiate server First, launch the controller: ```bash python -m fastchat.serve.controller ``` Then, launch the model worker(s): ```bash python -m fastchat.serve.model_worker --model-path chatglm2-6b ``` Finally, launch the RESTful API server: ```bash python -m fastchat.serve.openai_api_server --host localhost --port 8000 ``` Normally this will work. However, if you encounter an error like [this](https://github.com/lm-sys/FastChat/issues/1641), commenting out all the lines containing `finish_reason` in `fastchat/protocol/api_protocol.py` and `fastchat/protocol/openai_api_protocol.py` will fix the problem. The modified code looks like: ```python class CompletionResponseChoice(BaseModel): index: int text: str logprobs: Optional[int] = None # finish_reason: Optional[Literal["stop", "length"]] class CompletionResponseStreamChoice(BaseModel): index: int text: str logprobs: Optional[float] = None # finish_reason: Optional[Literal["stop", "length"]] = None ```
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
Interact with model using `oai.Completion` (requires openai<1) Now the models can be directly accessed through openai-python library as well as `autogen.oai.Completion` and `autogen.oai.ChatCompletion`. ```python from autogen import oai # create a text completion request response = oai.Completion.create( config_list=[ { "model": "chatglm2-6b", "base_url": "http://localhost:8000/v1", "api_type": "openai", "api_key": "NULL", # just a placeholder } ], prompt="Hi", ) print(response) # create a chat completion request response = oai.ChatCompletion.create( config_list=[ { "model": "chatglm2-6b", "base_url": "http://localhost:8000/v1", "api_type": "openai", "api_key": "NULL", } ], messages=[{"role": "user", "content": "Hi"}] ) print(response) ``` If you would like to switch to different models, download their checkpoints and specify model path when launching model worker(s).
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
interacting with multiple local LLMs If you would like to interact with multiple LLMs on your local machine, replace the `model_worker` step above with a multi model variant: ```bash python -m fastchat.serve.multi_model_worker \ --model-path lmsys/vicuna-7b-v1.3 \ --model-names vicuna-7b-v1.3 \ --model-path chatglm2-6b \ --model-names chatglm2-6b ``` The inference code would be: ```python from autogen import oai # create a chat completion request response = oai.ChatCompletion.create( config_list=[ { "model": "chatglm2-6b", "base_url": "http://localhost:8000/v1", "api_type": "openai", "api_key": "NULL", }, { "model": "vicuna-7b-v1.3", "base_url": "http://localhost:8000/v1", "api_type": "openai", "api_key": "NULL", } ], messages=[{"role": "user", "content": "Hi"}] ) print(response) ```
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
For Further Reading * [Documentation](/docs/Getting-Started) about `autogen`. * [Documentation](https://github.com/lm-sys/FastChat) about FastChat.
GitHub
autogen
autogen/notebook/contributing.md
autogen
# Contributing
GitHub
autogen
autogen/notebook/contributing.md
autogen
How to get a notebook displayed on the website In the notebook metadata set the `tags` and `description` `front_matter` properties. For example: ```json { "...": "...", "metadata": { "...": "...", "front_matter": { "tags": ["code generation", "debugging"], "description": "Use conversable language learning model agents to solve tasks and provide automatic feedback through a comprehensive example of writing, executing, and debugging Python code to compare stock price changes." } } } ``` **Note**: Notebook metadata can be edited by opening the notebook in a text editor (Or "Open With..." -> "Text Editor" in VSCode) The `tags` field is a list of tags that will be used to categorize the notebook. The `description` field is a brief description of the notebook.
GitHub
autogen
autogen/notebook/contributing.md
autogen
Best practices for authoring notebooks The following points are best practices for authoring notebooks to ensure consistency and ease of use for the website. - The Colab button will be automatically generated on the website for all notebooks where it is missing. Going forward, it is recommended to not include the Colab button in the notebook itself. - Ensure the header is a `h1` header, - `#` - Don't put anything between the yaml and the header ### Consistency for installation and LLM config You don't need to explain in depth how to install AutoGen. Unless there are specific instructions for the notebook just use the following markdown snippet: `````` ````{=mdx} :::info Requirements Install `pyautogen`: ```bash pip install pyautogen ``` For more information, please refer to the [installation guide](/docs/installation/). ::: ```` `````` Or if extras are needed: `````` ````{=mdx} :::info Requirements Some extra dependencies are needed for this notebook, which can be installed via pip: ```bash pip install pyautogen[retrievechat] flaml[automl] ``` For more information, please refer to the [installation guide](/docs/installation/). ::: ```` `````` When specifying the config list, to ensure consistency it is best to use approximately the following code: ```python import autogen config_list = autogen.config_list_from_json( env_or_file="OAI_CONFIG_LIST", ) ``` Then after the code cell where this is used, include the following markdown snippet: `````` ````{=mdx} :::tip Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration). ::: ```` ``````
GitHub
autogen
autogen/notebook/contributing.md
autogen
Testing Notebooks can be tested by running: ```sh python website/process_notebooks.py test ``` This will automatically scan for all notebooks in the notebook/ and website/ dirs. To test a specific notebook pass its path: ```sh python website/process_notebooks.py test notebook/agentchat_logging.ipynb ``` Options: - `--timeout` - timeout for a single notebook - `--exit-on-first-fail` - stop executing further notebooks after the first one fails ### Skip tests If a notebook needs to be skipped then add to the notebook metadata: ```json { "...": "...", "metadata": { "skip_test": "REASON" } } ```
GitHub
autogen
autogen/notebook/contributing.md
autogen
Metadata fields All possible metadata fields are as follows: ```json { "...": "...", "metadata": { "...": "...", "front_matter": { "tags": "List[str] - List of tags to categorize the notebook", "description": "str - Brief description of the notebook", }, "skip_test": "str - Reason for skipping the test. If present, the notebook will be skipped during testing", "skip_render": "str - Reason for skipping rendering the notebook. If present, the notebook will be left out of the website.", "extra_files_to_copy": "List[str] - List of files to copy to the website. The paths are relative to the notebook directory", } } ```
GitHub
autogen
autogen/dotnet/README.md
autogen
### AutoGen for .NET [![dotnet-ci](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml/badge.svg)](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) > [!NOTE] > Nightly build is available at: > - ![Static Badge](https://img.shields.io/badge/public-blue?style=flat) ![Static Badge](https://img.shields.io/badge/nightly-yellow?style=flat) ![Static Badge](https://img.shields.io/badge/github-grey?style=flat): https://nuget.pkg.github.com/microsoft/index.json > - ![Static Badge](https://img.shields.io/badge/public-blue?style=flat) ![Static Badge](https://img.shields.io/badge/nightly-yellow?style=flat) ![Static Badge](https://img.shields.io/badge/myget-grey?style=flat): https://www.myget.org/F/agentchat/api/v3/index.json > - ![Static Badge](https://img.shields.io/badge/internal-blue?style=flat) ![Static Badge](https://img.shields.io/badge/nightly-yellow?style=flat) ![Static Badge](https://img.shields.io/badge/azure_devops-grey?style=flat) : https://devdiv.pkgs.visualstudio.com/DevDiv/_packaging/AutoGen/nuget/v3/index.json Firstly, following the [installation guide](./website/articles/Installation.md) to install AutoGen packages. Then you can start with the following code snippet to create a conversable agent and chat with it. ```csharp using AutoGen; using AutoGen.OpenAI; var openAIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? throw new Exception("Please set OPENAI_API_KEY environment variable."); var gpt35Config = new OpenAIConfig(openAIKey, "gpt-3.5-turbo"); var assistantAgent = new AssistantAgent( name: "assistant", systemMessage: "You are an assistant that help user to do some tasks.", llmConfig: new ConversableAgentConfig { Temperature = 0, ConfigList = [gpt35Config], }) .RegisterPrintMessage(); // register a hook to print message nicely to console // set human input mode to ALWAYS so that user always provide input var userProxyAgent = new UserProxyAgent( name: "user", humanInputMode: ConversableAgent.HumanInputMode.ALWAYS) .RegisterPrintMessage(); // start the conversation await userProxyAgent.InitiateChatAsync( receiver: assistantAgent, message: "Hey assistant, please do me a favor.", maxRound: 10); ``` #### Samples You can find more examples under the [sample project](https://github.com/microsoft/autogen/tree/dotnet/dotnet/sample/AutoGen.BasicSamples). #### Functionality - ConversableAgent - [x] function call - [x] code execution (dotnet only, powered by [`dotnet-interactive`](https://github.com/dotnet/interactive)) - Agent communication - [x] Two-agent chat - [x] Group chat - [ ] Enhanced LLM Inferences - Exclusive for dotnet - [x] Source generator for type-safe function definition generation #### Update log ##### Update on 0.0.11 (2024-03-26) - Add link to Discord channel in nuget's readme.md - Document improvements ##### Update on 0.0.10 (2024-03-12) - Rename `Workflow` to `Graph` - Rename `AddInitializeMessage` to `SendIntroduction` - Rename `SequentialGroupChat` to `RoundRobinGroupChat` ##### Update on 0.0.9 (2024-03-02) - Refactor over @AutoGen.Message and introducing `TextMessage`, `ImageMessage`, `MultiModalMessage` and so on. PR [#1676](https://github.com/microsoft/autogen/pull/1676) - Add `AutoGen.SemanticKernel` to support seamless integration with Semantic Kernel - Move the agent contract abstraction to `AutoGen.Core` package. 
The `AutoGen.Core` package provides the abstractions for message types, agents, and group chat, and doesn't contain dependencies on `Azure.AI.OpenAI` or `Semantic Kernel`. This is useful when you want to leverage AutoGen's abstraction only and avoid introducing any other dependencies. - Move `GPTAgent`, `OpenAIChatAgent` and all OpenAI dependencies to `AutoGen.OpenAI` ##### Update on 0.0.8 (2024-02-28) - Fix [#1804](https://github.com/microsoft/autogen/pull/1804) - Streaming support for IAgent [#1656](https://github.com/microsoft/autogen/pull/1656) - Streaming support for middleware via `MiddlewareStreamingAgent` [#1656](https://github.com/microsoft/autogen/pull/1656) - Graph chat support with conditional transition workflow [#1761](https://github.com/microsoft/autogen/pull/1761) - AutoGen.SourceGenerator: Generate `FunctionContract` from `FunctionAttribute` [#1736](https://github.com/microsoft/autogen/pull/1736) ##### Update on 0.0.7 (2024-02-11) - Add `AutoGen.LMStudio` to support consuming the openai-like API from an LM Studio local server ##### Update on 0.0.6 (2024-01-23) - Add `MiddlewareAgent` - Use `MiddlewareAgent` to implement existing agent hooks (RegisterPreProcess, RegisterPostProcess, RegisterReply) - Remove `AutoReplyAgent`, `PreProcessAgent`, `PostProcessAgent` because they are replaced by `MiddlewareAgent` ##### Update on 0.0.5 - Simplify the `IAgent` interface by removing the `ChatLLM` property - Add `GenerateReplyOptions` to `IAgent.GenerateReplyAsync`, which allows the user to specify or override options when generating a reply ##### Update on 0.0.4 - Move out the dependency on Semantic Kernel - Add type `IChatLLM` as a connector to LLMs ##### Update on 0.0.3 - In AutoGen.SourceGenerator, rename FunctionAttribution to FunctionAttribute - In AutoGen, refactor over ConversationAgent, UserProxyAgent, and AssistantAgent ##### Update on 0.0.2 - Update Azure.AI.OpenAI to 1.0.0-beta.12 - Update Semantic Kernel to 1.0.1
GitHub
autogen
autogen/dotnet/nuget/NUGET.md
autogen
### About AutoGen for .NET `AutoGen for .NET` is the official .NET SDK for [AutoGen](https://github.com/microsoft/autogen). It enables you to create LLM agents and construct multi-agent workflows with ease. It also provides integration with popular platforms like OpenAI, Semantic Kernel, and LM Studio. ### Getting started - Find documents and examples on our [document site](https://microsoft.github.io/autogen-for-net/) - Join our [Discord channel](https://discord.gg/pAbnFJrkgZ) to get help and discuss with the community - Report a bug or request a feature by creating a new issue in our [GitHub repo](https://github.com/microsoft/autogen) - Consume the nightly build package from one of the [nightly build feeds](https://microsoft.github.io/autogen-for-net/articles/Installation.html#nighly-build)
GitHub
autogen
autogen/dotnet/website/README.md
autogen
## How to build and run the website ### Prerequisites - dotnet 7.0 or later ### Build First, go to the autogen/dotnet folder and run the following commands to build the website: ```bash dotnet tool restore dotnet tool run docfx website/docfx.json --serve ``` After the commands are executed, you can open your browser and navigate to `http://localhost:8080` to view the website.
GitHub
autogen
autogen/dotnet/website/index.md
autogen
[!INCLUDE [](./articles/getting-start.md)]
GitHub
autogen
autogen/dotnet/website/articles/Create-a-user-proxy-agent.md
autogen
## UserProxyAgent [`UserProxyAgent`](../api/AutoGen.UserProxyAgent.yml) is a special type of agent that can be used to proxy user input to another agent or group of agents. It supports the following human input modes: - `ALWAYS`: Always ask the user for input. - `NEVER`: Never ask the user for input. In this mode, the agent will use the default response (if any) to respond to the message, or use the underlying LLM to generate a response if one is provided. - `AUTO`: Only ask the user for input when the conversation is terminated by the other agent(s). Otherwise, use the default response (if any) to respond to the message, or use the underlying LLM to generate a response if one is provided. > [!TIP] > You can also set up `humanInputMode` when creating `AssistantAgent` to enable/disable human input. `UserProxyAgent` is equivalent to `AssistantAgent` with `humanInputMode` set to `ALWAYS`. Similarly, `AssistantAgent` is equivalent to `UserProxyAgent` with `humanInputMode` set to `NEVER`. ### Create a `UserProxyAgent` with `HumanInputMode` set to `ALWAYS` [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/UserProxyAgentCodeSnippet.cs?name=code_snippet_1)] When running the code, the user proxy agent will ask the user for input and use that input as its response. ![code output](../images/articles/CreateUserProxyAgent/image-1.png)
GitHub
autogen
autogen/dotnet/website/articles/MistralChatAgent-use-function-call.md
autogen
## Use tool in MistralChatAgent The following example shows how to enable tool support in @AutoGen.Mistral.MistralClientAgent by creating a `GetWeatherAsync` function and passing it to the agent. Firstly, you need to install the following packages: ```bash dotnet add package AutoGen.Mistral dotnet add package AutoGen.SourceGenerator ``` > [!Note] > Tool support is only available in some mistral models. Please refer to the [link](https://docs.mistral.ai/capabilities/function_calling/#available-models) for tool call support in mistral models. > [!Note] > The `AutoGen.SourceGenerator` package carries a source generator that adds support for type-safe function definition generation. For more information, please check out [Create type-safe function](./Create-type-safe-function-call.md). > [!NOTE] > If you are using VSCode as your editor, you may need to restart the editor to see the generated code. Import the required namespace: [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=using_statement)] Then define a public partial `MistralAgentFunction` class and a `GetWeather` method. The `GetWeather` method is a simple function that returns the weather of a given location and is marked with @AutoGen.Core.FunctionAttribute. Marking the class as `public partial` together with the @AutoGen.Core.FunctionAttribute attribute allows the source generator to generate the @AutoGen.Core.FunctionContract for the `GetWeather` method. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=weather_function)] Then create an @AutoGen.Mistral.MistralClientAgent and register it with @AutoGen.Mistral.Extension.MistralAgentExtension.RegisterMessageConnector* so it can support @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage. These message types are necessary to use @AutoGen.Core.FunctionCallMiddleware, which provides support for processing and invoking function calls. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=create_mistral_function_call_agent)] Then create an @AutoGen.Core.FunctionCallMiddleware with the `GetWeather` function. When creating the middleware, we also pass a `functionMap` object, which means the function will be automatically invoked when the agent replies with a `GetWeather` function call. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=create_get_weather_function_call_middleware)] After the function call middleware is created, register it with the agent so the `GetWeather` function will be passed to the agent during chat completion. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=register_function_call_middleware)] Finally, you can chat with the @AutoGen.Mistral.MistralClientAgent about the weather! The agent will automatically invoke the `GetWeather` function to "get" the weather information and return the result. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=send_message_with_function_call)]
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
This example shows how to use function calls with local LLM models, using [Ollama](https://ollama.com/) as the local model provider and [LiteLLM](https://docs.litellm.ai/docs/) as a proxy server that provides an openai-api compatible interface. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs) To run this example, the following prerequisites are required: - Install [Ollama](https://ollama.com/) and [LiteLLM](https://docs.litellm.ai/docs/) on your local machine. - A local model that supports function calls. In this example, `dolphincoder:latest` is used.
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install Ollama and pull `dolphincoder:latest` model First, install Ollama by following the instructions on the [Ollama website](https://ollama.com/). After installing Ollama, pull the `dolphincoder:latest` model by running the following command: ```bash ollama pull dolphincoder:latest ```
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install LiteLLM and start the proxy server You can install LiteLLM by following the instructions on the [LiteLLM website](https://docs.litellm.ai/docs/). ```bash pip install 'litellm[proxy]' ``` Then, start the proxy server by running the following command: ```bash litellm --model ollama_chat/dolphincoder --port 4000 ``` This will start an openai-api compatible proxy server at `http://localhost:4000`. You can verify if the server is running by observing the following output in the terminal: ```bash #------------------------------------------------------------# # # # 'The worst thing about this product is...' # # https://github.com/BerriAI/litellm/issues/new # # # #------------------------------------------------------------# INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit) ```
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install AutoGen and AutoGen.SourceGenerator In your project, install the AutoGen and AutoGen.SourceGenerator package using the following command: ```bash dotnet add package AutoGen dotnet add package AutoGen.SourceGenerator ``` The `AutoGen.SourceGenerator` package is used to automatically generate type-safe `FunctionContract` instead of manually defining them. For more information, please check out [Create type-safe function](Create-type-safe-function-call.md). And in your project file, enable structural xml document support by setting the `GenerateDocumentationFile` property to `true`: ```xml <PropertyGroup> <!-- This enables structural xml document support --> <GenerateDocumentationFile>true</GenerateDocumentationFile> </PropertyGroup> ```
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Define `WeatherReport` function and create @AutoGen.Core.FunctionCallMiddleware Create a `public partial` class to host the methods you want to use in AutoGen agents. The method has to be a `public` instance method and its return type must be `Task<string>`. After the methods are defined, mark them with `AutoGen.Core.FunctionAttribute` attribute. [!code-csharp[Define WeatherReport function](../../sample/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Function)] Then create a @AutoGen.Core.FunctionCallMiddleware and add the `WeatherReport` function to the middleware. The middleware will pass the `FunctionContract` to the agent when generating a response, and process the tool call response when receiving a `ToolCallMessage`. [!code-csharp[Define WeatherReport function](../../sample/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Create_tools)]
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Create @AutoGen.OpenAI.OpenAIChatAgent with `GetWeatherReport` tool and chat with it Because the LiteLLM proxy server is openai-api compatible, we can use @AutoGen.OpenAI.OpenAIChatAgent to connect to it as a third-party openai-api provider. The agent is also registered with a @AutoGen.Core.FunctionCallMiddleware which contains the `WeatherReport` tool. Therefore, the agent can call the `WeatherReport` tool when generating a response. [!code-csharp[Create an agent with tools](../../sample/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Create_Agent)] The reply from the agent will be similar to the following: ```bash AggregateMessage from assistant -------------------- ToolCallMessage: ToolCallMessage from assistant -------------------- - GetWeatherAsync: {"city": "new york"} -------------------- ToolCallResultMessage: ToolCallResultMessage from assistant -------------------- - GetWeatherAsync: The weather in new york is 72 degrees and sunny. -------------------- ```
GitHub
autogen
autogen/dotnet/website/articles/Group-chat-overview.md
autogen
@AutoGen.Core.IGroupChat is a fundamental feature in AutoGen. It provides a way to organize multiple agents under the same context and have them work together to resolve a given task. In AutoGen, there are two types of group chat: - @AutoGen.Core.RoundRobinGroupChat : This group chat runs agents in a round-robin sequence. The chat history plus the most recent reply from the previous agent will be passed to the next agent. - @AutoGen.Core.GroupChat : This group chat provides a more dynamic yet controllable way to determine the next speaker agent. You can either use an LLM agent as the group admin, or use a @AutoGen.Core.Graph, which was introduced by [this PR](https://github.com/microsoft/autogen/pull/1761), or both to determine the next speaker agent. > [!NOTE] > In @AutoGen.Core.GroupChat, when only the group admin is used to determine the next speaker agent, it's recommended to use a more powerful LLM model, such as `gpt-4`, to ensure the best experience.
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
`AutoGen` provides a built-in feature to run code snippets from agent responses. Currently, the following languages are supported: - dotnet More languages will be supported in the future.
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
What is a code snippet? A code snippet in agent response is a code block with a language identifier. For example: [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_3)]
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
Why is running code snippets useful? The ability to run code snippets can greatly extend the capability of an agent, because it enables the agent to resolve tasks by writing and running code, which is much more powerful than just returning a text response. For example, in a data analysis scenario, an agent can resolve a task like "What is the average of the sales amount of the last 7 days?" by first writing a code snippet that queries the sales amount of the last 7 days and calculates the average, and then running the code snippet to get the result. > [!WARNING] > Running arbitrary code snippets from agent responses could bring risks to your system. Use this feature with caution.
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
How to run a dotnet code snippet?

The built-in support for running dotnet code snippets is provided by [dotnet-interactive](https://github.com/dotnet/interactive). To run a dotnet code snippet, you need to install the following package to your project, which provides the integration with dotnet-interactive:

```xml
<PackageReference Include="AutoGen.DotnetInteractive" />
```

Then you can use @AutoGen.DotnetInteractive.AgentExtension.RegisterDotnetCodeBlockExectionHook(AutoGen.IAgent,InteractiveService,System.String,System.String) to register a `reply hook` that runs dotnet code snippets. The hook will check if a csharp code snippet is present in the most recent message in the history, and run the code snippet if it is present.

The following code snippet shows how to register a dotnet code snippet execution hook:

[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_0_1)]
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_1)]
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_2)]
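For orientation, the wiring looks roughly like the sketch below. The `InteractiveService` constructor argument, the `StartAsync` call, and the agent used here are assumptions for illustration; check the linked snippets for the exact signatures:

```csharp
using System.IO;
using AutoGen;
using AutoGen.DotnetInteractive;

// Rough, hypothetical sketch: start a dotnet-interactive backed service
// and attach the code-block execution hook to an agent.
var workDir = Path.Combine(Path.GetTempPath(), "dotnet-interactive");
Directory.CreateDirectory(workDir);

using var interactiveService = new InteractiveService(workDir);
await interactiveService.StartAsync(workDir);

var runner = new AssistantAgent(name: "runner")
    .RegisterDotnetCodeBlockExectionHook(interactiveService);
```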
GitHub
autogen
autogen/dotnet/website/articles/Consume-LLM-server-from-LM-Studio.md
autogen
## Consume LLM server from LM Studio
You can use @AutoGen.LMStudio.LMStudioAgent from the `AutoGen.LMStudio` package to consume the openai-like API from an LM Studio local server.

### What's LM Studio
[LM Studio](https://lmstudio.ai/) is an app that allows you to deploy and run inference on hundreds of thousands of open-source language models on your local machine. It provides an in-app chat UI plus an openai-like API to interact with the language model programmatically.

### Installation
- Install LM Studio if you haven't done so. You can find the installation guide [here](https://lmstudio.ai/)
- Add `AutoGen.LMStudio` to your project.
```xml
<ItemGroup>
    <PackageReference Include="AutoGen.LMStudio" Version="AUTOGEN_LMSTUDIO_VERSION" />
</ItemGroup>
```

### Usage
The following code shows how to use `LMStudioAgent` to write a piece of C# code that calculates the 100th Fibonacci number. Before running the code, make sure you have the LM Studio local server running on `localhost:1234`.

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example08_LMStudio.cs?name=lmstudio_using_statements)]
[!code-csharp[](../../sample/AutoGen.BasicSamples/Example08_LMStudio.cs?name=lmstudio_example_1)]
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-function-call.md
autogen
The following example shows how to create a `GetWeatherAsync` function and pass it to @AutoGen.OpenAI.OpenAIChatAgent.

Firstly, you need to install the following packages:
```xml
<ItemGroup>
    <PackageReference Include="AutoGen.OpenAI" Version="AUTOGEN_VERSION" />
    <PackageReference Include="AutoGen.SourceGenerator" Version="AUTOGEN_VERSION" />
</ItemGroup>
```

> [!Note]
> The `AutoGen.SourceGenerator` package carries a source generator that adds support for type-safe function definition generation. For more information, please check out [Create type-safe function](./Create-type-safe-function-call.md).

> [!NOTE]
> If you are using VSCode as your editor, you may need to restart the editor to see the generated code.

Then, import the required namespaces:
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=using_statement)]

Next, define a public partial class `Function` with a `GetWeather` method:
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=weather_function)]

Then, create an @AutoGen.OpenAI.OpenAIChatAgent and register it with @AutoGen.OpenAI.OpenAIChatRequestMessageConnector so it can support @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage. These message types are necessary to use @AutoGen.Core.FunctionCallMiddleware, which provides support for processing and invoking function calls.
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=openai_chat_agent_get_weather_function_call)]

Then, create an @AutoGen.Core.FunctionCallMiddleware with the `GetWeather` function and register it with the agent above. When creating the middleware, we also pass a `functionMap` to @AutoGen.Core.FunctionCallMiddleware, which means the function will be automatically invoked when the agent replies with a `GetWeather` function call.
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=create_function_call_middleware)]

Finally, you can chat with the @AutoGen.OpenAI.OpenAIChatAgent and invoke the `GetWeather` function.
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=chat_agent_send_function_call)]
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
## Use function call in AutoGen agent

Typically, there are three ways to pass a function definition to an agent to enable function call:
- Pass function definitions when creating an agent. This only works if the agent supports passing function definitions via its constructor.
- Pass function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent.
- Register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls.

> [!NOTE]
> To use function call, the underlying LLM model must support function call as well for the best experience. If the model does not support function call, it's likely that the function call will be ignored and the model will reply with a normal text response even if a function call is passed to it.
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Pass function definitions when creating an agent

In some agents, like @AutoGen.AssistantAgent or @AutoGen.OpenAI.GPTAgent, you can pass function definitions when creating the agent.

Suppose the `TypeSafeFunctionCall` is defined in the following code snippet:
[!code-csharp[TypeSafeFunctionCall](../../sample/AutoGen.BasicSamples/CodeSnippet/TypeSafeFunctionCallCodeSnippet.cs?name=weather_report)]

You can then pass the `WeatherReport` function to the agent when creating it:
[!code-csharp[assistant agent](../../sample/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=code_snippet_4)]
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Passing function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent You can also pass function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent. This is useful when you want to override the function definitions passed to the agent when creating it. [!code-csharp[assistant agent](../../sample/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=overrider_function_contract)]
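As a rough sketch of what that override can look like (assuming @AutoGen.Core.GenerateReplyOptions exposes a `Functions` property that accepts the generated function contracts, and that `agent` and `chatHistory` already exist from the previous steps; the linked snippet shows the exact code):

```csharp
using AutoGen.Core;

// Hypothetical sketch: override the functions for a single call.
var function = new TypeSafeFunctionCall();
var reply = await agent.GenerateReplyAsync(
    messages: chatHistory,
    options: new GenerateReplyOptions
    {
        // WeatherReportFunctionContract is assumed to be generated by AutoGen.SourceGenerator.
        Functions = new[] { function.WeatherReportFunctionContract },
    });
```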
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls

You can also register an agent with @AutoGen.Core.FunctionCallMiddleware. This is useful when you want to process and invoke function calls in a more flexible way.

[!code-csharp[assistant agent](../../sample/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=register_function_call_middleware)]
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Invoke function call inside an agent

To invoke a function instead of returning the function call object, you can pass the function call wrapper to the agent via `functionMap`. For example, you can pass the `WeatherReportWrapper` to the agent like this:

[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=code_snippet_6)]

When a function call object is returned, the agent will invoke the function and use the return value as the response rather than returning the function call object.

[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=code_snippet_6_1)]
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Invoke function call by another agent You can also use another agent to invoke the function call from one agent. This is a useful pattern in two-agent chat, where one agent is used as a function proxy to invoke the function call from another agent. Once the function call is invoked, the result can be returned to the original agent for further processing. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=two_agent_weather_chat)]
GitHub
autogen
autogen/dotnet/website/articles/Create-your-own-agent.md
autogen
## Coming soon
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
The following example shows how to enable JSON mode in @AutoGen.OpenAI.OpenAIChatAgent. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.OpenAI.Sample/Use_Json_Mode.cs)
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
What is JSON mode?

JSON mode is a feature of the OpenAI API which allows you to instruct the model to always respond with a valid JSON object. This is useful when you want to constrain the model output to JSON format only.

> [!NOTE]
> Currently, JSON mode is only supported by `gpt-4-turbo-preview` and `gpt-3.5-turbo-0125`. For more information (and limitations) about JSON mode, please visit [OpenAI API documentation](https://platform.openai.com/docs/guides/text-generation/json-mode).
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
How to enable JSON mode in OpenAIChatAgent

To enable JSON mode for @AutoGen.OpenAI.OpenAIChatAgent, set `responseFormat` to `ChatCompletionsResponseFormat.JsonObject` when creating the agent. Note that when enabling JSON mode, you also need to instruct the agent to output JSON format in its system message.

[!code-csharp[](../../sample/AutoGen.OpenAI.Sample/Use_Json_Mode.cs?name=create_agent)]

After enabling JSON mode, the `openAIClientAgent` will always respond in JSON format when it receives a message.

[!code-csharp[](../../sample/AutoGen.OpenAI.Sample/Use_Json_Mode.cs?name=chat_with_agent)]

When running the example, the output from `openAIClientAgent` will be a valid JSON object which can be parsed as the `Person` class defined below. Note that in the output, the `address` field is missing because the address information is not provided in the user input.

[!code-csharp[](../../sample/AutoGen.OpenAI.Sample/Use_Json_Mode.cs?name=person_class)]

The output will be:
```bash
Name: John
Age: 25
Done
```
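For reference, the agent construction typically looks something like the sketch below. The constructor parameter names, the system message, and the `openAIClient` variable are assumptions for illustration; the linked sample shows the exact code:

```csharp
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using Azure.AI.OpenAI;

// Hypothetical sketch: 'openAIClient' is assumed to be an existing Azure.AI.OpenAI client.
var openAIClientAgent = new OpenAIChatAgent(
        openAIClient: openAIClient,
        name: "assistant",
        modelName: "gpt-3.5-turbo-0125",
        systemMessage: "You extract person information from the input and always reply in JSON.",
        responseFormat: ChatCompletionsResponseFormat.JsonObject) // enable JSON mode
    .RegisterMessageConnector()   // adds OpenAIChatRequestMessageConnector
    .RegisterPrintMessage();      // pretty-print replies to the console
```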
GitHub
autogen
autogen/dotnet/website/articles/Installation.md
autogen
### Current version:

[![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core)

AutoGen.Net provides the following packages. You can choose to install one or more of them based on your needs:

- `AutoGen`: The one-in-all package. This package has dependencies over `AutoGen.Core`, `AutoGen.OpenAI`, `AutoGen.LMStudio`, `AutoGen.SemanticKernel` and `AutoGen.SourceGenerator`.
- `AutoGen.Core`: The core package. This package provides the abstractions for message types, agents and group chat.
- `AutoGen.OpenAI`: This package provides the integration agents for OpenAI models.
- `AutoGen.Mistral`: This package provides the integration agents for Mistral.AI models.
- `AutoGen.Ollama`: This package provides the integration agents for [Ollama](https://ollama.com/).
- `AutoGen.Anthropic`: This package provides the integration agents for [Anthropic](https://www.anthropic.com/api).
- `AutoGen.LMStudio`: This package provides the integration agents for LM Studio.
- `AutoGen.SemanticKernel`: This package provides the integration agents over Semantic Kernel.
- `AutoGen.Gemini`: This package provides the integration agents for [Google Gemini](https://gemini.google.com/).
- `AutoGen.SourceGenerator`: This package carries a source generator that adds support for type-safe function definition generation.
- `AutoGen.DotnetInteractive`: This package carries dotnet-interactive support to execute dotnet code snippets.

> [!Note]
> Help me choose
> - If you just want to install one package and enjoy the core features of AutoGen, choose `AutoGen`.
> - If you want to leverage AutoGen's abstraction only and want to avoid introducing any other dependencies, like `Azure.AI.OpenAI` or `Semantic Kernel`, choose `AutoGen.Core`. You will need to implement your own agent, but you can still use AutoGen core features like group chat, built-in message types, workflow and middleware.
> - If you want to use AutoGen with OpenAI, choose `AutoGen.OpenAI`; similarly, choose `AutoGen.LMStudio` or `AutoGen.SemanticKernel` if you want to use agents from LM Studio or Semantic Kernel.
> - If you just want the type-safe source generation for function call and don't want any other features, not even AutoGen's abstractions, choose `AutoGen.SourceGenerator`.
Then, install the package using the following command:

```bash
dotnet add package AUTOGEN_PACKAGES
```

### Consume nightly build
To consume the nightly build, you can add one of the following feeds to your `NuGet.config` or global nuget config:
- ![Static Badge](https://img.shields.io/badge/public-blue?style=flat) ![Static Badge](https://img.shields.io/badge/github-grey?style=flat): https://nuget.pkg.github.com/microsoft/index.json
- ![Static Badge](https://img.shields.io/badge/public-blue?style=flat) ![Static Badge](https://img.shields.io/badge/myget-grey?style=flat): https://www.myget.org/F/agentchat/api/v3/index.json
- ![Static Badge](https://img.shields.io/badge/internal-blue?style=flat) ![Static Badge](https://img.shields.io/badge/azure_devops-grey?style=flat): https://devdiv.pkgs.visualstudio.com/DevDiv/_packaging/AutoGen/nuget/v3/index.json

To add a local `NuGet.config`, create a file named `NuGet.config` in the root of your project and add the following content:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <!-- dotnet-tools contains the Microsoft.DotNet.Interactive.VisualStudio package, which is used by AutoGen.DotnetInteractive -->
    <add key="dotnet-tools" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json" />
    <add key="AutoGen" value="$(FEED_URL)" /> <!-- replace $(FEED_URL) with the feed url -->
    <!-- other feeds -->
  </packageSources>
  <disabledPackageSources />
</configuration>
```

To add the feed to your global nuget config, run the following command in your terminal:

```bash
dotnet nuget add source FEED_URL --name AutoGen

# dotnet-tools contains the Microsoft.DotNet.Interactive.VisualStudio package, which is used by AutoGen.DotnetInteractive
dotnet nuget add source https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json --name dotnet-tools
```

Once you have added the feed, you can install the nightly-build package using the following command:

```bash
dotnet add package AUTOGEN_PACKAGES --version VERSION
```
GitHub
autogen
autogen/dotnet/website/articles/AutoGen-Mistral-Overview.md
autogen
## AutoGen.Mistral overview

AutoGen.Mistral provides the following agent(s) to connect to the [Mistral.AI](https://mistral.ai/) platform.
- @AutoGen.Mistral.MistralClientAgent: A slim wrapper agent over @AutoGen.Mistral.MistralClient.

### Get started with AutoGen.Mistral

To get started with AutoGen.Mistral, first follow the [installation guide](Installation.md) to make sure you add the AutoGen feed correctly. Then add the `AutoGen.Mistral` package to your project file.

```bash
dotnet add package AutoGen.Mistral
```

> [!NOTE]
> You need to provide an API key to use Mistral models, which will bring additional cost while using them. You can get the API key from [Mistral.AI](https://mistral.ai/).

### Example

Import the required namespace:
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=using_statement)]

Create a @AutoGen.Mistral.MistralClientAgent and start chatting!
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=create_mistral_agent)]

Use @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* to stream the chat completion.
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=streaming_chat)]
GitHub
autogen
autogen/dotnet/website/articles/Built-in-messages.md
autogen
## An overview of built-in @AutoGen.Core.IMessage types

Starting from 0.0.9, AutoGen introduces the @AutoGen.Core.IMessage and @AutoGen.Core.IMessage`1 types to provide a unified message interface for different agents. @AutoGen.Core.IMessage is a non-generic interface that represents a message. @AutoGen.Core.IMessage`1 is a generic interface that represents a message with a specific `T` where `T` can be any type.

Besides, AutoGen also provides a set of built-in message types that implement the @AutoGen.Core.IMessage and @AutoGen.Core.IMessage`1 interfaces. These built-in message types are designed to cover as many kinds of messages as possible. The built-in message types include:

> [!NOTE]
> The minimal requirement for an agent to be used as admin in @AutoGen.Core.GroupChat is to support @AutoGen.Core.TextMessage.

> [!NOTE]
> @AutoGen.Core.Message will be deprecated in 0.0.14. Please replace it with a more specific message type like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, etc.

- @AutoGen.Core.TextMessage: A message that contains a piece of text.
- @AutoGen.Core.ImageMessage: A message that contains an image.
- @AutoGen.Core.MultiModalMessage: A message that contains multiple modalities like text, image, etc.
- @AutoGen.Core.ToolCallMessage: A message that represents a function call request.
- @AutoGen.Core.ToolCallResultMessage: A message that represents a function call result.
- @AutoGen.Core.ToolCallAggregateMessage: A message that contains both @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage. This type of message is used by @AutoGen.Core.FunctionCallMiddleware to aggregate both @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage into a single message.
- @AutoGen.Core.MessageEnvelope`1: A message that represents an envelope that contains a message of any type.
- @AutoGen.Core.Message: The original message type before 0.0.9. This message type is reserved for backward compatibility. It is recommended to replace it with a more specific message type like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, etc.

### Streaming message support

AutoGen also introduces @AutoGen.Core.IStreamingMessage and @AutoGen.Core.IStreamingMessage`1, which are used in the streaming call API. The following built-in message types implement the @AutoGen.Core.IStreamingMessage and @AutoGen.Core.IStreamingMessage`1 interfaces:

> [!NOTE]
> Every @AutoGen.Core.IMessage is also an @AutoGen.Core.IStreamingMessage. That means you can return an @AutoGen.Core.IMessage from a streaming call method. It's also recommended to return the final updated result, instead of the last update, as the last message in the streaming call method to indicate the end of the stream; this saves the caller the effort of assembling the final result from multiple updates.

- @AutoGen.Core.TextMessageUpdate: A message that contains a piece of text update.
- @AutoGen.Core.ToolCallMessageUpdate: A message that contains a function call request update.

#### Usage

The code snippet below shows how to print a streaming update to the console and update the final result on the caller side.

[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/BuildInMessageCodeSnippet.cs?name=StreamingCallCodeSnippet)]

If the agent returns a final result instead of the last update as the last message in the streaming call method, the caller can use the final result directly without assembling it from multiple updates.
[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/BuildInMessageCodeSnippet.cs?name=StreamingCallWithFinalMessage)]
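To give a feel for how these types are used, here is a small, hand-written sketch that constructs a few of the built-in messages (the image URL is made up for illustration, and the exact constructor overloads may differ slightly between versions):

```csharp
using System;
using AutoGen.Core;

// Construct a plain text message from the user.
var textMessage = new TextMessage(Role.User, "What is shown in this picture?");

// Construct an image message pointing at a (made-up) image URL.
var imageMessage = new ImageMessage(Role.User, new Uri("https://example.com/cat.png"));

// Combine both into a single multi-modal message.
var multiModalMessage = new MultiModalMessage(Role.User, new IMessage[] { textMessage, imageMessage });
```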
GitHub
autogen
autogen/dotnet/website/articles/Roundrobin-chat.md
autogen
@AutoGen.Core.RoundRobinGroupChat is a group chat that invokes agents in a round-robin order. It's useful when you want to call multiple agents in a fixed sequence, for example, asking a search agent to retrieve related information followed by a summarization agent to summarize that information. Besides, it is also used by @AutoGen.Core.AgentExtension.SendAsync(AutoGen.Core.IAgent,AutoGen.Core.IAgent,System.String,System.Collections.Generic.IEnumerable{AutoGen.Core.IMessage},System.Int32,System.Threading.CancellationToken) in two-agent chat.

### Use @AutoGen.Core.RoundRobinGroupChat to implement a search-summarize chat flow

```mermaid
flowchart LR
    A[User] -->|Ask a question| B[Search Agent]
    B -->|Retrieve information| C[Summarization Agent]
    C -->|Summarize result| A[User]
```

> [!NOTE]
> The complete code can be found in [Example11_Sequential_GroupChat_Example](https://github.com/microsoft/autogen/blob/dotnet/dotnet/sample/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs).

Step 1: Add the required using statements
[!code-csharp[](../../sample/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs?name=using_statement)]

Step 2: Create a `bingSearch` agent using @AutoGen.SemanticKernel.SemanticKernelAgent
[!code-csharp[](../../sample/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs?name=CreateBingSearchAgent)]

Step 3: Create a `summarization` agent using @AutoGen.SemanticKernel.SemanticKernelAgent
[!code-csharp[](../../sample/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs?name=CreateSummarizerAgent)]

Step 4: Create a @AutoGen.Core.RoundRobinGroupChat and add the `bingSearch` and `summarization` agents to it
[!code-csharp[](../../sample/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs?name=Sequential_GroupChat_Example)]

Output:

![Searcher-Summarizer](../images/articles/SequentialGroupChat/SearcherSummarizer.gif)
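Setting the Bing-search specifics aside, the core wiring of a @AutoGen.Core.RoundRobinGroupChat looks roughly like the sketch below. The `searchAgent` and `summarizerAgent` variables are assumed to be created already, and `CallAsync` is used under the assumption that it is exposed by @AutoGen.Core.IGroupChat; see the sample above for the exact call pattern:

```csharp
using System;
using AutoGen.Core;

// Rough sketch: run two (already-created) agents in a fixed order.
var groupChat = new RoundRobinGroupChat(new IAgent[] { searchAgent, summarizerAgent });

var task = new TextMessage(Role.User, "What's the latest news about AutoGen?");
var chatHistory = await groupChat.CallAsync(new[] { task }, maxRound: 2);

foreach (var message in chatHistory)
{
    Console.WriteLine(message);
}
```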
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
`Agent` is one of the most fundamental concepts in AutoGen.Net. In AutoGen.Net, you construct a single agent to process a specific task, extend an agent using [Middlewares](./Middleware-overview.md), and construct a multi-agent workflow using [GroupChat](./Group-chat-overview.md).

> [!NOTE]
> Every agent in AutoGen.Net implements @AutoGen.Core.IAgent; agents that support streaming replies also implement @AutoGen.Core.IStreamingAgent.
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Create an agent - Create an @AutoGen.AssistantAgent: [Create an assistant agent](./Create-an-agent.md) - Create an @AutoGen.OpenAI.OpenAIChatAgent: [Create an OpenAI chat agent](./OpenAIChatAgent-simple-chat.md) - Create a @AutoGen.SemanticKernel.SemanticKernelAgent: [Create a semantic kernel agent](./AutoGen.SemanticKernel/SemanticKernelAgent-simple-chat.md) - Create a @AutoGen.LMStudio.LMStudioAgent: [Connect to LM Studio](./Consume-LLM-server-from-LM-Studio.md) - Create your own agent: [Create your own agent](./Create-your-own-agent.md)
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Chat with an agent To chat with an agent, typically you can invoke @AutoGen.Core.IAgent.GenerateReplyAsync*. On top of that, you can also use one of the extension methods like @AutoGen.Core.AgentExtension.SendAsync* as shortcuts. > [!NOTE] > AutoGen provides a list of built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage, @AutoGen.Core.ToolCallMessage, @AutoGen.Core.ToolCallResultMessage, etc. You can use these message types to chat with an agent. For further details, see [built-in messages](./Built-in-messages.md). - Send a @AutoGen.Core.TextMessage to an agent via @AutoGen.Core.IAgent.GenerateReplyAsync*: [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/AgentCodeSnippet.cs?name=ChatWithAnAgent_GenerateReplyAsync)] - Send a message to an agent via @AutoGen.Core.AgentExtension.SendAsync*: [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/AgentCodeSnippet.cs?name=ChatWithAnAgent_SendAsync)]
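As a quick, illustrative sketch (assuming an `agent` created in one of the ways listed above; the parameter names are assumptions), the two call styles look roughly like this:

```csharp
using AutoGen.Core;

// Invoke the agent directly with a list of messages.
var reply = await agent.GenerateReplyAsync(
    messages: new IMessage[] { new TextMessage(Role.User, "Tell me a joke.") });

// Or use the SendAsync extension method as a shortcut for a plain text prompt.
var reply2 = await agent.SendAsync("Tell me a joke.");
```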
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Streaming chat If an agent implements @AutoGen.Core.IStreamingAgent, you can use @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* to chat with the agent in a streaming way. You would need to process the streaming updates on your side though. - Send a @AutoGen.Core.TextMessage to an agent via @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*, and print the streaming updates to console: [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/AgentCodeSnippet.cs?name=ChatWithAnAgent_GenerateStreamingReplyAsync)]
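A minimal sketch of consuming the stream (assuming a streaming-capable `agent` already exists) might look like the following; the update types are the ones described in [built-in messages](./Built-in-messages.md):

```csharp
using System;
using AutoGen.Core;

var question = new TextMessage(Role.User, "Write a haiku about autumn.");

// Stream the reply and print text updates as they arrive.
await foreach (var update in agent.GenerateStreamingReplyAsync(new IMessage[] { question }))
{
    if (update is TextMessageUpdate textUpdate)
    {
        Console.Write(textUpdate.Content);
    }
}
```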
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Register middleware to an agent

@AutoGen.Core.IMiddleware and @AutoGen.Core.IStreamingMiddleware are used to extend the behavior of @AutoGen.Core.IAgent.GenerateReplyAsync* and @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*. You can register middleware to an agent to customize its behavior for things like function call support, converting messages of different types, printing messages, gathering user input, etc.

- Middleware overview: [Middleware overview](./Middleware-overview.md)
- Write message to console: [Print message middleware](./Print-message-middleware.md)
- Convert message type: [SemanticKernelChatMessageContentConnector](./AutoGen.SemanticKernel/SemanticKernelAgent-support-more-messages.md) and [OpenAIChatRequestMessageConnector](./OpenAIChatAgent-support-more-messages.md)
- Create your own middleware: [Create your own middleware](./Create-your-own-middleware.md)
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Group chat

You can construct a multi-agent workflow using @AutoGen.Core.IGroupChat. In AutoGen.Net, there are two types of group chat:
- @AutoGen.Core.SequentialGroupChat: Orchestrates the agents in the group chat in a fixed, sequential order.
- @AutoGen.Core.GroupChat: Provides a more dynamic yet controllable way to orchestrate the agents in the group chat.

For further details, see [Group chat overview](./Group-chat-overview.md).
GitHub
autogen
autogen/dotnet/website/articles/Group-chat.md
autogen
@AutoGen.Core.GroupChat invokes agents in a dynamic way. On one hand, it relies on its admin agent to intelligently determine the next speaker based on the conversation context; on the other hand, it also allows you to control the conversation flow by using a @AutoGen.Core.Graph. This makes it a more dynamic yet controllable way to determine the next speaker agent. You can use @AutoGen.Core.GroupChat to create a dynamic group chat with multiple agents working together to resolve a given task.

> [!NOTE]
> In @AutoGen.Core.GroupChat, when only the group admin is used to determine the next speaker agent, it's recommended to use a more powerful LLM model, such as `gpt-4`, to ensure the best experience.
GitHub
autogen
autogen/dotnet/website/articles/Group-chat.md
autogen
Use @AutoGen.Core.GroupChat to implement a code interpreter chat flow

The following example shows how to create a dynamic group chat with @AutoGen.Core.GroupChat. In this example, we will create a dynamic group chat with 4 agents: `admin`, `coder`, `reviewer` and `runner`. Each agent has its own role in the group chat:

### Code interpreter group chat

- `admin`: creates a task for the group to work on and terminates the conversation when the task is completed. In this example, the task to resolve is to calculate the 39th Fibonacci number.
- `coder`: a dotnet coder who can write code to resolve tasks.
- `reviewer`: a dotnet code reviewer who can review code written by `coder`. In this example, `reviewer` will examine whether the code written by `coder` meets the conditions below:
  - has only one csharp code block.
  - uses top-level statements.
  - is a dotnet code snippet.
  - prints the result of the code snippet to the console.
- `runner`: a dotnet code runner who can run code written by `coder` and print the result.

```mermaid
flowchart LR
    subgraph Group Chat
        B[Admin]
        C[Coder]
        D[Reviewer]
        E[Runner]
    end
```

> [!NOTE]
> The complete code of this example can be found in `Example07_Dynamic_GroupChat_Calculate_Fibonacci`.

### Create group chat

The code below shows how to create a dynamic group chat with @AutoGen.Core.GroupChat. In this example, we will create a dynamic group chat with 4 agents: `admin`, `coder`, `reviewer` and `runner`. In this case we don't pass a workflow to the group chat, so the group chat will be driven by the admin agent.

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_group_chat)]

> [!TIP]
> You can set up the initial context for the group chat using @AutoGen.Core.GroupChatExtension.SendIntroduction*. The initial context can help the group admin orchestrate the conversation flow.

Output:

![GroupChat](../images/articles/DynamicGroupChat/dynamicChat.gif)

### Below is a breakdown of how the agents are created and their roles in the group chat.

- Create the admin agent

The code below shows how to create the `admin` agent. The `admin` agent creates a task for the group to work on and terminates the conversation when the task is completed.

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_admin)]

- Create the coder agent

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_coder)]

- Create the reviewer agent

The code below shows how to create the `reviewer` agent. The `reviewer` agent is a dotnet code reviewer who can review code written by `coder`. In this example, a `function` is used to examine whether the code written by `coder` meets the conditions.

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=reviewer_function)]

> [!TIP]
> You can use @AutoGen.Core.FunctionAttribute to generate a type-safe function definition and function call wrapper for the function. For more information, please check out [Create type safe function call](./Create-type-safe-function-call.md).

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_reviewer)]

- Create the runner agent

> [!TIP]
> `AutoGen` provides built-in support for running code snippets. For more information, please check out [Execute code snippet](./Run-dotnet-code.md).

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_runner)]
GitHub
autogen
autogen/dotnet/website/articles/Two-agent-chat.md
autogen
In `AutoGen`, you can start a conversation between two agents using @AutoGen.Core.AgentExtension.InitiateChatAsync* or one of the @AutoGen.Core.AgentExtension.SendAsync* APIs. When the conversation starts, the sender agent first sends a message to the receiver agent, then the receiver agent generates a reply and sends it back to the sender agent. This process repeats until either agent sends a termination message or the maximum number of turns is reached.

> [!NOTE]
> A termination message is an @AutoGen.Core.IMessage whose content contains the keyword: @AutoGen.Core.GroupChatExtension.TERMINATE. To determine if a message is a termination message, you can use @AutoGen.Core.GroupChatExtension.IsGroupChatTerminateMessage*.
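As an illustrative sketch (assuming `teacher` and `student` agents already exist; the parameter names of `InitiateChatAsync` are assumptions, see the basic example below for the exact API), starting such a conversation looks roughly like this:

```csharp
using AutoGen.Core;

// Rough sketch: the student starts a conversation with the teacher;
// the chat runs for at most 10 turns unless a TERMINATE message is sent.
var conversation = await student.InitiateChatAsync(
    receiver: teacher,
    message: "Hey teacher, please create math questions for me.",
    maxRound: 10);
```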
GitHub
autogen
autogen/dotnet/website/articles/Two-agent-chat.md
autogen
A basic example

The following example shows how to start a conversation between a teacher agent and a student agent, where the student agent starts the conversation by asking the teacher to create math questions.

> [!TIP]
> You can use @AutoGen.Core.PrintMessageMiddlewareExtension.RegisterPrintMessage* to pretty print the messages replied by the agent.

> [!NOTE]
> The conversation is terminated when the teacher agent sends a message containing the keyword: @AutoGen.Core.GroupChatExtension.TERMINATE.

> [!NOTE]
> The teacher agent uses @AutoGen.Core.MiddlewareExtension.RegisterPostProcess* to register a post-process function which returns a hard-coded termination message when a certain condition is met. Compared with putting the @AutoGen.Core.GroupChatExtension.TERMINATE keyword in the prompt, this approach is more robust, especially when a weaker LLM model is used.

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example02_TwoAgent_MathChat.cs?name=code_snippet_1)]
GitHub
autogen
autogen/dotnet/website/articles/Middleware-overview.md
autogen
`Middleware` is a key feature in AutoGen.Net that enables you to customize the behavior of @AutoGen.Core.IAgent.GenerateReplyAsync*. It's similar to the middleware concept in ASP.Net and is widely used in AutoGen.Net for various scenarios, such as function call support, converting messages of different types, printing messages, gathering user input, etc.

Here are a few examples of how middleware is used in AutoGen.Net:
- @AutoGen.AssistantAgent is essentially an agent with @AutoGen.Core.FunctionCallMiddleware, @AutoGen.HumanInputMiddleware and default reply middleware.
- @AutoGen.OpenAI.GPTAgent is essentially an @AutoGen.OpenAI.OpenAIChatAgent with @AutoGen.Core.FunctionCallMiddleware and @AutoGen.OpenAI.OpenAIChatRequestMessageConnector.
GitHub
autogen
autogen/dotnet/website/articles/Middleware-overview.md
autogen
Use middleware in an agent To use middleware in an existing agent, you can either create a @AutoGen.Core.MiddlewareAgent on top of the original agent or register middleware functions to the original agent. ### Create @AutoGen.Core.MiddlewareAgent on top of the original agent [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=create_middleware_agent_with_original_agent)] ### Register middleware functions to the original agent [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=register_middleware_agent)]
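For orientation, a middleware function registered this way is a delegate that receives the incoming messages, the reply options, the inner agent and a cancellation token. A rough sketch (assuming an existing `agent`; the delegate parameter order is an assumption, the linked snippets show the exact usage) might look like this:

```csharp
using System;
using System.Linq;
using AutoGen.Core;

// Rough sketch: log every request before delegating to the inner agent.
var agentWithLogging = agent.RegisterMiddleware(async (messages, options, innerAgent, cancellationToken) =>
{
    Console.WriteLine($"Sending {messages.Count()} message(s) to {innerAgent.Name}");
    return await innerAgent.GenerateReplyAsync(messages, options, cancellationToken);
});
```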
GitHub
autogen
autogen/dotnet/website/articles/Middleware-overview.md
autogen
Short-circuit the next agent

The example below shows how to short-circuit the inner agent.

[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=short_circuit_middleware_agent)]

> [!Note]
> When multiple middleware functions are registered, they are invoked in reverse order of registration: the first registered middleware is the last one invoked.
GitHub
autogen
autogen/dotnet/website/articles/Middleware-overview.md
autogen
Streaming middleware You can also modify the behavior of @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* by registering streaming middleware to it. One example is @AutoGen.OpenAI.OpenAIChatRequestMessageConnector which converts `StreamingChatCompletionsUpdate` to one of `AutoGen.Core.TextMessageUpdate` or `AutoGen.Core.ToolCallMessageUpdate`. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=register_streaming_middleware)]
GitHub
autogen
autogen/dotnet/website/articles/Function-call-overview.md
autogen
## Overview of function call

In some LLM models, you can provide a list of function definitions to the model. A function definition is essentially a JSON schema object which describes the function, its parameters and its return value. These function definitions tell the model what "functions" are available to resolve the user's request. This feature greatly extends the capability of LLM models by enabling them to "execute" arbitrary functions, as long as the function can be described as a function definition.

Below is an example of a function definition for getting the weather report for a city:

> [!NOTE]
> To use function call, the underlying LLM model must support function call as well for the best experience.
> The model used in the example below is `gpt-3.5-turbo-0613`.

```json
{
    "name": "GetWeather",
    "description": "Get the weather report for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "The city name"
            }
        },
        "required": ["city"]
    }
}
```

When the model receives a message, it will intelligently decide whether to use function call based on the message received. If the model decides to use function call, it will generate a function call object which can be used to invoke the actual function. A function call is a JSON object which contains the function name and its arguments. Below is an example of a function call object for getting the weather report for Seattle:

```json
{
    "name": "GetWeather",
    "arguments": {
        "city": "Seattle"
    }
}
```

When the function call is returned to the caller, it can be used to invoke the actual function to get the weather report for Seattle.

### Create type-safe function contract and function call wrapper using AutoGen.SourceGenerator

AutoGen provides a source generator to ease the work of manually crafting a function contract and function call wrapper from a function. To use this feature, simply add the `AutoGen.SourceGenerator` package to your project and decorate your function with the `Function` attribute. For more information, please check out [Create type-safe function](Create-type-safe-function-call.md).

### Use function call in an agent

AutoGen provides first-class support for function call in its agent story. Usually there are three ways to enable function call in an agent:
- Pass function definitions when creating an agent. This only works if the agent supports passing function definitions via its constructor.
- Pass function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent.
- Register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls.

For more information, please check out [Use function call in an agent](Use-function-call.md).
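To connect the JSON definition above to the .NET side: with `AutoGen.SourceGenerator`, a comparable `GetWeather` definition can be produced from an ordinary C# method, roughly like the sketch below (the class name is made up, and it is assumed that the structured XML comments feed the generated description):

```csharp
using System.Threading.Tasks;
using AutoGen.Core;

public partial class WeatherFunctions
{
    /// <summary>
    /// Get the weather report for a city.
    /// </summary>
    /// <param name="city">The city name</param>
    [Function]
    public Task<string> GetWeather(string city)
    {
        // Placeholder implementation; a real one would call a weather service.
        return Task.FromResult($"The weather in {city} is 72 degrees and sunny.");
    }
}
```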
GitHub
autogen
autogen/dotnet/website/articles/Use-graph-in-group-chat.md
autogen
Sometimes, you may want more control over how the next agent is selected in a @AutoGen.Core.GroupChat based on the task you want to resolve. For example, in the previous [code writing example](./Group-chat.md), the original code interpreter workflow can be improved as shown in the following diagram, because it's not necessary for `admin` to directly talk to `reviewer`, nor is it necessary for `coder` to talk to `runner`.

```mermaid
flowchart TD
    A[Admin] -->|Ask coder to write code| B[Coder]
    B -->|Ask Reviewer to review code| C[Reviewer]
    C -->|Ask Runner to run code| D[Runner]
    D -->|Send result if succeed| A[Admin]
    D -->|Ask coder to fix if failed| B[Coder]
    C -->|Ask coder to fix if not approved| B[Coder]
```

By having @AutoGen.Core.GroupChat follow a specific graph flow, we can bring prior knowledge to the group chat and make the conversation more efficient and robust. This is where @AutoGen.Core.Graph comes in.

### Create a graph

The following code shows how to create a graph that represents the diagram above. The graph doesn't need to be a finite state machine where each state can only have one legitimate next state. Instead, it can be a directed graph where each state can have multiple legitimate next states. If there are multiple legitimate next states, the `admin` agent of @AutoGen.Core.GroupChat will decide which one to go to based on the conversation context.

> [!TIP]
> @AutoGen.Core.Graph supports conditional transitions. To create a conditional transition, you can pass a lambda function to `canTransitionAsync` when creating a @AutoGen.Core.Transition. The lambda function should return a boolean value indicating if the transition can be taken.

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_workflow)]

Once the graph is created, you can pass it to the group chat. The group chat will then use the graph along with the admin agent to orchestrate the conversation flow.

[!code-csharp[](../../sample/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_group_chat_with_workflow)]
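For orientation, the transitions in the diagram can be expressed roughly as in the sketch below. The `Transition.Create` factory and the `Graph` constructor taking a list of transitions are assumptions based on the API referenced above; the linked snippet shows the exact code, including the conditional transitions:

```csharp
using AutoGen.Core;

// Rough sketch: encode the allowed speaker transitions from the diagram.
// 'admin', 'coder', 'reviewer' and 'runner' are assumed to be existing agents.
var admin2Coder = Transition.Create(admin, coder);
var coder2Reviewer = Transition.Create(coder, reviewer);
var reviewer2Runner = Transition.Create(reviewer, runner);
var reviewer2Coder = Transition.Create(reviewer, coder);   // ask coder to fix if not approved
var runner2Admin = Transition.Create(runner, admin);       // send result if succeeded
var runner2Coder = Transition.Create(runner, coder);       // ask coder to fix if failed

var workflow = new Graph(new[]
{
    admin2Coder,
    coder2Reviewer,
    reviewer2Runner,
    reviewer2Coder,
    runner2Admin,
    runner2Coder,
});
```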