AgentVerse

03 May, 2024

What is AgentVerse?

AgentVerse is designed to facilitate the deployment of multiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: task-solving and simulation.

  • Task-solving: This framework assembles multiple agents into an automatic multi-agent system (AgentVerse-Tasksolving, multi-agent as system) that collaboratively accomplishes the given tasks.

  • Applications: software development system, consulting system, etc.


  • Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents. ⚠️⚠️⚠️ We’re refactoring the code. If you require a stable version that exclusively supports the simulation framework, you can use the release-0.1 branch.

  • Applications: game, social behavior research of LLM-based agents, etc.



Install AgentVerse

Manually Install (Recommended!)

Make sure you have Python >= 3.9.

Terminal window
git clone https://github.com/OpenBMB/AgentVerse.git --depth 1
cd AgentVerse
pip install -e .

If you want to use AgentVerse with local models such as LLaMA, you need to install some additional dependencies:

Terminal window
pip install -r requirements_local.txt

Install with pip

Alternatively, you can install AgentVerse through pip:

Terminal window
pip install -U agentverse
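
To confirm that either install method worked, you can check that the package is importable (a minimal sketch; it assumes the installed distribution is named agentverse, matching the pip package above):

# check_install.py -- verify that AgentVerse is importable after installation
import importlib.metadata

import agentverse  # raises ImportError if the installation failed

# Assumes the distribution name is "agentverse", as used by `pip install -U agentverse`
print("agentverse version:", importlib.metadata.version("agentverse"))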

Environment Variables

You need to export your OpenAI API key as follows:

Terminal window
# Export your OpenAI API key
export OPENAI_API_KEY="your_api_key_here"

If you want to use Azure OpenAI services, please export your Azure OpenAI key and OpenAI API base as follows:

Terminal window
export AZURE_OPENAI_API_KEY="your_api_key_here"
export AZURE_OPENAI_API_BASE="your_api_base_here"
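
Before running any of the commands below, it can help to verify that the keys are actually visible to the current process (a minimal sketch; AgentVerse reads these variables itself, and this check is only a convenience, not part of the library):

# check_keys.py -- sanity-check API credentials before launching AgentVerse
import os

# Variable names come from the export commands above.
has_openai = bool(os.environ.get("OPENAI_API_KEY"))
has_azure = bool(os.environ.get("AZURE_OPENAI_API_KEY")) and bool(
    os.environ.get("AZURE_OPENAI_API_BASE")
)

if not (has_openai or has_azure):
    raise RuntimeError(
        "Export OPENAI_API_KEY, or AZURE_OPENAI_API_KEY plus AZURE_OPENAI_API_BASE, "
        "before running AgentVerse."
    )
print("API credentials found.")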

Simulation

Framework Required Modules

Terminal window
- agentverse
  - agents
    - simulation_agent
  - environments
    - simulation_env

CLI Example

You can create the multi-agent environments provided by us. Take the classroom scenario as an example: in this scenario, there are nine agents, one playing the role of the professor and the other eight playing students.

Terminal window
agentverse-simulation --task simulation/nlp_classroom_9players
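
If you prefer to drive the same scenario from Python rather than the CLI, the call roughly corresponds to the sketch below. The Simulation class and its from_task constructor are assumptions about what the agentverse-simulation command wraps (inferred from the module layout above), so verify the names against the repository before relying on them:

# run_classroom.py -- rough programmatic equivalent of the CLI call above (sketch)
# Assumes agentverse.simulation exposes a Simulation class with a from_task
# constructor and a run() method; check the package source if the import fails.
from agentverse.simulation import Simulation

# The task name matches the --task argument; "agentverse/tasks" is assumed to
# be the default directory in which task configs are looked up.
simulation = Simulation.from_task(
    "simulation/nlp_classroom_9players", "agentverse/tasks"
)
simulation.run()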

GUI Example

We also provide a local website demo for this environment. You can launch it with

Terminal window
agentverse-simulation-gui --task simulation/nlp_classroom_9players

After successfully launching the local server, you can visit http://127.0.0.1:7860/ to view the classroom environment. If you want to run the simulation cases with tools (e.g., simulation/nlp_classroom_3players_withtool), you need to install BMTools as follows:

Terminal window
git clone https://github.com/OpenBMB/BMTools.git
cd BMTools
pip install -r requirements.txt
python setup.py develop

This is optional. If you do not install BMTools, the simulation cases without tools can still run normally.

Task-Solving

Framework Required Modules

Terminal window
- agentverse
  - agents
    - tasksolving_agent
  - environments
    - tasksolving_env

CLI Example

To run the experiments with the task-solving environments proposed in our paper, you can use the following commands. To run AgentVerse on a benchmark dataset, try:

Terminal window
# Run the Humaneval benchmark using gpt-3.5-turbo (config file `agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml`)
agentverse-benchmark --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite

To run AgentVerse on a specific problem, you can try:

Terminal window
# Run a single query (config file `agentverse/tasks/tasksolving/brainstorming/gpt-3.5/config.yaml`). The task is specified in the config file.
agentverse-tasksolving --task tasksolving/brainstorming
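
Every task is driven by its config.yaml, so before launching a run you can peek at (or adapt) the configuration the command will use (a minimal sketch using PyYAML; the path is the one from the comment above, and the internal layout of the config varies per task):

# inspect_task_config.py -- peek at a task's config before running it (sketch)
import yaml  # PyYAML; install with `pip install pyyaml` if it is missing

# Path taken from the comment above; swap in the task you want to inspect.
config_path = "agentverse/tasks/tasksolving/brainstorming/gpt-3.5/config.yaml"

with open(config_path) as f:
    config = yaml.safe_load(f)

# Show the top-level structure; the llm_type / model fields described in the
# "Local Model Support" section below live inside this mapping.
print("top-level keys:", list(config))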

To run the tool-using cases presented in our paper, i.e., multiple agents using tools such as a web browser, Jupyter notebook, Bing search, etc., you first need to build the ToolServer provided by XAgent. You can follow their instructions to build and run the ToolServer. After building and launching the ToolServer, you can use the following command to run the task-solving cases with tools:

Terminal window
agentverse-tasksolving --task tasksolving/tool_using/24point

We have provided more tasks in agentverse/tasks/tasksolving/tool_using/ that show how multiple agents can use tools to solve problems. You can also take a look at agentverse/tasks/tasksolving for more experiments we have done in our paper.

Local Model Support

1. Install the Additional Dependencies

If you want to use local models such as LLaMA, you need to install some additional dependencies:

Terminal window
pip install -r requirements_local.txt

2. Launch the Local Server

Then modify MODEL_PATH and MODEL_NAME in the script according to your needs, and launch the local server with the following command:

Terminal window
bash scripts/run_local_model_server.sh

The script will launch a service for the Llama 2 7B chat model. The MODEL_NAME in AgentVerse currently supports several models, including llama-2-7b-chat-hf, llama-2-13b-chat-hf, llama-2-70b-chat-hf, vicuna-7b-v1.5, and vicuna-13b-v1.5. If you wish to integrate additional models that are compatible with FastChat, you need to:

  1. Add the new MODEL_NAME to LOCAL_LLMS within agentverse/llms/__init__.py.
  2. Add the mapping from the new MODEL_NAME to its corresponding Hugging Face identifier to LOCAL_LLMS_MAPPING, also within agentverse/llms/__init__.py (see the sketch below).
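
Concretely, the two edits might look roughly like this (an illustrative sketch only: the actual shape of LOCAL_LLMS and LOCAL_LLMS_MAPPING in agentverse/llms/__init__.py may differ, and my-new-model-chat-hf is a hypothetical MODEL_NAME):

# agentverse/llms/__init__.py (excerpt, illustrative sketch only)

# 1. Register the new MODEL_NAME. The container type actually used by the
#    repository (list, set, ...) may differ from this sketch.
LOCAL_LLMS = [
    "llama-2-7b-chat-hf",
    "vicuna-7b-v1.5",
    "my-new-model-chat-hf",  # hypothetical new MODEL_NAME
]

# 2. Map the MODEL_NAME to the Hugging Face identifier of the weights to load.
#    Both the key and the value below are hypothetical placeholders.
LOCAL_LLMS_MAPPING = {
    "my-new-model-chat-hf": "my-org/my-new-model-chat-hf",
}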

3. Modify the Config File

In your config file, set llm_type to local and model to the MODEL_NAME. For example:

Terminal window
llm:
  llm_type: local
  model: llama-2-7b-chat-hf
  ...

You can refer to agentverse/tasks/tasksolving/commongen/llama-2-7b-chat-hf/config.yaml for a more detailed example.