

GPT Researcher
What is GPT Researcher?
GPT Researcher is an autonomous agent designed for comprehensive online research on a variety of tasks.
The agent can produce detailed, factual and unbiased research reports, with customization options for focusing on relevant resources, outlines, and lessons. Inspired by the recent Plan-and-Solve and RAG papers, GPT Researcher addresses issues of speed, determinism and reliability, offering more stable performance and increased speed through parallelized agent work, as opposed to synchronous operations.
GPT Researcher Features
- Generate research, outlines, resources and lessons reports
- Aggregates over 20 web sources per research to form objective and factual conclusions
- Includes an easy-to-use web interface (HTML/CSS/JS)
- Scrapes web sources with JavaScript support
- Keeps track and context of visited and used web sources
- Export research reports to PDF and more...
Why GPT Researcher?
- Forming objective conclusions from manual research takes time; it can take weeks to find the right resources and information.
- Current LLMs are trained on past and outdated information, with heavy risks of hallucinations, making them almost irrelevant for research tasks.
- Solutions that enable web search (such as ChatGPT + Web Plugin) only consider limited resources and content, which in some cases results in superficial conclusions or biased answers.
- Using only a selection of resources can create bias in determining the right conclusions for research questions or tasks.
Architecture
The main idea is to run "planner" and "execution" agents, where the planner generates questions to research and the execution agents seek the most relevant information for each generated research question. Finally, the planner filters and aggregates all related information and creates a research report.
The agents leverage both gpt-3.5-turbo and gpt-4-turbo (128K context) to complete a research task. We optimize for costs by using each only when necessary. The average research task takes around 3 minutes to complete and costs ~$0.1.
More specifically:
- Create a domain-specific agent based on the research query or task.
- Generate a set of research questions that together form an objective opinion on the given task.
- For each research question, trigger a crawler agent that scrapes online resources for information relevant to the given task.
- For each scraped resource, summarize it based on the relevant information and keep track of its sources.
- Finally, filter and aggregate all summarized sources and generate a final research report.
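To make the flow above concrete, here is a minimal, hypothetical sketch of the planner/execution split in Python. The function names and placeholder bodies are illustrative only and are not the project's actual code; the point is that the execution agents run in parallel via asyncio rather than sequentially.

import asyncio

# Hypothetical sketch of the planner/executor flow described above;
# function names and bodies are illustrative, not the repo's actual code.

async def plan_research_questions(query: str) -> list[str]:
    # Planner agent: turn the user's query into research sub-questions.
    return [f"{query}: background", f"{query}: recent developments", f"{query}: open questions"]

async def research_question(question: str) -> str:
    # Execution agent: search, scrape and summarize sources for one question,
    # keeping track of which sources were used.
    return f"Summary of findings for: {question}"

async def write_report(query: str, summaries: list[str]) -> str:
    # Planner agent: filter and aggregate the summaries into a final report.
    return "\n\n".join([f"Research report: {query}", *summaries])

async def run_research(query: str) -> str:
    questions = await plan_research_questions(query)
    # Execution agents run concurrently, which is what keeps a full
    # research task to roughly a few minutes.
    summaries = await asyncio.gather(*(research_question(q) for q in questions))
    return await write_report(query, list(summaries))

if __name__ == "__main__":
    print(asyncio.run(run_research("impact of autonomous agents on online research")))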
Demo
https://github.com/assafelovic/gpt-researcher/assets/13554167/a00c89a6-a295-4dd0-b58d-098a31c40fda
Tutorials
Documentation
Please see here for full documentation on:
- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, Docker support)
- Reference (full API docs)
- Tavily API integration (high-level explanation of core concepts)
Quickstart
- Step 0 - Install Python 3.11 or later. See here for a step-by-step guide.
- Step 1 - Download the project
git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher
- Step 2 - Install dependencies
pip install -r requirements.txt
- Step 3 - Create a .env file with your OpenAI API key and Tavily API key, or simply export them
export OPENAI_API_KEY={Your OpenAI API Key here}
export TAVILY_API_KEY={Your Tavily API Key here}
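If you prefer the .env route, the file holds the same two variables; a minimal example with placeholder values (not real keys):
OPENAI_API_KEY=your-openai-api-key
TAVILY_API_KEY=your-tavily-api-key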
- For the LLM, we recommend OpenAI GPT, but you can use any other LLM (including open-source models) supported by the LangChain adapter; simply change the LLM model and provider in config/config.py (an illustrative sketch follows this list). Follow this guide to learn how to integrate LLMs with LangChain.
- For the search engine, we recommend the Tavily Search API (optimized for LLMs), but you can also use other search engines of your choice by changing the search provider in config/config.py to "duckduckgo", "googleAPI", "googleSerp", or "searx". Then add the corresponding env API key as seen in the config.py file.
- We highly recommend using OpenAI GPT models and the Tavily Search API for optimal performance.
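For orientation only, here is a rough sketch of what switching the search provider and LLM models in config/config.py might look like. The attribute names (retriever, fast_llm_model, smart_llm_model) are assumptions made for this example; check the actual config.py shipped with the repo before editing.

# config/config.py -- illustrative sketch only; the real attribute names
# may differ, verify against the file in the repository.
import os

class Config:
    def __init__(self):
        # Search provider: "tavily" (default), "duckduckgo", "googleAPI",
        # "googleSerp", or "searx". Remember to export the matching API key.
        self.retriever = os.getenv("RETRIEVER", "tavily")
        # Models used by the agents; any LangChain-supported LLM can be
        # substituted here (see the LLM integration guide above).
        self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo")
        self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4-turbo")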
- Step 4 - Run the agent with FastAPI
uvicorn main:app --reload
- Step 5 - Go to http://localhost:8000 in any browser and enjoy researching!
To learn how to get started with Docker, or to learn more about the features and services, check out the documentation page.