# Configuration
## settings.yaml

GraphRAG is configured via `settings.yaml`. Key areas:
- `models`: chat and embedding model configuration (OpenAI)
- `input` / `output` / `cache` / `reporting`: storage locations (file-based by default)
- `extract_graph`, `community_reports`: core indexing workflows
- `query`: prompts and behavior for the drift, local, global, and basic search modes
Example (excerpt):

```yaml
models:
  default_chat_model:
    type: openai_chat
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o-mini-2024-07-18
  default_embedding_model:
    type: openai_embedding
    api_key: ${OPENAI_API_KEY}
    model: text-embedding-3-small
input:
  storage: { type: file, base_dir: "input" }
output:
  type: file
  base_dir: "output"
cache:
  type: file
  base_dir: "cache"
reporting:
  type: file
  base_dir: "logs"
extract_graph:
  prompt: "prompts/extract_graph.txt"
  entity_types: [organization, person, geo, event]
community_reports:
  graph_prompt: "prompts/community_report_graph.txt"
  text_prompt: "prompts/community_report_text.txt"
  max_length: 2000
  max_input_length: 8000
```
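The `${OPENAI_API_KEY}` values above are resolved from environment variables. As a rough illustration of how that kind of `${VAR}` substitution behaves (this is a standalone sketch, not GraphRAG's actual resolver):

```python
import os
import re

# Matches ${NAME} placeholders like those in the settings.yaml excerpt.
_VAR = re.compile(r"\$\{(\w+)\}")

def expand(value: str) -> str:
    """Replace ${NAME} with os.environ[NAME]; unknown names are left intact."""
    return _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)

os.environ["OPENAI_API_KEY"] = "sk-demo"  # stand-in value for the demo
print(expand("api_key: ${OPENAI_API_KEY}"))  # api_key: sk-demo
```

Placeholders whose variable is unset pass through unchanged, which makes missing keys easy to spot in the rendered config.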
## Prompts

Prompt templates live in `prompts/` and are referenced from `settings.yaml` for the extract, report, and query workflows.
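Conceptually, a workflow reads the file at the configured path and fills in its placeholders. The sketch below is hypothetical — the file contents and the `{entity_types}` / `{input_text}` placeholder names are illustrative, not GraphRAG's actual template format:

```python
from pathlib import Path

# Create a toy prompt file at the path the settings.yaml excerpt points to.
prompts = Path("prompts")
prompts.mkdir(exist_ok=True)
(prompts / "extract_graph.txt").write_text(
    "Extract entities of types: {entity_types}\nText: {input_text}\n"
)

def load_prompt(path: str, **fields: str) -> str:
    """Read a template file and substitute its {placeholder} fields."""
    return Path(path).read_text().format(**fields)

print(load_prompt(
    "prompts/extract_graph.txt",
    entity_types="organization, person, geo, event",
    input_text="...",
))
```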
## Environment Variables

Set `OPENAI_API_KEY` in `.env`. See `./env` for guidance.
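A `.env` file is just `KEY=VALUE` lines. As a minimal sketch of how such a file gets loaded into the process environment (real projects typically use the `python-dotenv` package; this hand-rolled loader is only for illustration):

```python
import os
from pathlib import Path

def load_dotenv(path: str = ".env") -> None:
    """Parse KEY=VALUE lines from a .env file into os.environ."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip()

# Demo with a stand-in key value.
Path(".env").write_text("# demo file\nOPENAI_API_KEY=sk-demo\n")
load_dotenv()
print(os.environ["OPENAI_API_KEY"])  # sk-demo
```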