Retrieval-augmented Graph Agentic Network (ReaGAN)

Another clever way to combine agentic capabilities and retrieval.

Graph learning frameworks always make a comeback. This time, the nodes are agents that can plan, act, and reason.

Great read for AI devs!

Here are my notes:

● Overview

This paper introduces ReaGAN, a graph learning framework that reconceptualizes each node in a graph as an autonomous agent capable of planning, reasoning, and acting via a frozen LLM.

● What's new?

Instead of relying on static, layer-wise message passing, ReaGAN enables node-level autonomy, where each node independently decides whether to aggregate information from local neighbors, retrieve semantically similar but distant nodes, or take no action at all.

● Benefits

This node-agent abstraction addresses two key challenges in graph learning: (1) handling varying informativeness of nodes and (2) combining local structure with global semantics.

● Core Modules

Each node operates in a multi-step loop with four core modules: Memory, Planning, Action, and Tool Use (RAG).

The node constructs a natural language prompt from its memory, queries a frozen LLM (e.g., Qwen2-14B) for the next action(s), executes them, and updates its memory accordingly.
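The loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the class names, the action strings, and the stub LLM call are all assumptions for readability.

```python
# Hedged sketch of ReaGAN's per-node agent loop (illustrative names, not the
# paper's API). Each step: build prompt from memory -> query frozen LLM for
# an action -> execute it -> update memory.

from dataclasses import dataclass, field

@dataclass
class NodeMemory:
    text: str                                   # original node text feature
    local_summaries: list = field(default_factory=list)   # from local aggregation
    global_summaries: list = field(default_factory=list)  # from retrieval (RAG)

def frozen_llm(prompt: str) -> str:
    """Stand-in for a frozen LLM (e.g., Qwen2-14B); returns the next action."""
    return "NoOp"  # placeholder decision for the sketch

def agent_step(memory: NodeMemory) -> str:
    # 1) Construct a natural-language prompt from the node's memory
    prompt = (
        f"Node text: {memory.text}\n"
        f"Local context: {memory.local_summaries}\n"
        f"Global context: {memory.global_summaries}\n"
        "Choose next action: LocalAggregation | GlobalAggregation | Predict | NoOp"
    )
    # 2) Plan: ask the frozen LLM for the next action
    action = frozen_llm(prompt)
    # 3) Act and update memory (aggregation details elided)
    if action == "LocalAggregation":
        memory.local_summaries.append("summary of structural neighbors")
    elif action == "GlobalAggregation":
        memory.global_summaries.append("summary of retrieved distant nodes")
    return action
```

With a real LLM behind `frozen_llm`, this loop runs for several steps per node until the agent emits `Predict`.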

● Memory --> Prompt

ReaGAN constructs prompts from a node’s memory by combining the original text feature, aggregated local/global summaries, and selected labeled neighbor examples, giving the LLM rich, multi-scale, personalized context for planning actions or making predictions.
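A simple sketch of this prompt assembly, with a hypothetical template (the field layout and wording are assumptions; the paper's exact prompt format may differ):

```python
# Illustrative prompt construction from a node's memory: own text feature,
# local/global summaries, and labeled neighbor examples as few-shot context.

def build_prompt(node_text, local_summary, global_summary, labeled_examples):
    """Combine multi-scale context into one prompt for the frozen LLM."""
    example_block = "\n".join(
        f"- text: {text} -> label: {label}" for text, label in labeled_examples
    )
    return (
        "You are a node agent classifying yourself.\n"
        f"Own text: {node_text}\n"
        f"Local neighbor summary: {local_summary}\n"
        f"Globally retrieved summary: {global_summary}\n"
        "Labeled neighbor examples:\n"
        f"{example_block}\n"
        "Decide the next action or predict a label."
    )
```

Because each node fills this template from its own memory, the resulting prompt is personalized rather than shared across the whole graph.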

● Action Space

The node’s action space includes Local Aggregation (structured neighbors), Global Aggregation (via retrieval), Prediction, and NoOp.

The NoOp action guards against over-aggregation and lets the agent opt out of gathering more information when it already has sufficient context.
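The Global Aggregation action can be pictured as a standard retrieval step: find nodes that are semantically close to the query node but not among its structural neighbors. The sketch below assumes precomputed embeddings and cosine similarity; the paper's actual retriever and thresholds may differ.

```python
# Hedged sketch of the global-aggregation tool (RAG): retrieve the top-k
# semantically similar nodes, excluding structural neighbors, which are
# already covered by local aggregation.

import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_global(query_emb, corpus, neighbors, k=2):
    """Return ids of the k nodes most similar to the query embedding,
    skipping the query's structural neighbors."""
    candidates = [
        (node_id, cosine(query_emb, emb))
        for node_id, emb in corpus.items()
        if node_id not in neighbors
    ]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [node_id for node_id, _ in candidates[:k]]
```

The retrieved nodes' texts are then summarized into the agent's memory as global context, complementing the local neighborhood.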

● Results

ReaGAN performs competitively on node classification tasks without any fine-tuning.

On datasets like Cora and Chameleon, it matches or outperforms traditional GNNs despite using only a frozen LLM, highlighting the strength of structured prompting and retrieval-based reasoning.

● Ablation

Both the agentic planning mechanism and global semantic retrieval are essential.

Removing either (e.g., forcing fixed action plans or disabling RAG) leads to significant accuracy drops, especially in sparse graphs like Citeseer.

● Prompt design and memory strategy matter

Using both local and global context improves performance on dense graphs, while selective global use benefits sparse ones.

Showing label names in prompts harms accuracy, likely due to LLM overfitting to label text rather than reasoning from examples.

Paper: arxiv.org/abs/2508.00429

Track trending AI papers here: nlp.elvissaravia.com

Aug 4 at 4:17 PM
