Congratulations! You have completed the Aura Agents course.
Here is a summary of what you learned in each module and a set of best practices to carry forward.
Module 1: Introduction to Aura Agents
In the first module, you learned how Aura Agents work and how to get started quickly.
- What Aura Agents are — A no/low-code platform in the Aura Console where natural language questions are answered by querying your knowledge graph.
- How agents reason — Agents use multi-step reasoning to interpret user input, select the correct tool, execute the query, and generate a response.
- The three tool types — Cypher Template for predictable queries, Text2Cypher for ad-hoc questions, and Similarity Search for vector-based lookups.
- Creating with AI — The Create with AI option generates a working agent and tool set from a prompt and your instance schema.
Module 2: Design and implement
In the second module, you learned how to design and implement agents with precision.
- Cypher Template tools — Pre-written, parameterized Cypher queries where the LLM extracts parameter values from the user’s question at runtime.
- Text2Cypher — A tool that generates Cypher from natural language at runtime, suited for questions where the query structure changes between requests.
- Designing an agent — One focused agent per task, question types mapped to the correct tool, and descriptions specific enough for the LLM to select the correct tool.
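The difference between the two Cypher tools is easiest to see in a concrete template. The sketch below is a hypothetical Cypher Template tool; the Customer and Order labels, the PURCHASED relationship, and the customerID property are illustrative and assume a Northwind-style graph, not a schema taken from the course:

```cypher
// Hypothetical template: "What orders has customer X placed?"
// The query structure is fixed; at runtime the LLM only extracts
// the $customerId parameter value from the user's question.
MATCH (c:Customer {customerID: $customerId})-[:PURCHASED]->(o:Order)
RETURN o.orderID AS orderId, o.orderDate AS orderDate
ORDER BY o.orderDate DESC
```

A question such as "Which customers spent the most, grouped by country?" would need a different query shape (aggregation and grouping rather than a simple match), so it is better served by Text2Cypher than by a fixed template.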
Module 3: Publish and connect
In the third module, you learned how to make an agent available outside the Aura Console.
- Access modes — Internal limits access to Aura project members; External exposes an HTTP API for applications to call.
- MCP — The Model Context Protocol wraps your agent as a callable tool, enabling AI hosts such as Cursor and Claude Desktop to discover and invoke it without custom integration code.
- Connecting to Cursor — After enabling external access and the MCP server, you connected your agent to Cursor and tested it with real prompts.
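As a sketch of that connection step, Cursor registers MCP servers in an mcp.json file. The server name and URL below are placeholders; use the real endpoint shown in the Aura Console after you enable external access and the MCP server:

```json
{
  "mcpServers": {
    "aura-agent": {
      "url": "https://<your-agent-mcp-endpoint>"
    }
  }
}
```

Once the server is registered, Cursor discovers the agent as a tool and can invoke it from a normal chat prompt.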
Best practices
- Scope agents narrowly — One focused agent per task makes tool selection easier for the LLM and makes behavior more predictable. Include an instruction to decline off-topic requests.
- Templates first — If you can write the complete Cypher query now with only $parameter slots for variable values, use a Cypher Template. Reserve Text2Cypher for questions whose structure changes.
- One tool per question pattern — Give each distinct question type its own tool. A single template that tries to cover unrelated questions leads to vague descriptions and wrong tool picks.
- Write specific tool descriptions — Each description should answer two questions: what does this tool return, and when should the LLM use it instead of the other tools?
- Describe parameters as instructions — Write parameter descriptions with examples, such as "The customer ID, for example ALFKI or QUICK", not just "The customer ID".
- Use the reasoning panel — Always check the reasoning trace to verify which tool was selected and what Cypher was generated. Inspect Text2Cypher output before relying on it in production.
- Save changes — After adding, editing, or deleting tools, click Update agent to persist the configuration.
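Several of these practices come together in a single tool definition. The tool name, description, and parameter text below are illustrative, not taken from the course:

```
Tool: customer-orders
Description: Returns the orders placed by a single customer, most recent
  first. Use this tool when the question names one specific customer;
  do not use it for questions about products or about all customers.
Parameter $customerId: The customer ID, for example ALFKI or QUICK.
```

Note how the description states both what the tool returns and when to pick it over other tools, and how the parameter description includes concrete example values for the LLM to match against.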
Continue learning
To go deeper with graph-backed AI and MCP:
- Neo4j & GenAI Fundamentals - Understand the fundamentals of generative AI and how Neo4j integrates with LLMs
- Using Neo4j with LangChain - Integrate Neo4j with LangChain for retrieval-augmented generation and agents
- Developing with Neo4j MCP Tools - Learn how to use the Model Context Protocol to connect AI applications to Neo4j tools and data sources
- Building GraphRAG Python MCP tools - Build your own GraphRAG MCP server with graph-backed tools using the MCP Python SDK
- Building GraphRAG TypeScript MCP tools - Build your own MCP tools and server using the MCP TypeScript SDK
Summary
You completed the Aura Agents course. You learned how to create, design, and publish agents that answer natural language questions using your Neo4j knowledge graph, and how to expose them to AI applications through MCP.