AgentiPy Integrations: LangChain
AgentiPy seamlessly integrates with LangChain, empowering your AI agents to perform complex operations directly on the Solana blockchain. This integration provides out-of-the-box, pre-configured tooling that simplifies the development of sophisticated AI-driven decentralized applications (dApps).
This document outlines how to set up and use AgentiPy's LangChain integration to build a console-based Solana blockchain assistant.
Key Features
Pre-configured Tools: Access Solana blockchain functionalities as readily available LangChain tools.
Low-Code Integration: Focus on your agent's logic, not boilerplate code for blockchain interactions.
AI-Powered Blockchain Operations: Leverage Large Language Models (LLMs) to intelligently select and execute Solana functions based on natural language queries.
Console Application Ready: Easily demonstrate and test blockchain operations directly from your terminal.
Prerequisites
Before you begin, ensure you have the following:
Python 3.9+: Installed on your system.
Solana Wallet Private Key: A base58-encoded private key for a Solana wallet with some SOL (for gas fees) and potentially other tokens you wish to interact with.
Important: For development and testing, it's highly recommended to use a devnet private key and devnet SOL. Never use your mainnet private key for testing or in insecure environments.
OpenAI API Key: An API key from OpenAI to use the gpt-4o-mini model.
Installation
You'll need to install AgentiPy, LangChain's OpenAI integration, LangGraph, and python-dotenv:
pip install agentipy langchain-openai langgraph python-dotenv
Setup Environment Variables
Create a file named .env in the root directory of your project (where your Python script will be located). Add your private key and OpenAI API key to this file:
SOLANA_PRIVATE_KEY="YOUR_BASE58_ENCODED_SOLANA_PRIVATE_KEY"
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
Replace "YOUR_BASE58_ENCODED_SOLANA_PRIVATE_KEY" and "YOUR_OPENAI_API_KEY" with your actual keys.
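To fail fast when a key is missing, you can add a small guard before constructing the agent. This is a sketch; require_env is an illustrative helper, not part of AgentiPy:

```python
import os

def require_env(name: str) -> str:
    """Return the environment variable's value, or raise a clear error if unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example usage before agent setup (hypothetical variable names):
# private_key = require_env("SOLANA_PRIVATE_KEY")
# openai_key = require_env("OPENAI_API_KEY")
```

Raising early with a named variable is easier to debug than the opaque authentication errors you would otherwise see deep inside the agent or LLM client.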
Code Explanation: Solana Blockchain Assistant
The following Python script demonstrates a console-based AI assistant powered by AgentiPy's LangChain integration. It uses langgraph to create an agent that can dynamically call Solana-related tools based on user input.
import asyncio
import os
import json
from dotenv import load_dotenv
# AgentiPy and LangChain imports
from agentipy.agent import SolanaAgentKit
from agentipy.langchain.core import get_all_core_tools
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
from langgraph.graph import START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode, tools_condition
# Load environment variables
load_dotenv()
# --- Solana Agent and LLM Setup ---
# Initialize Solana agent with your private key and desired RPC endpoint.
# 'https://api.mainnet-beta.solana.com' is used here, but 'https://api.devnet.solana.com'
# is recommended for testing.
agent = SolanaAgentKit(
os.getenv("SOLANA_PRIVATE_KEY"),
"https://api.mainnet-beta.solana.com"
)
# AgentiPy provides pre-built tools compatible with LangChain.
# get_all_core_tools() retrieves functions like checking balance, creating tokens, etc.
tools = [*get_all_core_tools(solana_kit=agent)]
# Initialize the Language Model (LLM).
# The llm_with_tools is a bound version of the LLM that's aware of the available tools.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
llm_with_tools = llm.bind_tools(tools)
# Define the system message. This guides the AI's behavior and response format.
# It emphasizes blockchain operations and specific output formatting.
sys_msg = SystemMessage(
content="""You are an AI assistant specializing in blockchain-related operations. You help users with Solana blockchain tasks such as checking wallet balances, creating tokens, retrieving token prices, and other blockchain operations.
When responding, please follow this format:
📋 OPERATION:
[Brief description of what's being done]
🔍 DETAILS:
[Relevant information and data]
💡 RESULT:
[Final outcome or response]
📝 NOTE:
[Any additional information or recommendations, if applicable]
Remember:
1. All function inputs must be formatted as a JSON string, ensuring proper escaping. Examples:
- '{"name":"test","surname":"test"}'
- '{"amount":100,"recipient":"abc123"}'
- '{"token":"USDT"}'
2. Always return valid JSON objects inside strings
3. Keys must be strings, values properly formatted (numbers as integers, booleans as true/false)
4. Escape necessary characters for valid JSON
5. For non-blockchain requests, respond with:
"I'm sorry, I can only assist with blockchain-related operations."
"""
)
# --- LangGraph Setup ---
# The assistant function defines how the LLM processes messages within the graph.
# It invokes the LLM with the system message and the current conversation history.
def assistant(state: MessagesState):
"""
Invokes the LLM with the current state messages and system message.
"""
return {"messages": [llm_with_tools.invoke([sys_msg] + state["messages"])]}
# Initialize LangGraph's StateGraph to build the agent's workflow.
builder = StateGraph(MessagesState)
# Add nodes to the graph:
# 'assistant': Calls the LLM to determine the next action (respond or call a tool).
# 'tools': Executes the chosen LangChain tool.
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode(tools))
# Define the graph's flow:
# START -> 'assistant': The conversation always begins with the assistant.
builder.add_edge(START, "assistant")
# Conditional edges from 'assistant':
# 'tools_condition' from langgraph determines if the LLM wants to call a tool.
# If so, the flow goes to 'tools'; otherwise, it remains with 'assistant' to generate a final response.
builder.add_conditional_edges(
"assistant",
tools_condition, # Condition to check if LLM wants to use a tool
)
# 'tools' -> 'assistant': After a tool is executed, the result is fed back to the assistant
# for it to process the output and formulate a user-friendly response.
builder.add_edge("tools", "assistant")
# Compile the graph, making it ready to be invoked.
graph = builder.compile()
# --- Response Extraction Logic ---
# Helper function to parse the AI's potentially multi-part response into a clean answer.
def extract_answer(response_text: str) -> str:
"""
Extracts the most relevant answer from the AI's response based on the
defined format. Prioritizes '💡 RESULT:' section or JSON outputs.
"""
try:
lines = response_text.split("\n")
for line in lines:
if line.strip().startswith("{") and line.strip().endswith("}"):
data = json.loads(line.strip())
if data.get("status") == "success":
if "balance" in data:
return f"Balance: {data['balance']} {data.get('token', 'SOL')}"
return str(data)
except json.JSONDecodeError:
pass
except Exception as e:
print(f"Error parsing potential JSON: {e}")
sections = response_text.split("\n\n")
for section in sections:
if section.startswith("💡 RESULT:"):
return section.replace("💡 RESULT:", "").strip()
for line in response_text.split("\n"):
if line.strip() and not line.startswith(("📋", "🔍", "💡", "📝")):
return line.strip()
return response_text.strip()
# --- Console Application Logic ---
# The main asynchronous function for the console application.
async def main():
"""
Main function to run the console application.
Prompts the user for a question, processes it, and prints the response.
"""
print("Welcome to the Solana Blockchain Assistant!")
print("Type 'exit' to quit.")
while True:
question_input = input("\nYour question: ").strip()
if question_input.lower() == 'exit':
print("Goodbye!")
break
if not question_input:
print("Please enter a question.")
continue
try:
# Wrap the user's question in a HumanMessage.
messages = [HumanMessage(content=question_input)]
# Invoke the compiled LangGraph with the user's message.
result = await graph.ainvoke({"messages": messages})
# Process the response messages from the graph.
response_text = ""
for m in result["messages"]:
if hasattr(m, "content") and m.content:
response_text += f"{m.content}\n"
elif hasattr(m, "tool_calls") and m.tool_calls:
response_text += "Performing blockchain operations via tool call...\n"
elif hasattr(m, "tool_outputs") and m.tool_outputs:
response_text += f"Tool Output: {m.tool_outputs}\n"
# Extract and print the simplified answer.
answer = extract_answer(response_text.strip())
print(f"\nAI Assistant: {answer}")
except Exception as e:
print(f"\nAn error occurred: {e}")
print("Please ensure your environment variables are correctly set and the Solana node is accessible.")
if __name__ == "__main__":
# Essential for asyncio on Windows to prevent runtime errors.
if os.name == "nt":
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
# Run the main console application.
asyncio.run(main())
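The system message requires tool inputs to be JSON-encoded strings with proper escaping. In Python, json.dumps produces exactly that form, so you can sanity-check the expected input format in isolation (a standalone illustration, independent of the agent):

```python
import json

# Build the JSON-string inputs the system message describes.
token_query = json.dumps({"token": "USDT"})
transfer_query = json.dumps({"amount": 100, "recipient": "abc123"})

print(token_query)     # {"token": "USDT"}
print(transfer_query)  # {"amount": 100, "recipient": "abc123"}

# Round-tripping confirms the string is valid JSON with correctly typed values.
assert json.loads(transfer_query) == {"amount": 100, "recipient": "abc123"}
```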
How to Run
1. Save the code: Save the script above as solana_assistant.py (or any other .py file).
2. Ensure the .env file exists: Make sure your .env file is in the same directory as solana_assistant.py and contains your SOLANA_PRIVATE_KEY and OPENAI_API_KEY.
3. Run from terminal: Open your terminal or command prompt, navigate to the directory where you saved the file, and run:
python solana_assistant.py
The assistant will start and prompt you for input.
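You can also sanity-check the response parsing without calling the LLM at all. The sketch below is a simplified standalone copy of the '💡 RESULT:' extraction path from extract_answer, written here only for illustration:

```python
def extract_result_section(response_text: str) -> str:
    """Return the text after the '💡 RESULT:' marker if present, else the full text."""
    for section in response_text.split("\n\n"):
        if section.startswith("💡 RESULT:"):
            return section.replace("💡 RESULT:", "").strip()
    return response_text.strip()

sample = "📋 OPERATION:\nCheck balance\n\n💡 RESULT:\nBalance: 2.5 SOL"
print(extract_result_section(sample))  # Balance: 2.5 SOL
```

Testing parsing helpers against canned strings like this catches formatting regressions without spending API credits or touching the blockchain.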
Example Usage
Once the application is running, you can type questions into the console, for example: "What is my SOL balance?" or "What is the current price of USDT?"