
LangGraph Framework: Detailed Introduction with Python Code Examples

Author: 小郭开发

I. What is LangGraph

LangGraph is an advanced framework built on LangChain, designed for building stateful, multi-turn, multi-agent applications. It lets developers define complex AI workflows as a graph, and is particularly well suited to applications that need memory, conditional branching, and loops.
Core features

- Graph structure: application logic is defined as a directed graph, with support for branching and even cycles (so it is not limited to a DAG)
- State management: built-in state handling that carries conversation history automatically
- Multi-agent support: easily build applications in which several agents collaborate
- Persistence: conversation state can be checkpointed to a database
- Streaming: real-time streamed output is supported
- Debuggability: the graph can be visualized, which makes flows easy to inspect and understand
LangGraph vs LangChain

| Feature | LangChain | LangGraph |
|---|---|---|
| Structure | linear chains | graph |
| State management | manual | automatic |
| Multi-turn dialogue | explicit memory required | built-in state |
| Complex workflows | controlled in code | expressed in the graph |
| Agent collaboration | relatively awkward | native support |
II. Installation and Configuration

Install LangGraph:

```bash
pip install langgraph
pip install langchain-openai
pip install langchain
```
Basic configuration:

```python
import os

os.environ["OPENAI_API_KEY"] = "your-api-key-here"
```
III. Core Concepts

1. State

LangGraph uses a State object to manage all of an application's data. The state can be a dictionary, a dataclass, and so on.

```python
from typing_extensions import TypedDict

class State(TypedDict):
    messages: list
    user_input: str
    context: str
```
2. Nodes

Nodes are the processing units of the graph; each node receives the state and returns an update to it.

```python
def chatbot(state: State):
    # processing logic
    return {"messages": [...]}

def router(state: State):
    # routing logic
    return "node_name"
```
3. Edges

Edges define the connections between nodes and determine how state flows through the graph.

```python
graph.add_edge("node1", "node2")
graph.add_conditional_edges("node1", router, {"path1": "node2", "path2": "node3"})
```
IV. Basic Examples

Example 1: a simple conversation graph

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

# Define the state
class State(TypedDict):
    messages: list

# Create the LLM
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Define a node
def chatbot(state: State):
    messages = state["messages"]
    response = llm.invoke(messages)
    return {"messages": messages + [response]}

# Build the graph
workflow = StateGraph(State)
workflow.add_node("chatbot", chatbot)
workflow.set_entry_point("chatbot")
workflow.add_edge("chatbot", END)  # a self-loop edge here would never terminate

# Compile the graph
app = workflow.compile()

# Run it
result = app.invoke({"messages": [{"role": "user", "content": "Who are you?"}]})
print(result["messages"][-1].content)
```
Example 2: a graph with conditional branches

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

class State(TypedDict):
    messages: list
    current_node: str

def chatbot(state: State):
    messages = state["messages"]
    response = ChatOpenAI(model="gpt-3.5-turbo").invoke(messages)
    return {"messages": messages + [response]}

def route_message(state: State):
    last_message = state["messages"][-1].content
    if "weather" in last_message:
        return "weather"
    elif "joke" in last_message:
        return "joke"
    else:
        return "chatbot"

workflow = StateGraph(State)
workflow.add_node("chatbot", chatbot)
workflow.add_node("weather", lambda s: {"messages": s["messages"] + [{"role": "assistant", "content": "Sunny today, turning cloudy later"}]})
workflow.add_node("joke", lambda s: {"messages": s["messages"] + [{"role": "assistant", "content": "Why do programmers confuse Halloween and Christmas? Because Oct 31 == Dec 25!"}]})
workflow.add_conditional_edges("chatbot", route_message, {
    "weather": "weather",
    "joke": "joke",
    "chatbot": END,
})
workflow.add_edge("weather", END)  # branch nodes need a path to END
workflow.add_edge("joke", END)
workflow.set_entry_point("chatbot")
app = workflow.compile()

# Run
result = app.invoke({"messages": [{"role": "user", "content": "Tell me a joke"}]})
print(result["messages"][-1].content)
```
V. Advanced Features

1. Cyclic graphs (a chatbot)

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

class State(TypedDict):
    messages: list

def chatbot(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response]}

# Build the graph
workflow = StateGraph(State)
workflow.add_node("chatbot", chatbot)
workflow.set_entry_point("chatbot")
workflow.add_edge("chatbot", END)  # one step per invoke; the "loop" happens across invocations

# Compile
app = workflow.compile()

# Multi-turn conversation: feed the accumulated history back in on each turn
messages = [{"role": "user", "content": "Hello"}]
result1 = app.invoke({"messages": messages})
print(f"User: {messages[-1]['content']}")
print(f"AI: {result1['messages'][-1].content}")

messages = result1["messages"] + [{"role": "user", "content": "What can you do?"}]
result2 = app.invoke({"messages": messages})
print(f"User: {messages[-1]['content']}")
print(f"AI: {result2['messages'][-1].content}")
```
2. Multi-agent systems

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

class State(TypedDict):
    messages: list
    current_agent: str

def researcher(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response], "current_agent": "researcher"}

def writer(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response], "current_agent": "writer"}

def router(state: State):
    if "write" in state["messages"][-1].content:
        return "writer"
    return "researcher"

# Build the multi-agent graph
workflow = StateGraph(State)
workflow.add_node("researcher", researcher)
workflow.add_node("writer", writer)
workflow.add_conditional_edges("researcher", router, {
    "researcher": "researcher",  # note: this branch loops; the recursion limit is the safety net
    "writer": "writer",
})
workflow.add_edge("writer", END)
workflow.set_entry_point("researcher")
app = workflow.compile()

# Run
result = app.invoke({
    "messages": [{"role": "user", "content": "Write an article about artificial intelligence"}],
    "current_agent": "researcher",
})
```
3. A chatbot with memory

```python
import operator
from typing import Annotated
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, END

class State(TypedDict):
    # the reducer appends new messages onto the checkpointed history;
    # a plain list would be overwritten on each invoke and lose context
    messages: Annotated[list, operator.add]

def chatbot(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Build the graph
workflow = StateGraph(State)
workflow.add_node("chatbot", chatbot)
workflow.set_entry_point("chatbot")
workflow.add_edge("chatbot", END)

# MemorySaver checkpoints the conversation state between invocations
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

# The thread ID identifies the conversation
config = {"configurable": {"thread_id": "user_1"}}

# First turn
result1 = app.invoke({"messages": [{"role": "user", "content": "My name is Zhang San"}]}, config)
print(f"AI: {result1['messages'][-1].content}")

# Second turn (context is retained via the checkpointer)
result2 = app.invoke({"messages": [{"role": "user", "content": "What is my name?"}]}, config)
print(f"AI: {result2['messages'][-1].content}")
```
4. A more complex workflow (research assistant)

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

class State(TypedDict):
    topic: str
    research: str
    outline: str
    draft: str

def research_node(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    prompt = f"Research the topic: {state['topic']}. Provide detailed research notes."
    research = llm.invoke(prompt)
    return {"research": research.content}

def outline_node(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    prompt = f"Based on the following research, create an article outline:\n{state['research']}"
    outline = llm.invoke(prompt)
    return {"outline": outline.content}

def write_node(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    prompt = f"Write an article from the following outline:\nOutline: {state['outline']}\nTopic: {state['topic']}"
    draft = llm.invoke(prompt)
    return {"draft": draft.content}

# Build the workflow
workflow = StateGraph(State)
workflow.add_node("research", research_node)
workflow.add_node("outline", outline_node)
workflow.add_node("write", write_node)

# Define the flow: research -> outline -> write
workflow.set_entry_point("research")
workflow.add_edge("research", "outline")
workflow.add_edge("outline", "write")
workflow.add_edge("write", END)

# Compile and run
app = workflow.compile()
result = app.invoke({"topic": "Applications of AI in healthcare"})
print("Research:", result["research"])
print("Outline:", result["outline"])
print("Article:", result["draft"])
```
VI. Visualizing the Graph

```python
from IPython.display import Image, display
from langgraph.graph import StateGraph, END

# Build a graph (State and chatbot as defined above)
workflow = StateGraph(State)
workflow.add_node("chatbot", chatbot)
workflow.set_entry_point("chatbot")
workflow.add_edge("chatbot", END)

# Compile and visualize
app = workflow.compile()
try:
    display(Image(app.get_graph().draw_mermaid_png()))
except Exception:
    print(app.get_graph().draw_ascii())
```
VII. Real-World Application Scenarios

1. A customer-service system

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

class State(TypedDict):
    messages: list
    intent: str

def detect_intent(state: State):
    # Intent detection; the first message is a plain dict, later ones are message objects
    last = state["messages"][-1]
    last_msg = last["content"] if isinstance(last, dict) else last.content
    if "return" in last_msg:
        return {"intent": "return"}
    elif "question" in last_msg:
        return {"intent": "consult"}
    return {"intent": "general"}

def general_handler(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response]}

def return_handler(state: State):
    return {"messages": state["messages"] + [{"role": "assistant", "content": "Return process: 1. confirm the order 2. state the reason 3. ship the item back"}]}

def consult_handler(state: State):
    return {"messages": state["messages"] + [{"role": "assistant", "content": "Please describe your issue and I will help you with it"}]}

workflow = StateGraph(State)
workflow.add_node("detect", detect_intent)
workflow.add_node("general", general_handler)
workflow.add_node("return", return_handler)
workflow.add_node("consult", consult_handler)
workflow.set_entry_point("detect")
workflow.add_conditional_edges("detect", lambda s: s["intent"], {
    "general": "general",
    "return": "return",
    "consult": "consult",
})
workflow.add_edge("general", END)
workflow.add_edge("return", END)
workflow.add_edge("consult", END)
app = workflow.compile()
```
2. A code-review assistant

````python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

class State(TypedDict):
    code: str
    issues: list
    suggestions: str

def analyze_code(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    prompt = f"""Analyze the following code and find potential issues:

```python
{state['code']}
```

List every issue you find."""
    response = llm.invoke(prompt)
    return {"issues": response.content.split("\n")}

def suggest_fixes(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    prompt = f"""Suggest fixes for the following code issues:
Issues:
{state['issues']}
Code:
{state['code']}"""
    response = llm.invoke(prompt)
    return {"suggestions": response.content}

workflow = StateGraph(State)
workflow.add_node("analyze", analyze_code)
workflow.add_node("suggest", suggest_fixes)
workflow.set_entry_point("analyze")
workflow.add_edge("analyze", "suggest")
workflow.add_edge("suggest", END)
app = workflow.compile()

result = app.invoke({"code": "def add(a, b): return a + b"})
print(result["issues"])
print(result["suggestions"])
````
VIII. Strengths of LangGraph

1. **Clear structure**: the graph makes complex logic easier to understand and maintain
2. **State management**: state passing and persistence are handled automatically
3. **Extensibility**: new nodes and edges are easy to add
4. **Debuggability**: the graph can be visualized, making problems easy to trace
5. **Flexibility**: conditional branches, loops, parallelism, and other complex flows are supported
6. **Multi-agent**: collaboration between multiple agents is supported natively

IX. Summary

LangGraph brings a powerful graph structure to complex AI applications. Compared with LangChain's linear chains, the graph structure is a better fit for:

- multi-turn conversational applications that need state management
- workflows in which several agents collaborate
- complex logic involving conditional branches and loops
- application state that must be persisted and restored

As AI applications keep growing in complexity, LangGraph is becoming an important tool for building production-grade AI systems. With a well-designed graph, developers can build applications that are both flexible and maintainable.
Original article: https://blog.csdn.net/weixin_43126767/article/details/158609671