Large Models from Beginner to Application — LangChain: Agents - [Tools: Human Approval for Tool Validation and Tools as OpenAI Functions]

2023-09-14 19:59:57

Human Approval for Tool Validation

This section demonstrates how to add human approval to any tool, using HumanApprovalCallbackHandler. Suppose we need to use ShellTool; adding this tool to an automated pipeline carries obvious risks, so we will see how to require manual human approval of every input passed to it. Note that we generally advise against using ShellTool: it can be misused in many ways and is rarely necessary. We use it here purely for demonstration purposes.

from langchain.callbacks import HumanApprovalCallbackHandler
from langchain.tools import ShellTool
tool = ShellTool()
print(tool.run('echo Hello World!'))

Output:

Hello World!

Adding Human Approval

Add the default HumanApprovalCallbackHandler to the tool, so that a user must manually approve every input to the tool before the command is actually executed.

tool = ShellTool(callbacks=[HumanApprovalCallbackHandler()])
print(tool.run("ls /usr"))

Log output and input:

Do you approve of the following input? Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.

ls /usr
yes
X11
X11R6
bin
lib
libexec
local
sbin
share
standalone

Input:

print(tool.run("ls /private"))

Log output and input:

Do you approve of the following input? Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.

ls /private
no



---------------------------------------------------------------------------

HumanRejectedException                    Traceback (most recent call last)

Cell In[17], line 1
----> 1 print(tool.run("ls /private"))


File ~/langchain/langchain/tools/base.py:257, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
    255 # TODO: maybe also pass through run_manager is _run supports kwargs
    256 new_arg_supported = signature(self._run).parameters.get("run_manager")
--> 257 run_manager = callback_manager.on_tool_start(
    258     {"name": self.name, "description": self.description},
    259     tool_input if isinstance(tool_input, str) else str(tool_input),
    260     color=start_color,
    261     **kwargs,
    262 )
    263 try:
    264     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)


File ~/langchain/langchain/callbacks/manager.py:672, in CallbackManager.on_tool_start(self, serialized, input_str, run_id, parent_run_id, **kwargs)
    669 if run_id is None:
    670     run_id = uuid4()
--> 672 _handle_event(
    673     self.handlers,
    674     "on_tool_start",
    675     "ignore_agent",
    676     serialized,
    677     input_str,
    678     run_id=run_id,
    679     parent_run_id=self.parent_run_id,
    680     **kwargs,
    681 )
    683 return CallbackManagerForToolRun(
    684     run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
    685 )


File ~/langchain/langchain/callbacks/manager.py:157, in _handle_event(handlers, event_name, ignore_condition_name, *args, **kwargs)
    155 except Exception as e:
    156     if handler.raise_error:
--> 157         raise e
    158     logging.warning(f"Error in {event_name} callback: {e}")


File ~/langchain/langchain/callbacks/manager.py:139, in _handle_event(handlers, event_name, ignore_condition_name, *args, **kwargs)
    135 try:
    136     if ignore_condition_name is None or not getattr(
    137         handler, ignore_condition_name
    138     ):
--> 139         getattr(handler, event_name)(*args, **kwargs)
    140 except NotImplementedError as e:
    141     if event_name == "on_chat_model_start":


File ~/langchain/langchain/callbacks/human.py:48, in HumanApprovalCallbackHandler.on_tool_start(self, serialized, input_str, run_id, parent_run_id, **kwargs)
     38 def on_tool_start(
     39     self,
     40     serialized: Dict[str, Any],
   (...)
     45     **kwargs: Any,
     46 ) -> Any:
     47     if self._should_check(serialized) and not self._approve(input_str):
---> 48         raise HumanRejectedException(
     49             f"Inputs {input_str} to tool {serialized} were rejected."
     50         )


HumanRejectedException: Inputs ls /private to tool {'name': 'terminal', 'description': 'Run shell commands on this MacOS machine.'} were rejected.
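The approval gate that HumanApprovalCallbackHandler implements can be sketched without LangChain. In this dependency-free version, `run_with_approval` and `HumanRejectedError` are illustrative names of our own, not part of the LangChain API; the sketch only mirrors the prompt-and-raise behavior shown above:

```python
# Dependency-free sketch of the approval gate demonstrated above.
# run_with_approval and HumanRejectedError are illustrative names,
# not part of the LangChain API.

class HumanRejectedError(Exception):
    """Raised when the human reviewer rejects the proposed input."""

def run_with_approval(command, execute, ask=input):
    """Ask a human before executing; anything but 'y'/'yes' rejects."""
    prompt = (
        "Do you approve of the following input? Anything except "
        "'Y'/'Yes' (case-insensitive) will be treated as a no.\n\n"
        f"{command}\n"
    )
    if ask(prompt).strip().lower() not in ("y", "yes"):
        raise HumanRejectedError(f"Inputs {command} were rejected.")
    return execute(command)

# Non-interactive demo with stubbed-in answers:
print(run_with_approval("echo Hello", lambda c: f"ran: {c}", ask=lambda _: "yes"))
try:
    run_with_approval("rm -rf /tmp/x", lambda c: f"ran: {c}", ask=lambda _: "no")
except HumanRejectedError as e:
    print(e)  # Inputs rm -rf /tmp/x were rejected.
```

Because `ask` defaults to `input`, dropping it in interactive use gives the same blocking prompt as the real handler, while stubbing it (as in the demo) keeps tests non-interactive.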

Configuring Human Approval

Suppose we have an agent that takes multiple tools, and we want a human approval request to be triggered only for certain tools and certain inputs. We can configure the callback handler to do this.

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

def _should_check(serialized_obj: dict) -> bool:
    # Only require approval on ShellTool.
    return serialized_obj.get("name") == "terminal"

def _approve(_input: str) -> bool:
    if _input == "echo 'Hello World'":
        return True
    msg = (
        "Do you approve of the following input? "
        "Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no."
    )
    msg += "\n\n" + _input + "\n"
    resp = input(msg)
    return resp.lower() in ("yes", "y")

callbacks = [HumanApprovalCallbackHandler(should_check=_should_check, approve=_approve)]
llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia", "llm-math", "terminal"], llm=llm)
agent = initialize_agent(
    tools, 
    llm, 
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, 
)
agent.run("It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.", callbacks=callbacks)

Output:

'Konrad Adenauer became Chancellor of Germany in 1949, 74 years ago.'

Input:

agent.run("print 'Hello World' in the terminal", callbacks=callbacks)

Output:

'Hello World'

Input:

agent.run("list all directories in /private", callbacks=callbacks)

Log output and input:


ls /private
no



---------------------------------------------------------------------------

HumanRejectedException                    Traceback (most recent call last)

Cell In[39], line 1
----> 1 agent.run("list all directories in /private", callbacks=callbacks)


File ~/langchain/langchain/chains/base.py:236, in Chain.run(self, callbacks, *args, **kwargs)
    234     if len(args) != 1:
    235         raise ValueError("`run` supports only one positional argument.")
--> 236     return self(args[0], callbacks=callbacks)[self.output_keys[0]]
    238 if kwargs and not args:
    239     return self(kwargs, callbacks=callbacks)[self.output_keys[0]]


File ~/langchain/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)
--> 140     raise e
    141 run_manager.on_chain_end(outputs)
    142 return self.prep_outputs(inputs, outputs, return_only_outputs)


File ~/langchain/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    128 run_manager = callback_manager.on_chain_start(
    129     {"name": self.__class__.__name__},
    130     inputs,
    131 )
    132 try:
    133     outputs = (
--> 134         self._call(inputs, run_manager=run_manager)
    135         if new_arg_supported
    136         else self._call(inputs)
    137     )
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)


File ~/langchain/langchain/agents/agent.py:953, in AgentExecutor._call(self, inputs, run_manager)
    951 # We now enter the agent loop (until it returns something).
    952 while self._should_continue(iterations, time_elapsed):
--> 953     next_step_output = self._take_next_step(
    954         name_to_tool_map,
    955         color_mapping,
    956         inputs,
    957         intermediate_steps,
    958         run_manager=run_manager,
    959     )
    960     if isinstance(next_step_output, AgentFinish):
    961         return self._return(
    962             next_step_output, intermediate_steps, run_manager=run_manager
    963         )


File ~/langchain/langchain/agents/agent.py:820, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    818         tool_run_kwargs["llm_prefix"] = ""
    819     # We then call the tool on the tool input to get an observation
--> 820     observation = tool.run(
    821         agent_action.tool_input,
    822         verbose=self.verbose,
    823         color=color,
    824         callbacks=run_manager.get_child() if run_manager else None,
    825         **tool_run_kwargs,
    826     )
    827 else:
    828     tool_run_kwargs = self.agent.tool_run_logging_kwargs()


File ~/langchain/langchain/tools/base.py:257, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
    255 # TODO: maybe also pass through run_manager is _run supports kwargs
    256 new_arg_supported = signature(self._run).parameters.get("run_manager")
--> 257 run_manager = callback_manager.on_tool_start(
    258     {"name": self.name, "description": self.description},
    259     tool_input if isinstance(tool_input, str) else str(tool_input),
    260     color=start_color,
    261     **kwargs,
    262 )
    263 try:
    264     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)


File ~/langchain/langchain/callbacks/manager.py:672, in CallbackManager.on_tool_start(self, serialized, input_str, run_id, parent_run_id, **kwargs)
    669 if run_id is None:
    670     run_id = uuid4()
--> 672 _handle_event(
    673     self.handlers,
    674     "on_tool_start",
    675     "ignore_agent",
    676     serialized,
    677     input_str,
    678     run_id=run_id,
    679     parent_run_id=self.parent_run_id,
    680     **kwargs,
    681 )
    683 return CallbackManagerForToolRun(
    684     run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
    685 )


File ~/langchain/langchain/callbacks/manager.py:157, in _handle_event(handlers, event_name, ignore_condition_name, *args, **kwargs)
    155 except Exception as e:
    156     if handler.raise_error:
--> 157         raise e
    158     logging.warning(f"Error in {event_name} callback: {e}")


File ~/langchain/langchain/callbacks/manager.py:139, in _handle_event(handlers, event_name, ignore_condition_name, *args, **kwargs)
    135 try:
    136     if ignore_condition_name is None or not getattr(
    137         handler, ignore_condition_name
    138     ):
--> 139         getattr(handler, event_name)(*args, **kwargs)
    140 except NotImplementedError as e:
    141     if event_name == "on_chat_model_start":


File ~/langchain/langchain/callbacks/human.py:48, in HumanApprovalCallbackHandler.on_tool_start(self, serialized, input_str, run_id, parent_run_id, **kwargs)
     38 def on_tool_start(
     39     self,
     40     serialized: Dict[str, Any],
   (...)
     45     **kwargs: Any,
     46 ) -> Any:
     47     if self._should_check(serialized) and not self._approve(input_str):
---> 48         raise HumanRejectedException(
     49             f"Inputs {input_str} to tool {serialized} were rejected."
     50         )


HumanRejectedException: Inputs ls /private to tool {'name': 'terminal', 'description': 'Run shell commands on this MacOS machine.'} were rejected.

Tools as OpenAI Functions

This notebook covers how to use LangChain tools as OpenAI functions.

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
from langchain.tools import MoveFileTool, format_tool_to_openai_function

model = ChatOpenAI(model="gpt-3.5-turbo-0613")
tools = [MoveFileTool()]
functions = [format_tool_to_openai_function(t) for t in tools]
message = model.predict_messages([HumanMessage(content='move file foo to bar')], functions=functions)
message

Output:

AIMessage(content='', additional_kwargs={'function_call': {'name': 'move_file', 'arguments': '{\n  "source_path": "foo",\n  "destination_path": "bar"\n}'}}, example=False)

Input:

message.additional_kwargs['function_call']

Output:

{'name': 'move_file',
 'arguments': '{\n  "source_path": "foo",\n  "destination_path": "bar"\n}'}
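To actually execute the call the model proposed, the `arguments` string must be parsed as JSON and dispatched to the matching tool. A minimal sketch, where the `move_file` stub and the dispatch table are ours for illustration (in practice you would map the name to the corresponding tool's `run` method):

```python
import json

# The function_call dict, as returned in message.additional_kwargs above.
function_call = {
    "name": "move_file",
    "arguments": '{\n  "source_path": "foo",\n  "destination_path": "bar"\n}',
}

# Hypothetical stub standing in for the real tool's run method.
def move_file(source_path, destination_path):
    return f"moved {source_path} -> {destination_path}"

# Dispatch table from OpenAI function name to callable.
dispatch = {"move_file": move_file}

args = json.loads(function_call["arguments"])
result = dispatch[function_call["name"]](**args)
print(result)  # moved foo -> bar
```

Note that `arguments` arrives as a JSON-encoded string, not a dict, so the `json.loads` step is required before the keyword arguments can be unpacked.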

