AutoGPT and LangChain both use ReAct (ReAct: Synergizing Reasoning and Acting in Language Models) to give large language models the ability to plan actions (having the model call external tools or data sources is one typical kind of action).
Because both the descriptions of the tools (external data sources, APIs, or functions) and the description of the structure the model's overall output should follow are written in natural language, reliability is hard to guarantee, and there is a real chance of output that does not match expectations.
The whole process also involves multiple rounds of interaction with the language model, and the model outputs its full reasoning process in natural language along the way, so it consumes a large number of tokens. This is highly uneconomical, both in light of models' maximum token window limits and in light of the per-token pricing used by most MaaS offerings.
Flappy takes inspiration from the ReWOO paper (Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, & Dongkuan Xu. (2023). ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models.) and implements the planning stage and the execution stage of an action plan separately.
In the same scenario, Flappy has the language model generate a single execution plan, in which the operation for every step is already laid out up front. Flappy then executes the plan by calling the corresponding external interface or the language model as each step requires, removing the limitation of needing the language model's involvement at every step.
Another benefit of this design is that models of different capability can be used for planning and for execution. For example, GPT-4 can handle the task-planning stage while a 7B open-source model handles the concrete data-structuring tasks, optimizing costs in a production environment.
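As a rough illustration of this split, here is a hypothetical sketch of configuring separate planner and executor models. Note that the `llmPlaner` option name is an assumption on my part; check the Flappy docs for how (and whether) a separate planner model is configured in the current API.

```typescript
// Hypothetical sketch: a stronger model for planning, a cheaper one for execution.
// NOTE: `llmPlaner` is an assumed option name; consult the Flappy docs for the
// actual way to configure a separate planner model.
import { createFlappyAgent, ChatGPT } from '@pleisto/node-flappy'
import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  baseURL: process.env.OPENAI_API_BASE!
})

// GPT-4 generates the one-shot execution plan...
const planner = new ChatGPT(openai, 'gpt-4')
// ...while a cheaper model handles the individual execution steps
// (this could equally be a small open-source model such as Baichuan,
// which Flappy also supports).
const executor = new ChatGPT(openai, 'gpt-3.5-turbo')

const agent = createFlappyAgent({
  llm: executor,      // used for executing steps such as data structuring
  llmPlaner: planner, // assumed field: used only to generate the plan
  functions: []
})
```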
GitHub repo: https://github.com/pleisto/flappy
Documentation: https://flappy.pleisto.com/
Now let's try using Flappy to build a resume-screening Node.js application that helps HR quickly filter and organize resume information.
The key to screening resumes is distilling the information in them. Once the resume information is structured, we only need to provide suitable filter methods for the user to call.
The application therefore needs to: read the resume files, extract structured metadata from each resume, and provide a method to filter by that metadata.
Now suppose our database holds several resumes in different formats, and we want to extract the key data from each of them, including: the candidate's name, profession, years of work experience, skills, and education.
The screening goal: retrieve the resumes of all candidates with more than 7 years of work experience.
Let's work toward that goal.
Below, I will create a TypeScript Node.js project for this case study. All the commands and code used along the way are included in the article, so you can follow along the same way.
```bash
# Create the project
mkdir resume-assistant
cd resume-assistant
# Initialize the project with yarn (or npm)
yarn init
yarn add typescript ts-node --dev
# Initialize the TypeScript config
yarn tsc --init
# Add the node-flappy dependency
yarn add @pleisto/node-flappy@next
# Create the entry file
touch index.ts
```
```typescript
import { createFlappyAgent, ChatGPT } from '@pleisto/node-flappy'
import OpenAI from 'openai'

const gpt35 = new ChatGPT(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY!,
    baseURL: process.env.OPENAI_API_BASE!
  }),
  'gpt-3.5-turbo'
)

const resumeAssistant = createFlappyAgent({
  llm: gpt35,
  functions: []
})
```
When creating the instance, you need to provide an LLM module for the flappy agent to use. Flappy ships with built-in support for ChatGPT and Baichuan; here we use ChatGPT as the example. We recommend not exposing the apiKey directly in code, so it is read from environment variables here. The `functions` field is described in detail below.
Run the file; if everything is configured correctly, there will be no errors:
```bash
export OPENAI_API_KEY=xxx
export OPENAI_API_BASE=xxx
yarn ts-node index.ts
```
First, we need to prepare some resume files in txt format (you can ask ChatGPT to generate them) and put them in a data folder.
```bash
# Create the data folder and put the txt resume files in it
mkdir data
```
```
Resume A

I am an experienced software engineer with over seven years of front-end development experience. I am passionate about building exceptional user interfaces and am proficient in HTML, CSS, and JavaScript. I have a deep understanding of front-end frameworks like React, Vue, and Angular. I have been involved in multiple large-scale projects, where I was responsible for designing and implementing front-end architectures to ensure high performance and user-friendliness of websites. Additionally, I have project management experience and can lead teams to deliver high-quality results on time.

### Project Experience

#### 1. E-commerce Website Refactoring (ABC Company)
- Participated in the refactoring project of ABC Company's e-commerce website and served as the lead front-end technical lead.
- Rebuilt the website's front end using the React framework, implementing responsive design and dynamic loading to enhance user experience.
- Optimized front-end performance, reducing page load times, and improving overall website performance.
- Designed and implemented a user behavior tracking and analysis system, providing crucial data support for the marketing team.

#### 2. Social Media App Development (XYZ Startup)
- Led a four-person front-end development team in building a social media application from scratch.
- Utilized Vue.js framework and Vuex for state management, implementing real-time chat, dynamic post publishing, and user interaction features.
- Integrated third-party login and sharing functionalities, boosting user registration and engagement.
- Successfully launched the application into the market, growing the user base from zero to over fifty thousand.

#### 3. Internal Management System Upgrade (DEF Enterprise)
- Responsible for upgrading the company's internal management system from traditional server-side rendering to a modern front-end/backend separation architecture.
- Developed a new front-end interface using the Angular framework, achieving fast data loading and interaction capabilities.
- Optimized data communication with the backend using GraphQL, reducing unnecessary request cycles and enhancing system efficiency.
- Facilitated team members' transition to the new technology stack through training and documentation.

### Skills and Expertise
- Front-end Technologies: HTML, CSS, JavaScript, React, Vue, Angular, Redux, GraphQL
- Front-end Tools: Webpack, Babel, ESLint
- Project Management: Agile, Scrum, Jira

### Education
- Bachelor's Degree in Computer Science, Peking University, 2012
```
```
Resume B

I am a senior backend engineer with over eight years of software development experience. I specialize in designing and building efficient and reliable backend systems. I am proficient in various programming languages and technology stacks, including Java, Python, Node.js, and Go. I am well-versed in database design and optimization, with extensive experience in efficient querying and analysis on large datasets. I have been involved in multiple complex projects, where I was responsible for backend architecture and database design to ensure system stability and performance. Additionally, I also have experience in teamwork and project management, and can lead teams to achieve project goals.

### Project Experience

#### 1. Financial Trading Platform Development (ABC Bank)
- Served as the lead backend technical lead, responsible for designing and implementing the backend system of the financial trading platform.
- Used Java and Spring framework to build the core trading engine, achieving high-concurrency transaction processing and real-time risk management.
- Designed a high-availability database architecture, ensuring the security and reliability of transaction data.
- Implemented complex transaction reporting and data analysis modules, providing crucial support for trading strategies.

#### 2. E-commerce Platform Upgrade (XYZ Company)
- Led a five-person backend development team, responsible for upgrading the company's e-commerce platform.
- Used Python and Django framework to redesign and implement the backend services of the platform, improving system stability and scalability.
- Integrated third-party payment and logistics services, optimizing the user shopping experience.
- Introduced distributed caching and message queues, enhancing system performance and response speed.

#### 3. Human Resource Management System Development (DEF Enterprise)
- Designed and implemented a comprehensive human resource management system, providing the company with a complete HR solution.
- Used Node.js and Express to build the backend services of the system, implementing modules for employee information management, recruitment processes, and performance assessments.
- Optimized database queries and indexing, ensuring efficient operation of the system with large amounts of data.
- Integrated single sign-on and LDAP authentication, enhancing system security and user experience.

### Skills and Expertise
- Backend Development Languages: Java, Python, Node.js, Go
- Backend Frameworks: Spring, Django, Express
- Databases: MySQL, PostgreSQL, MongoDB
- Project Management: Agile, Scrum, Jira

### Education
- Master's Degree in Computer Science, [University Name], 2010
- Bachelor's Degree in Software Engineering, [University Name], 2008
```
With the data in place, we can add a function to the flappy agent that tells it how to read these resumes. This is done by creating an `InvokeFunction`.
```typescript
import {
  createFlappyAgent,
  createInvokeFunction,
  z,
  ChatGPT,
} from "@pleisto/node-flappy";
import OpenAI from "openai";
import fs from "fs";

const gpt35 = new ChatGPT(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY!,
    baseURL: process.env.OPENAI_API_BASE!,
  }),
  "gpt-3.5-turbo"
);

const getResumes = createInvokeFunction({
  name: "getResumes",
  description: "Get all resumes.",
  args: z.null(),
  returnType: z.array(z.string()),
  resolve: async () => {
    const dirPath = "./data";
    return fs
      .readdirSync(dirPath)
      .map((filename) =>
        fs.readFileSync(`${dirPath}/${filename}`, "utf-8").toString()
      );
  },
});

const resumeAssistant = createFlappyAgent({
  llm: gpt35,
  functions: [getResumes],
});
```
With `createInvokeFunction`, we have given the agent a function that reads all resumes. With it available, the LLM can understand the caller's intent on its own and invoke the function at the right moment to reach the goal.
The parameters of `createInvokeFunction` are:

- `name`: the function name; a readable name helps the LLM understand it.
- `description`: a supplementary description that helps the LLM understand what the function does.
- `args`: the type of the function's arguments.
- `returnType`: the type of the function's return value.
- `resolve`: the function body.

Flappy uses Zod to describe types; developers can import the `z` variable to use Zod.
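To make these parameters concrete, here is a minimal illustrative sketch that is not part of the resume assistant; the `add` function is invented for demonstration, but its shape follows the API exactly as used in this article.

```typescript
// A minimal, self-contained InvokeFunction example (invented for illustration).
import { createInvokeFunction, z } from "@pleisto/node-flappy";

const add = createInvokeFunction({
  name: "add",                                        // readable name for the LLM
  description: "Add two numbers and return the sum.", // tells the LLM what it does
  args: z.object({ a: z.number(), b: z.number() }),   // argument types, via Zod
  returnType: z.number(),                             // return type, via Zod
  resolve: async ({ a, b }) => a + b,                 // the actual implementation
});
```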
So, by defining an `InvokeFunction`, the developer has described a clear way to fetch the resume data, and the LLM now knows what this function does.
With the resume data available, the resumes written in natural language need to be organized into structured data that is easy to analyze. Flappy provides another primitive, `SynthesizedFunction`, specifically for this kind of need.
```typescript
import {
  createFlappyAgent,
  createInvokeFunction,
  createSynthesizedFunction,
  z,
  ChatGPT,
} from "@pleisto/node-flappy";
import OpenAI from "openai";
import fs from "fs";

const gpt35 = new ChatGPT(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY!,
    baseURL: process.env.OPENAI_API_BASE!,
  }),
  "gpt-3.5-turbo"
);

const getResumes = createInvokeFunction({
  name: "getResumes",
  description: "Get all resumes.",
  args: z.null(),
  returnType: z.array(z.string()),
  resolve: async () => {
    const dirPath = "./data";
    return fs
      .readdirSync(dirPath)
      .map((filename) =>
        fs.readFileSync(`${dirPath}/${filename}`, "utf-8").toString()
      );
  },
});

const resumeMetaType = z.object({
  name: z.string(),
  profession: z.string(),
  experienceYears: z.number(),
  skills: z.array(
    z.object({
      name: z.string(),
    })
  ),
  education: z.object({
    degree: z.string(),
    fieldOfStudy: z.string(),
    university: z.string(),
    year: z.number(),
  }),
});

const getMetaFromOneResume = createSynthesizedFunction({
  name: "getMeta",
  description: "Extract meta data from a resume full text.",
  args: z.object({
    resume: z.string().describe("Resume full text."),
  }),
  returnType: resumeMetaType,
});

const resumeAssistant = createFlappyAgent({
  llm: gpt35,
  functions: [getResumes, getMetaFromOneResume],
});
```
With `createSynthesizedFunction`, we define a function that tells the LLM which key pieces of information we need from each resume. The LLM then knows exactly what data to extract when analyzing a resume.
**Note: because of LLM token-length limits, we strongly recommend submitting only one resume to the LLM at a time.** So we need to provide one more function that iterates over the resume data and runs the analysis.
```typescript
import {
  createFlappyAgent,
  createInvokeFunction,
  createSynthesizedFunction,
  z,
  ChatGPT,
} from "@pleisto/node-flappy";
import OpenAI from "openai";
import fs from "fs";

const gpt35 = new ChatGPT(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY!,
    baseURL: process.env.OPENAI_API_BASE!,
  }),
  "gpt-3.5-turbo"
);

const getResumes = createInvokeFunction({
  name: "getResumes",
  description: "Get all resumes.",
  args: z.null(),
  returnType: z.array(z.string()),
  resolve: async () => {
    const dirPath = "./data";
    return fs
      .readdirSync(dirPath)
      .map((filename) =>
        fs.readFileSync(`${dirPath}/${filename}`, "utf-8").toString()
      );
  },
});

const resumeMetaType = z.object({
  name: z.string(),
  profession: z.string(),
  experienceYears: z.number(),
  skills: z.array(
    z.object({
      name: z.string(),
    })
  ),
  education: z.object({
    degree: z.string(),
    fieldOfStudy: z.string(),
    university: z.string(),
    year: z.number(),
  }),
});

const getMetaFromOneResume = createSynthesizedFunction({
  name: "getMeta",
  description: "Extract meta data from a resume full text.",
  args: z.object({
    resume: z.string().describe("Resume full text."),
  }),
  returnType: resumeMetaType,
});

interface ResumeMeta {
  name: string;
  profession: string;
  experienceYears: number;
  skills: Array<{ name: string }>;
  education: {
    degree: string;
    fieldOfStudy: string;
    university: string;
    year: number;
  };
}

const mapResumesToMeta = createInvokeFunction({
  name: "mapResumesToMeta",
  args: z.object({
    resumes: z.array(z.string().describe("resume full text list")),
  }),
  // Same shape as one resume's metadata, just as an array.
  returnType: z.array(resumeMetaType),
  async resolve({ resumes }) {
    const data: Array<ResumeMeta> = [];
    // Submit resumes to the LLM one at a time to stay within token limits.
    for (const resume of resumes) {
      data.push(await getMetaFromOneResume.call(resumeAssistant, { resume }));
    }
    return data;
  },
});

const resumeAssistant = createFlappyAgent({
  llm: gpt35,
  functions: [getResumes, getMetaFromOneResume, mapResumesToMeta],
});
```
By creating another `InvokeFunction` that iterates over all the resumes and manually triggers the `SynthesizedFunction`, we obtain the key information of every resume.
With the key resume information available, we can add the filter. Per the requirements, we need a method that filters by years of work experience.
```typescript
import {
  createFlappyAgent,
  createInvokeFunction,
  createSynthesizedFunction,
  z,
  ChatGPT,
} from "@pleisto/node-flappy";
import OpenAI from "openai";
import fs from "fs";

const gpt35 = new ChatGPT(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY!,
    baseURL: process.env.OPENAI_API_BASE!,
  }),
  "gpt-3.5-turbo"
);

const getResumes = createInvokeFunction({
  name: "getResumes",
  description: "Get all resumes.",
  args: z.null(),
  returnType: z.array(z.string()),
  resolve: async () => {
    const dirPath = "./data";
    return fs
      .readdirSync(dirPath)
      .map((filename) =>
        fs.readFileSync(`${dirPath}/${filename}`, "utf-8").toString()
      );
  },
});

const resumeMetaType = z.object({
  name: z.string(),
  profession: z.string(),
  experienceYears: z.number(),
  skills: z.array(
    z.object({
      name: z.string(),
    })
  ),
  education: z.object({
    degree: z.string(),
    fieldOfStudy: z.string(),
    university: z.string(),
    year: z.number(),
  }),
});

const getMetaFromOneResume = createSynthesizedFunction({
  name: "getMeta",
  description: "Extract meta data from a resume full text.",
  args: z.object({
    resume: z.string().describe("Resume full text."),
  }),
  returnType: resumeMetaType,
});

interface ResumeMeta {
  name: string;
  profession: string;
  experienceYears: number;
  skills: Array<{ name: string }>;
  education: {
    degree: string;
    fieldOfStudy: string;
    university: string;
    year: number;
  };
}

const mapResumesToMeta = createInvokeFunction({
  name: "mapResumesToMeta",
  args: z.object({
    resumes: z.array(z.string().describe("resume full text list")),
  }),
  returnType: z.array(resumeMetaType),
  async resolve({ resumes }) {
    const data: Array<ResumeMeta> = [];
    for (const resume of resumes) {
      data.push(await getMetaFromOneResume.call(resumeAssistant, { resume }));
    }
    return data;
  },
});

const filterResumeMetaOverExperienceYears = createInvokeFunction({
  name: "filterResumeMetaOverExperienceYears",
  args: z.object({
    resumes: z.array(resumeMetaType),
    years: z.number(),
  }),
  returnType: z.array(resumeMetaType),
  resolve: async ({ resumes, years }) =>
    resumes.filter((r: ResumeMeta) => r.experienceYears > years),
});

const resumeAssistant = createFlappyAgent({
  llm: gpt35,
  functions: [
    getResumes,
    getMetaFromOneResume,
    mapResumesToMeta,
    filterResumeMetaOverExperienceYears,
  ],
});
```
With all these functions in place, everything is ready. Let's see it in action.
Add the code that executes the plan:
```typescript
async function run() {
  const result = await resumeAssistant.executePlan(
    "Retrieve metadata of resumes with more than 7 years of work experience."
  );
  console.log("Result:", result);
}

void run();
```
Run it:
```bash
yarn ts-node index.ts
```
Check the logs. In the end you will get the following (if you hit an error during execution, just run it a few more times):
```
Result: [
  {
    name: 'Senior Backend Engineer',
    profession: 'Backend Engineer',
    experienceYears: 8,
    skills: [ [Object], [Object], [Object], [Object] ],
    education: {
      degree: "Master's Degree",
      fieldOfStudy: 'Computer Science',
      university: '[University Name]',
      year: 2010
    }
  }
]
```
The LLM successfully filtered out the resumes with more than 7 years of work experience and extracted the key information we need. Looking back at the DEBUG output, you can see how the agent planned:
```
[
  {
    thought: 'Retrieve all resumes',
    id: 1,
    functionName: 'getResumes',
    args: {}
  },
  {
    thought: 'Map each resume to its metadata',
    id: 2,
    functionName: 'mapResumesToMeta',
    args: { resumes: '%@_1' }
  },
  {
    thought: 'Filter resumes with more than 7 years of work experience',
    id: 3,
    functionName: 'filterResumeMetaOverExperienceYears',
    args: { resumes: '%@_2', years: 7 }
  }
]
```
Three steps: read the resumes, extract their metadata, and filter them, with each step's result feeding into the next (the `%@_1` and `%@_2` placeholders in the plan reference the outputs of steps 1 and 2). Pretty smart, isn't it?
If this example interests you, the complete code is available at https://github.com/pleisto/flappy-resume-assistant.
Feel free to try out our example and contribute to the code.
GitHub repo: https://github.com/pleisto/flappy