4.7.1-alpha2 (#1153)

Co-authored-by: UUUUnotfound <31206589+UUUUnotfound@users.noreply.github.com>
Co-authored-by: Hexiao Zhang <731931282qq@gmail.com>
Co-authored-by: heheer <71265218+newfish-cmyk@users.noreply.github.com>
Archer 2024-04-08 21:17:33 +08:00, committed by GitHub
parent 3b0b2d68cc
commit 1fbc407ecf
84 changed files with 1773 additions and 715 deletions

View File

@ -1,4 +1,4 @@
name: Build docs images and copy image to docker hub
name: Deploy image by kubeconfig
on:
workflow_dispatch:
push:
@ -68,7 +68,7 @@ jobs:
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
outputs:
tags: ${{ steps.datetime.outputs.datetime }}
tags: ${{ steps.datetime.outputs.datetime }}
update-docs-image:
needs: build-fastgpt-docs-images
runs-on: ubuntu-20.04
@ -85,4 +85,4 @@ jobs:
env:
KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}
with:
args: annotate deployment/fastgpt-docs originImageName="registry.cn-hangzhou.aliyuncs.com/${{ secrets.ALI_HUB_USERNAME }}/fastgpt-docs:${{ needs.build-fastgpt-docs-images.outputs.tags }}" --overwrite
args: annotate deployment/fastgpt-docs originImageName="registry.cn-hangzhou.aliyuncs.com/${{ secrets.ALI_HUB_USERNAME }}/fastgpt-docs:${{ needs.build-fastgpt-docs-images.outputs.tags }}" --overwrite

View File

@ -1,4 +1,4 @@
name: deploy-docs
name: Deploy image to vercel
on:
workflow_dispatch:
@ -47,7 +47,7 @@ jobs:
- name: Add cdn for images
run: |
sed -i "s#\](/imgs/#\](https://cdn.jsdelivr.us/gh/yangchuansheng/fastgpt-imgs@main/imgs/#g" $(grep -rl "\](/imgs/" docSite/content/docs)
sed -i "s#\](/imgs/#\](https://cdn.jsdelivr.net/gh/yangchuansheng/fastgpt-imgs@main/imgs/#g" $(grep -rl "\](/imgs/" docSite/content/docs)
# Step 3 - Install Hugo (specific version)
- name: Install Hugo

View File

@ -1,4 +1,4 @@
name: preview-docs
name: Preview FastGPT docs
on:
pull_request_target:
@ -47,7 +47,7 @@ jobs:
- name: Add cdn for images
run: |
sed -i "s#\](/imgs/#\](https://cdn.jsdelivr.us/gh/yangchuansheng/fastgpt-imgs@main/imgs/#g" $(grep -rl "\](/imgs/" docSite/content/docs)
sed -i "s#\](/imgs/#\](https://cdn.jsdelivr.net/gh/yangchuansheng/fastgpt-imgs@main/imgs/#g" $(grep -rl "\](/imgs/" docSite/content/docs)
# Step 3 - Install Hugo (specific version)
- name: Install Hugo

View File

@ -1,4 +1,4 @@
name: Release
name: Release helm chart
on:
push:

View File

@ -103,7 +103,7 @@ The fastgpt.run domain will be deprecated.
> The [Sealos](https://sealos.io) servers are hosted overseas, so there are no extra network issues to handle: no server, no proxy, and no domain name are required, and it supports high concurrency & dynamic scaling. Click the button below to deploy with one click 👇
[![](https://cdn.jsdelivr.us/gh/labring-actions/templates@main/Deploy-on-Sealos.svg)](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Dfastgpt)
[![](https://cdn.jsdelivr.net/gh/labring-actions/templates@main/Deploy-on-Sealos.svg)](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Dfastgpt)
Since a database needs to be deployed, you need to wait 2~4 minutes after deployment before it can be accessed. The lowest configuration is used by default, so the first visit will be a bit slow. See the tutorial: [Deploy FastGPT with Sealos](https://doc.fastgpt.in/docs/development/sealos/)

View File

@ -106,7 +106,7 @@ Project tech stack: NextJs + TS + ChakraUI + Mongo + Postgres (Vector plugin)
- **⚡ Deployment**
[![](https://cdn.jsdelivr.us/gh/labring-actions/templates@main/Deploy-on-Sealos.svg)](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Dfastgpt)
[![](https://cdn.jsdelivr.net/gh/labring-actions/templates@main/Deploy-on-Sealos.svg)](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Dfastgpt)
Give it a 2-4 minute wait after deployment as it sets up the database. Initially, it might be a tad slow since we're using the basic settings.

View File

@ -94,7 +94,7 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
- **⚡ Deployment**
[![](https://cdn.jsdelivr.us/gh/labring-actions/templates@main/Deploy-on-Sealos.svg)](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Dfastgpt)
[![](https://cdn.jsdelivr.net/gh/labring-actions/templates@main/Deploy-on-Sealos.svg)](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Dfastgpt)
After deployment, the database is set up first, so please wait 2~4 minutes. The basic configuration is used, so the first access may be a little slow.

Binary files not shown: 5 image files added (46 KiB, 20 KiB, 58 KiB, 24 KiB, 91 KiB).

View File

@ -156,7 +156,7 @@ All llm models merged
Use version 4.6.6-alpha or later. The `reRankModels` field in the config file specifies the rerank models; although it is an array, currently only the first entry takes effect.
1. [Deploy the ReRank model](/docs/development/custom-models/reranker/)
1. [Deploy the ReRank model](/docs/development/custom-models/bge-rerank/)
1. Find `reRankModels` in the FastGPT config file (it was `ReRankModels` before 4.6.6).
2. Modify the corresponding value (remember to remove the comments):

View File

@ -0,0 +1,121 @@
---
title: 'Integrate the bge-rerank rerank model'
description: 'Integrate the bge-rerank rerank model'
icon: 'sort'
draft: false
toc: true
weight: 910
---
## Recommended configuration per model
The recommended configurations are:
{{< table "table-hover table-striped-columns" >}}
| Model | RAM | VRAM | Disk space | Start command |
|------|---------|---------|----------|--------------------------|
| bge-rerank-base | >=4GB | >=4GB | >=8GB | python app.py |
| bge-rerank-large | >=8GB | >=8GB | >=8GB | python app.py |
| bge-rerank-v2-m3 | >=8GB | >=8GB | >=8GB | python app.py |
{{< /table >}}
## Deploy from source
### 1. Set up the environment
- Python 3.9 or 3.10
- CUDA 11.7
- A network environment that can reach GitHub and Hugging Face
### 2. Download the code
The code for the 3 models is located at:
1. [https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-base](https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-base)
2. [https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-large](https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-large)
3. [https://github.com/labring/FastGPT/tree/main/python/reranker/bge-rerank-v2-m3](https://github.com/labring/FastGPT/tree/main/python/reranker/bge-rerank-v2-m3)
### 3. Install dependencies
```sh
pip install -r requirements.txt
```
### 4. Download the models
The Hugging Face repositories for the 3 models are:
1. [https://huggingface.co/BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base)
2. [https://huggingface.co/BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large)
3. [https://huggingface.co/BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3)
Clone the model into the corresponding code directory. Directory structure:
```
bge-reranker-base/
  app.py
  Dockerfile
  requirements.txt
```
### 5. Run the code
```bash
python app.py
```
After a successful start, an address like the following should be displayed:
![](/imgs/rerank1.png)
> The `http://0.0.0.0:6006` shown here is the connection address.
## Docker deployment
**Image names:**
1. registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-base:v0.1 (4 GB+)
2. registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-large:v0.1 (5 GB+)
3. registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-v2-m3:v0.1 (5 GB+)
**Port**
6006
**Environment variables**
```
ACCESS_TOKEN=<access credential; requests must send the header "Authorization: Bearer ${ACCESS_TOKEN}">
```
**Example run command**
```sh
# the auth token is mytoken
docker run -d --name reranker -p 6006:6006 -e ACCESS_TOKEN=mytoken --gpus all registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-base:v0.1
```
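Once the container is running, a quick way to confirm that the service and the `ACCESS_TOKEN` check are working is to send a test request. This is only a sketch: the `/v1/rerank` path and the request fields assume the cohere-compatible format that FastGPT 4.7 expects, so adjust them if your app.py exposes a different route.
```sh
# Hypothetical smoke test against the container started above.
# Assumes the service listens on port 6006 and exposes a cohere-compatible /v1/rerank endpoint.
curl http://localhost:6006/v1/rerank \
  -H "Authorization: Bearer mytoken" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "bge-reranker-base",
    "query": "what is FastGPT",
    "documents": ["FastGPT is a knowledge-base QA system", "The weather is nice today"]
  }'
```
A response containing a relevance score for each document means the deployment is healthy; a 401/403 usually means the `Authorization` header does not match `ACCESS_TOKEN`.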
**docker-compose.yml example**
```
version: "3"
services:
  reranker:
    image: registry.cn-hangzhou.aliyuncs.com/fastgpt/rerank:v0.2
    container_name: reranker
    # GPU runtime; if the host has no GPU driver installed, simply remove the deploy section
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - 6006:6006
    environment:
      - ACCESS_TOKEN=mytoken
```
## Integrate with FastGPT
See [ReRank model integration](/docs/development/configuration/#rerank-接入); the host variable is the domain where the model is deployed.

View File

@ -1,90 +0,0 @@
---
title: 'Integrate the ReRank rerank model'
description: 'Integrate the ReRank rerank model'
icon: 'sort'
draft: false
toc: true
weight: 910
---
## Recommended configuration
The recommended configuration is:
{{< table "table-hover table-striped-columns" >}}
| Type | RAM | VRAM | Disk space | Start command |
|------|---------|---------|----------|--------------------------|
| base | >=4GB | >=3GB | >=8GB | python app.py |
{{< /table >}}
## Deployment
### Environment requirements
- Python 3.10.11
- CUDA 11.7
- A network environment that can reach GitHub and Hugging Face
### Deploy from source
1. Set up the environment according to the requirements above (ask GPT for a detailed walkthrough if needed)
2. Download the [python files](https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-base)
3. Run `pip install -r requirements.txt`
4. Download the model repository from [https://huggingface.co/BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) into the same directory as app.py
5. Set the environment variable `export ACCESS_TOKEN=XXXXXX` to configure the token. The token only adds a layer of validation to prevent the API from being abused; the default value is `ACCESS_TOKEN`
6. Run `python app.py`
Then wait for the model to download until it has finished loading. If an error occurs, ask GPT first.
After a successful start, an address like the following should be displayed:
![](/imgs/chatglm2.png)
> The `http://0.0.0.0:6006` shown here is the connection address.
### Docker deployment
+ Image name: `registry.cn-hangzhou.aliyuncs.com/fastgpt/rerank:v0.2`
+ Port: 6006
+ Size: about 8GB
**Set the access credential (i.e. the channel key in oneapi)**
```
ACCESS_TOKEN=mytoken
```
**Example run commands**
- Without a GPU (run on CPU):
```sh
docker run -d --name reranker -p 6006:6006 -e ACCESS_TOKEN=mytoken registry.cn-hangzhou.aliyuncs.com/fastgpt/rerank:v0.2
```
- With a CUDA 11.7 environment:
```sh
docker run -d --gpus all --name reranker -p 6006:6006 -e ACCESS_TOKEN=mytoken registry.cn-hangzhou.aliyuncs.com/fastgpt/rerank:v0.2
```
**docker-compose.yml example**
```
version: "3"
services:
  reranker:
    image: registry.cn-hangzhou.aliyuncs.com/fastgpt/rerank:v0.2
    container_name: reranker
    # GPU runtime; if the host has no GPU driver installed, simply remove the deploy section
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - 6006:6006
    environment:
      - ACCESS_TOKEN=mytoken
```
## Integrate with FastGPT
See [ReRank model integration](/docs/development/configuration/#rerank-接入); the host variable is the domain where the model is deployed.

View File

@ -32,7 +32,7 @@ FastGPT uses the one-api project to manage the model pool; it is compatible with OpenAI, A
You can use [Deploy OneAPI quickly on Sealos](/docs/development/one-api); for more deployment methods see the project's [README](https://github.com/songquanpeng/one-api), or deploy with one click via the button below:
<a href="https://template.cloud.sealos.io/deploy?templateName=one-api" rel="external" target="_blank"><img src="https://cdn.jsdelivr.us/gh/labring-actions/templates@main/Deploy-on-Sealos.svg" alt="Deploy on Sealos"/></a>
<a href="https://template.cloud.sealos.io/deploy?templateName=one-api" rel="external" target="_blank"><img src="https://cdn.jsdelivr.net/gh/labring-actions/templates@main/Deploy-on-Sealos.svg" alt="Deploy on Sealos"/></a>
## 一、安装 Docker 和 docker-compose

View File

@ -29,7 +29,7 @@ The MySQL version supports multiple instances and high concurrency.
Simply click the button below to deploy with one click 👇
<a href="https://template.cloud.sealos.io/deploy?templateName=one-api" rel="external" target="_blank"><img src="https://cdn.jsdelivr.us/gh/labring-actions/templates@main/Deploy-on-Sealos.svg" alt="Deploy on Sealos"/></a>
<a href="https://template.cloud.sealos.io/deploy?templateName=one-api" rel="external" target="_blank"><img src="https://cdn.jsdelivr.net/gh/labring-actions/templates@main/Deploy-on-Sealos.svg" alt="Deploy on Sealos"/></a>
After deployment you will be redirected to "App Management"; the database lives in a separate application, "Database". You need to wait 1~3 minutes for the database to start before access succeeds.

View File

@ -21,7 +21,7 @@ FastGPT uses the one-api project to manage the model pool; it is compatible with OpenAI, A
## One-click deployment
The Sealos servers are hosted overseas, so there are no extra network issues to handle: no server, no proxy, and no domain name are required, and it supports high concurrency & dynamic scaling. Click the button below to deploy with one click 👇
<a href="https://template.cloud.sealos.io/deploy?templateName=fastgpt" rel="external" target="_blank"><img src="https://cdn.jsdelivr.us/gh/labring-actions/templates@main/Deploy-on-Sealos.svg" alt="Deploy on Sealos"/></a>
<a href="https://template.cloud.sealos.io/deploy?templateName=fastgpt" rel="external" target="_blank"><img src="https://cdn.jsdelivr.net/gh/labring-actions/templates@main/Deploy-on-Sealos.svg" alt="Deploy on Sealos"/></a>
Since a database needs to be deployed, you need to wait 2~4 minutes after deployment before it can be accessed. The lowest configuration is used by default, so the first visit will be a bit slow.

View File

@ -1,5 +1,5 @@
---
title: 'V4.7'
title: 'V4.7 (initialization required)'
description: 'FastGPT V4.7 release notes'
icon: 'upgrade'
draft: false
@ -26,7 +26,7 @@ curl --location --request POST 'https://{{host}}/api/admin/initv47' \
## 3. Upgrade the ReRank model
In 4.7 the ReRank model format was changed to be compatible with the cohere format, so the API provided by cohere can be used directly. For a locally deployed ReRank model, change the image to: `registry.cn-hangzhou.aliyuncs.com/fastgpt/rerank:v0.2`.
In 4.7 the ReRank model format was changed to be compatible with the cohere format, so the API provided by cohere can be used directly. For a locally deployed ReRank model, change the image to: `registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-base:v0.1`.
cohere's rerank model is not very good for Chinese and feels less effective than bge. The integration tutorial is as follows:

View File

@ -21,11 +21,13 @@ curl --location --request POST 'https://{{host}}/api/admin/clearInvalidData' \
## V4.7.1 release notes
1. New - Pptx and xlsx file reading. All file parsing now happens on the server, which consumes more server resources and means less content can be previewed at upload time.
2. New - Laf cloud function integration: cloud functions in a Laf account can be used as HTTP modules.
3. New - Scheduled cleanup of stale data. The cleanup covers only a small window (the most recent n hours), so keep the service running continuously; if it has been down for a long time, you can still call the clearInvalidData API for a full cleanup.
4. Commercial edition - Admin-configurable system notifications.
5. Changed - The csv import template no longer validates the header; the first two columns are read automatically.
6. Fixed - Data type validation error on tool-call module connections.
7. Fixed - Destructuring failure when entering custom indexes.
8. Fixed - rerank model data format.
1. New - Full voice input configuration. You can choose whether to enable voice input (including on share pages), whether to auto-send after voice input, and whether to auto-play the response as (streaming) speech after voice input.
2. New - Pptx and xlsx file reading. All file parsing now happens on the server, which consumes more server resources and means less content can be previewed at upload time.
3. New - Laf cloud function integration: cloud functions in a Laf account can be used as HTTP modules.
4. New - Scheduled cleanup of stale data. The cleanup covers only a small window (the most recent n hours), so keep the service running continuously; if it has been down for a long time, you can still call the clearInvalidData API for a full cleanup.
5. Commercial edition - Admin-configurable system notifications.
6. Changed - The csv import template no longer validates the header; the first two columns are read automatically.
7. Fixed - Data type validation error on tool-call module connections.
8. Fixed - Destructuring failure when entering custom indexes.
9. Fixed - rerank model data format.
10. Fixed - question completion history bug.

View File

@ -0,0 +1,88 @@
---
title: "Laf function call"
description: "Introduction to the FastGPT Laf function call module"
icon: "Laf"
draft: false
toc: true
weight: 355
---
## Features
- Can be added multiple times
- Has external inputs
- Manually configured
- Trigger execution
- Core module
![](/imgs/laf1.webp)
## Introduction
The Laf function call module invokes cloud functions under a Laf account. It works like the HTTP module and can be thought of as an HTTP module that wraps requests to Laf cloud functions. The notable differences are:
- Only POST requests can be used
- Requests automatically carry the system parameter object systemParams
## Usage
To call a Laf cloud function, you first need to bind a Laf account and application, and create a cloud function inside that application.
Laf provides a PAT (personal access token) for quick sign-in outside the Laf platform; see the [Laf documentation](https://doc.Laf.run/zh/cli/#%E7%99%BB%E5%BD%95) for details on how to obtain a PAT.
After obtaining the PAT, go to the fastgpt account page, or use the Laf module directly in advanced orchestration. Fill in the PAT for verification and select the application to bind (the application must be in the Running state); you can then call Laf cloud functions.
> To unbind, cancel the binding and then click "Update"
![](/imgs/laf2.webp)
To call Laf cloud functions more conveniently, write the cloud function along the lines of the code below so that it can be recognized via openAPI
```ts
import cloud from '@lafjs/cloud'
interface IRequestBody {
username: string // username
passwd?: string // password
}
interface IResponse {
message: string // response message
data: any // response data
}
type extendedBody = IRequestBody & {
systemParams?: {
appId: string,
variables: string,
histories: string,
cTime: string,
chatId: string,
responseChatItemId: string
}
}
export default async function (ctx: FunctionContext): Promise<IResponse> {
const body: extendedBody = ctx.body;
console.log(body.systemParams?.chatId);
return {
message: 'ok',
data: 'Found a user with username ' + body.username
};
}
```
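For illustration, the request FastGPT sends to a bound function is a plain POST with a JSON body: the module's own inputs plus the `systemParams` object described above. The URL below is a hypothetical Laf function endpoint and the field values are made up; only the `systemParams` keys mirror the interface in the code above.
```sh
# Hypothetical example of the POST request a bound Laf function receives.
# Replace the URL with your own Laf function address.
curl -X POST https://your-app.laf.run/your-function \
  -H "Content-Type: application/json" \
  -d '{
    "username": "alice",
    "systemParams": {
      "appId": "app_xxx",
      "variables": "{}",
      "histories": "[]",
      "cTime": "2024-04-08 21:00:00",
      "chatId": "chat_xxx",
      "responseChatItemId": "item_xxx"
    }
  }'
```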
In practice, go to the Laf functions page and create a new function (note that fastgpt only calls functions that accept POST requests), then copy the code above, or click "More templates", search for "fastgpt", and use the template below
![](/imgs/laf3.webp)
You can then simply click "Sync parameters" to fill in the inputs and outputs with one click
![](/imgs/laf4.webp)
Of course, parameters can also be added by hand; manually modified parameters will not be overwritten by "Sync parameters"
## Purpose
The Laf account is bound to the team, so team members can easily call cloud functions that have already been written

View File

@ -58,7 +58,7 @@
<!-- change -->
<script
src="https://cdn.jsdelivr.us/npm/medium-zoom/dist/medium-zoom.min.js"
src="https://cdn.jsdelivr.net/npm/medium-zoom/dist/medium-zoom.min.js"
crossorigin="anonymous"
referrerpolicy="no-referrer"
></script>

View File

@ -1,5 +1,5 @@
<head>
<script defer type="text/javascript" src="{{ "js/jsdelivr-auto-fallback.js" | absURL }}"></script>
<!-- <script defer type="text/javascript" src="{{ "js/jsdelivr-auto-fallback.js" | absURL }}"></script> -->
<meta charset="utf-8" />
<title>
{{- $url := replace .Permalink ( printf "%s" .Site.BaseURL) "" }}
@ -106,6 +106,6 @@
{{- end -}}
{{- end -}}
<!-- change -->
<link rel="preload" href="https://cdn.jsdelivr.us/npm/lxgw-wenkai-screen-webfont@1.1.0/style.css" as="style" />
<link rel="stylesheet" href="https://cdn.jsdelivr.us/npm/lxgw-wenkai-screen-webfont@1.1.0/style.css" />
<link rel="preload" href="https://cdn.jsdelivr.net/npm/lxgw-wenkai-screen-webfont@1.1.0/style.css" as="style" />
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/lxgw-wenkai-screen-webfont@1.1.0/style.css" />
</head>

View File

@ -4,7 +4,7 @@
let failed;
let isRunning;
const DEST_LIST = [
'cdn.jsdelivr.us',
'cdn.jsdelivr.net',
'jsd.cdn.zzko.cn',
'jsd.onmicrosoft.cn'
];

View File

@ -0,0 +1 @@
export type AuthGoogleTokenProps = { googleToken: string; remoteip?: string | null };

View File

@ -1,6 +1,6 @@
import type { LLMModelItemType } from '../ai/model.d';
import { AppTypeEnum } from './constants';
import { AppSchema, AppSimpleEditFormType } from './type';
import { AppSchema } from './type';
export type CreateAppParams = {
name?: string;

View File

@ -1,3 +1,5 @@
import { AppWhisperConfigType } from './type';
export enum AppTypeEnum {
simple = 'simple',
advanced = 'advanced'
@ -10,3 +12,9 @@ export const AppTypeMap = {
label: 'advanced'
}
};
export const defaultWhisperConfig: AppWhisperConfigType = {
open: false,
autoSend: false,
autoTTSResponse: false
};

View File

@ -1,9 +1,5 @@
import type {
AppTTSConfigType,
FlowNodeTemplateType,
ModuleItemType,
VariableItemType
} from '../module/type.d';
import type { FlowNodeTemplateType, ModuleItemType } from '../module/type.d';
import { AppTypeEnum } from './constants';
import { PermissionTypeEnum } from '../../support/permission/constant';
import type { DatasetModuleProps } from '../module/node/type.d';
@ -82,5 +78,31 @@ export type AppSimpleEditFormType = {
voice?: string | undefined;
speed?: number | undefined;
};
whisper: AppWhisperConfigType;
};
};
/* app function config */
// variable
export type VariableItemType = {
id: string;
key: string;
label: string;
type: `${VariableInputEnum}`;
required: boolean;
maxLen: number;
enums: { value: string }[];
};
// tts
export type AppTTSConfigType = {
type: 'none' | 'web' | 'model';
model?: string;
voice?: string;
speed?: number;
};
// whisper
export type AppWhisperConfigType = {
open: boolean;
autoSend: boolean;
autoTTSResponse: boolean;
};

View File

@ -9,6 +9,7 @@ import type { FlowNodeInputItemType } from '../module/node/type.d';
import { getGuideModule, splitGuideModule } from '../module/utils';
import { ModuleItemType } from '../module/type.d';
import { DatasetSearchModeEnum } from '../dataset/constants';
import { defaultWhisperConfig } from './constants';
export const getDefaultAppForm = (): AppSimpleEditFormType => {
return {
@ -36,7 +37,8 @@ export const getDefaultAppForm = (): AppSimpleEditFormType => {
questionGuide: false,
tts: {
type: 'web'
}
},
whisper: defaultWhisperConfig
}
};
};
@ -107,14 +109,15 @@ export const appModules2Form = ({ modules }: { modules: ModuleItemType[] }) => {
ModuleInputKeyEnum.datasetSearchExtensionBg
);
} else if (module.flowType === FlowNodeTypeEnum.userGuide) {
const { welcomeText, variableModules, questionGuide, ttsConfig } = splitGuideModule(
getGuideModule(modules)
);
const { welcomeText, variableModules, questionGuide, ttsConfig, whisperConfig } =
splitGuideModule(getGuideModule(modules));
defaultAppForm.userGuide = {
welcomeText: welcomeText,
variables: variableModules,
questionGuide: questionGuide,
tts: ttsConfig
tts: ttsConfig,
whisper: whisperConfig
};
} else if (module.flowType === FlowNodeTypeEnum.pluginModule) {
defaultAppForm.selectedTools.push({

View File

@ -109,7 +109,7 @@ export type ChatItemType = (UserChatItemType | SystemChatItemType | AIChatItemTy
};
export type ChatSiteItemType = (UserChatItemType | SystemChatItemType | AIChatItemType) & {
dataId?: string;
dataId: string;
status: `${ChatStatusEnum}`;
moduleName?: string;
ttsBuffer?: Uint8Array;

View File

@ -37,6 +37,7 @@ export enum ModuleInputKeyEnum {
userChatInput = 'userChatInput',
questionGuide = 'questionGuide',
tts = 'tts',
whisper = 'whisper',
answerText = 'text',
agents = 'agents', // cq agent key

View File

@ -63,24 +63,6 @@ export type ModuleItemType = {
};
/* --------------- function type -------------------- */
// variable
export type VariableItemType = {
id: string;
key: string;
label: string;
type: `${VariableInputEnum}`;
required: boolean;
maxLen: number;
enums: { value: string }[];
};
// tts
export type AppTTSConfigType = {
type: 'none' | 'web' | 'model';
model?: string;
voice?: string;
speed?: number;
};
export type SelectAppItemType = {
id: string;
name: string;

View File

@ -6,10 +6,12 @@ import {
variableMap
} from './constants';
import { FlowNodeInputItemType, FlowNodeOutputItemType } from './node/type';
import { AppTTSConfigType, ModuleItemType, VariableItemType } from './type';
import { ModuleItemType } from './type';
import type { VariableItemType, AppTTSConfigType, AppWhisperConfigType } from '../app/type';
import { Input_Template_Switch } from './template/input';
import { EditorVariablePickerType } from '../../../web/components/common/Textarea/PromptEditor/type';
import { Output_Template_Finish } from './template/output';
import { defaultWhisperConfig } from '../app/constants';
/* module */
export const getGuideModule = (modules: ModuleItemType[]) =>
@ -30,11 +32,16 @@ export const splitGuideModule = (guideModules?: ModuleItemType) => {
(item) => item.key === ModuleInputKeyEnum.tts
)?.value || { type: 'web' };
const whisperConfig: AppWhisperConfigType =
guideModules?.inputs?.find((item) => item.key === ModuleInputKeyEnum.whisper)?.value ||
defaultWhisperConfig;
return {
welcomeText,
variableModules,
questionGuide,
ttsConfig
ttsConfig,
whisperConfig
};
};

View File

@ -5,6 +5,7 @@ export type PathDataType = {
path: string;
params: any[];
request: any;
response: any;
};
export type OpenApiJsonSchema = {

View File

@ -43,7 +43,8 @@ export const str2OpenApiSchema = async (yamlStr = ''): Promise<OpenApiJsonSchema
name: methodInfo.operationId || path,
description: methodInfo.description || methodInfo.summary,
params: methodInfo.parameters,
request: methodInfo?.requestBody
request: methodInfo?.requestBody,
response: methodInfo.responses
};
return result;
});

View File

@ -7,3 +7,7 @@ export const getUserFingerprint = async () => {
const result = await fp.get();
console.log(result.visitorId);
};
export const hasHttps = () => {
return window.location.protocol === 'https:';
};

View File

@ -70,6 +70,7 @@ export const iconPaths = {
'core/app/simpleMode/template': () => import('./icons/core/app/simpleMode/template.svg'),
'core/app/simpleMode/tts': () => import('./icons/core/app/simpleMode/tts.svg'),
'core/app/simpleMode/variable': () => import('./icons/core/app/simpleMode/variable.svg'),
'core/app/simpleMode/whisper': () => import('./icons/core/app/simpleMode/whisper.svg'),
'core/app/toolCall': () => import('./icons/core/app/toolCall.svg'),
'core/app/ttsFill': () => import('./icons/core/app/ttsFill.svg'),
'core/app/variable/external': () => import('./icons/core/app/variable/external.svg'),
@ -77,12 +78,14 @@ export const iconPaths = {
'core/app/variable/select': () => import('./icons/core/app/variable/select.svg'),
'core/app/variable/textarea': () => import('./icons/core/app/variable/textarea.svg'),
'core/chat/QGFill': () => import('./icons/core/chat/QGFill.svg'),
'core/chat/cancelSpeak': () => import('./icons/core/chat/cancelSpeak.svg'),
'core/chat/chatFill': () => import('./icons/core/chat/chatFill.svg'),
'core/chat/chatLight': () => import('./icons/core/chat/chatLight.svg'),
'core/chat/chatModelTag': () => import('./icons/core/chat/chatModelTag.svg'),
'core/chat/feedback/badLight': () => import('./icons/core/chat/feedback/badLight.svg'),
'core/chat/feedback/goodLight': () => import('./icons/core/chat/feedback/goodLight.svg'),
'core/chat/fileSelect': () => import('./icons/core/chat/fileSelect.svg'),
'core/chat/finishSpeak': () => import('./icons/core/chat/finishSpeak.svg'),
'core/chat/quoteFill': () => import('./icons/core/chat/quoteFill.svg'),
'core/chat/quoteSign': () => import('./icons/core/chat/quoteSign.svg'),
'core/chat/recordFill': () => import('./icons/core/chat/recordFill.svg'),
@ -91,7 +94,6 @@ export const iconPaths = {
'core/chat/setTopLight': () => import('./icons/core/chat/setTopLight.svg'),
'core/chat/speaking': () => import('./icons/core/chat/speaking.svg'),
'core/chat/stopSpeech': () => import('./icons/core/chat/stopSpeech.svg'),
'core/chat/stopSpeechFill': () => import('./icons/core/chat/stopSpeechFill.svg'),
'core/dataset/commonDataset': () => import('./icons/core/dataset/commonDataset.svg'),
'core/dataset/datasetFill': () => import('./icons/core/dataset/datasetFill.svg'),
'core/dataset/datasetLight': () => import('./icons/core/dataset/datasetLight.svg'),

View File

@ -0,0 +1,6 @@
<svg t="1712207338160" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="6114"
width="128" height="128">
<path
d="M370.569846 945.230769c-18.825846 0.787692-34.658462-14.808615-35.446154-34.776615 0.787692-19.968 16.620308-35.524923 35.446154-34.776616h106.180923v-106.338461c-138.358154-10.436923-252.888615-118.153846-279.394461-262.774154a36.745846 36.745846 0 0 1 6.852923-26.545231 32.649846 32.649846 0 0 1 22.803692-13.154461c18.628923-3.426462 36.470154 9.412923 40.369231 29.065846 24.260923 122.249846 127.133538 208.817231 244.775384 205.902769 117.563077 2.875077 220.396308-83.613538 244.736-205.824 3.938462-19.613538 21.740308-32.374154 40.329847-28.987077a32.649846 32.649846 0 0 1 22.803692 13.115077c5.592615 7.483077 8.073846 17.092923 6.892308 26.545231-26.505846 144.580923-141.075692 252.297846-279.433847 262.656v106.338461h106.220308c18.786462-0.787692 34.619077 14.808615 35.367385 34.776616a37.179077 37.179077 0 0 1-10.909539 25.206154 32.964923 32.964923 0 0 1-24.457846 9.570461h-283.175384z m-36.076308-483.958154v-208.738461C338.628923 152.891077 417.595077 75.342769 511.488 78.769231c93.892923-3.426462 172.898462 74.161231 176.955077 173.883077v208.738461c-4.056615 99.721846-83.062154 177.309538-176.955077 173.883077-93.971692 3.426462-172.977231-74.24-176.994462-174.001231z"
fill="#F06E23" p-id="6115"></path>
</svg>


View File

@ -2,7 +2,7 @@
<g clip-path="url(#clip0_74_2)">
<path fill-rule="evenodd" clip-rule="evenodd"
d="M10 2.49999C5.85791 2.49999 2.50004 5.85786 2.50004 10C2.50004 14.1421 5.85791 17.5 10 17.5C14.1422 17.5 17.5 14.1421 17.5 10C17.5 5.85786 14.1422 2.49999 10 2.49999ZM0.833374 10C0.833374 4.93739 4.93743 0.833328 10 0.833328C15.0627 0.833328 19.1667 4.93739 19.1667 10C19.1667 15.0626 15.0627 19.1667 10 19.1667C4.93743 19.1667 0.833374 15.0626 0.833374 10ZM6.66671 7.5C6.66671 7.03976 7.0398 6.66666 7.50004 6.66666H12.5C12.9603 6.66666 13.3334 7.03976 13.3334 7.5V12.5C13.3334 12.9602 12.9603 13.3333 12.5 13.3333H7.50004C7.0398 13.3333 6.66671 12.9602 6.66671 12.5V7.5ZM8.33337 8.33333V11.6667H11.6667V8.33333H8.33337Z"
fill="#3370FF" />
fill="#fd853a" />
</g>
<defs>
<clipPath id="clip0_74_2">


View File

@ -0,0 +1,6 @@
<svg t="1712578349044" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="1183"
width="128" height="128">
<path
d="M512 105.472c225.28 0 407.04 181.76 407.04 407.04s-181.76 407.04-407.04 407.04-407.04-181.76-407.04-407.04 181.76-407.04 407.04-407.04z m0-74.24c-265.216 0-480.768 215.552-480.768 480.768s215.552 480.768 480.768 480.768 480.768-215.552 480.768-480.768-215.552-480.768-480.768-480.768z m254.976 296.96l-331.776 331.776-129.024-129.024-53.248 53.248 155.648 155.648 26.624 25.6 26.624-25.6 358.4-358.4-53.248-53.248z"
p-id="1184" fill="#039855"></path>
</svg>


View File

@ -205,7 +205,7 @@ const Button = defineStyleConfig({
bg: 'primary.50'
},
_disabled: {
bg: 'myGray.50'
bg: 'myGray.50 !important'
}
},
grayDanger: {

View File

@ -1,6 +1,6 @@
{
"name": "app",
"version": "4.7",
"version": "4.7.1",
"private": false,
"scripts": {
"dev": "next dev",

View File

@ -1,13 +1,10 @@
### FastGPT V4.7
1. New - Tool-call module: lets the LLM dynamically choose other models or plugins to execute based on user intent.
2. New - Classification and content extraction support functionCall mode. Models that support functionCall but not ToolCall can now be used as well: set `functionCall` to `true` and `toolChoice` to `false` in the LLM model config file. If `toolChoice` is true, tool mode is used.
3. New - HTTP plugins: quickly generate plugins from an OpenAPI spec.
4. Improved - Advanced orchestration performance.
5. Improved - AI model selection.
6. Improved - Manual knowledge-base entry modal.
7. Improved - Variable input modal.
8. Improved - Browser file reading auto-detects the encoding, reducing garbled text.
9. [See the advanced orchestration introduction](https://doc.fastgpt.in/docs/workflow/intro)
10. [Documentation](https://doc.fastgpt.in/docs/intro/)
11. [See the commercial edition](https://doc.fastgpt.in/docs/commercial/)
1. New - Full voice input configuration. You can choose whether to enable voice input (including on share pages), whether to auto-send after voice input, and whether to auto-play the response as (streaming) speech after voice input.
2. New - Pptx and xlsx file reading. All file parsing now happens on the server, which consumes more server resources and means less content can be previewed at upload time.
3. New - Laf cloud function integration: cloud functions in a Laf account can be used as HTTP modules.
4. Changed - The csv import template no longer validates the header; the first two columns are read automatically.
5. Fixed - question completion history bug.
6. [See the advanced orchestration introduction](https://doc.fastgpt.in/docs/workflow/intro)
7. [Documentation](https://doc.fastgpt.in/docs/intro/)
8. [See the commercial edition](https://doc.fastgpt.in/docs/commercial/)

View File

@ -275,6 +275,7 @@
"App intro": "App intro",
"App params config": "App Config",
"Chat Variable": "",
"Config whisper": "Config whisper",
"External using": "External use",
"Make a brief introduction of your app": "Make a brief introduction of your app",
"Max histories": "Dialog round",
@ -297,6 +298,7 @@
"Simple Config Tip": "Only basic functions are included. For complex agent functions, use advanced orchestration.",
"TTS": "Audio Speech",
"TTS Tip": "After this function is enabled, the voice playback function can be used after each conversation. Use of this feature may incur additional charges.",
"TTS start": "Reading content",
"Team tags": "Team tags",
"Temperature": "Temperature",
"Tool call": "Tool call",
@ -309,6 +311,9 @@
"This plugin cannot be called as a tool": "This tool cannot be used in easy mode"
},
"Welcome Text": "Welcome Text",
"Whisper": "Whisper",
"Whisper Tip": "",
"Whisper config": "Whisper config",
"create app": "Create App",
"deterministic": "Deterministic",
"edit": {
@ -395,11 +400,23 @@
"Test Listen": "Test",
"Test Listen Text": "Hello, this is a voice test, if you can hear this sentence, it means that the voice playback function is normal",
"Web": "Browser (free)"
},
"whisper": {
"Auto send": "Auto send",
"Auto send tip": "After the voice input is completed, you can send it directly, without manually clicking the send button",
"Auto tts response": "Auto tts response",
"Auto tts response tip": "Questions sent through voice input will be answered directly in the form of voice. Please ensure that the voice broadcast function is enabled.",
"Close": "Close",
"Not tts tip": "You have not turned on Voice playback and the feature is not available",
"Open": "Open",
"Switch": "Open whisper"
}
},
"chat": {
"Admin Mark Content": "Corrected response",
"Audio Speech Error": "Audio Speech Error",
"Cancel Speak": "Cancel speak",
"Canceled Speak": "Voice input has been cancelled",
"Chat API is error or undefined": "The session interface reported an error or returned null",
"Confirm to clear history": "Confirm to clear history?",
"Confirm to clear share chat history": " Are you sure to delete all chats?",
@ -415,6 +432,7 @@
"Feedback Submit": "Submit",
"Feedback Success": "Feedback Success",
"Feedback Update Failed": "Feedback Update Failed",
"Finish Speak": "Finish speak",
"History": "History",
"History Amount": "{{amount}} records",
"Mark": "Mark",

View File

@ -275,6 +275,7 @@
"App intro": "应用介绍",
"App params config": "应用配置",
"Chat Variable": "对话框变量",
"Config whisper": "配置语音输入",
"External using": "外部使用途径",
"Make a brief introduction of your app": "给你的 AI 应用一个介绍",
"Max histories": "聊天记录数量",
@ -295,8 +296,9 @@
"Share link desc": "分享链接给其他用户,无需登录即可直接进行使用",
"Share link desc detail": "可以直接分享该模型给其他用户去进行对话,对方无需登录即可直接进行对话。注意,这个功能会消耗你账号的余额,请保管好链接!",
"Simple Config Tip": "仅包含基础功能,复杂 agent 功能请使用高级编排。",
"TTS": "语音播",
"TTS": "语音播",
"TTS Tip": "开启后,每次对话后可使用语音播放功能。使用该功能可能产生额外费用。",
"TTS start": "朗读内容",
"Team tags": "团队标签",
"Temperature": "温度",
"Tool call": "工具调用",
@ -309,6 +311,9 @@
"This plugin cannot be called as a tool": "该工具无法在简易模式中使用"
},
"Welcome Text": "对话开场白",
"Whisper": "语音输入",
"Whisper Tip": "配置语音输入相关参数",
"Whisper config": "语音输入配置",
"create app": "创建属于你的 AI 应用",
"deterministic": "严谨",
"edit": {
@ -395,11 +400,23 @@
"Test Listen": "试听",
"Test Listen Text": "你好,这是语音测试,如果你能听到这句话,说明语音播放功能正常",
"Web": "浏览器自带(免费)"
},
"whisper": {
"Auto send": "自动发送",
"Auto send tip": "语音输入完毕后直接发送,不需要再手动点击发送按键",
"Auto tts response": "自动语音回复",
"Auto tts response tip": "通过语音输入发送的问题,会直接以语音的形式响应,请确保打开了语音播报功能。",
"Close": "关闭",
"Not tts tip": "你没有开启语音播放,该功能无法使用",
"Open": "开启",
"Switch": "开启语音输入"
}
},
"chat": {
"Admin Mark Content": "纠正后的回复",
"Audio Speech Error": "语音播报异常",
"Cancel Speak": "取消语音输入",
"Canceled Speak": "语音输入已取消",
"Chat API is error or undefined": "对话接口报错或返回为空",
"Confirm to clear history": "确认清空该应用的在线聊天记录?分享和 API 调用的记录不会被清空。",
"Confirm to clear share chat history": "确认删除所有聊天记录?",
@ -415,6 +432,7 @@
"Feedback Submit": "提交反馈",
"Feedback Success": "反馈成功!",
"Feedback Update Failed": "更新反馈状态失败",
"Finish Speak": "语音输入完成",
"History": "记录",
"History Amount": "{{amount}}条记录",
"Mark": "标注预期回答",
@ -1473,7 +1491,7 @@
"usage": {
"Ai model": "AI模型",
"App name": "应用名",
"Audio Speech": "语音播",
"Audio Speech": "语音播",
"Bill Module": "扣费模块",
"Chars length": "文本长度",
"Data Length": "数据长度",

View File

@ -1,7 +1,7 @@
import { useSpeech } from '@/web/common/hooks/useSpeech';
import { useSystemStore } from '@/web/common/system/useSystemStore';
import { Box, Flex, Image, Spinner, Textarea } from '@chakra-ui/react';
import React, { useRef, useEffect, useCallback, useMemo } from 'react';
import React, { useRef, useEffect, useCallback, useTransition } from 'react';
import { useTranslation } from 'next-i18next';
import MyTooltip from '../MyTooltip';
import MyIcon from '@fastgpt/web/components/common/Icon';
@ -12,32 +12,28 @@ import { ChatFileTypeEnum } from '@fastgpt/global/core/chat/constants';
import { addDays } from 'date-fns';
import { useRequest } from '@fastgpt/web/hooks/useRequest';
import { MongoImageTypeEnum } from '@fastgpt/global/common/file/image/constants';
import { OutLinkChatAuthProps } from '@fastgpt/global/support/permission/chat';
import { ChatBoxInputFormType, ChatBoxInputType, UserInputFileItemType } from './type';
import { textareaMinH } from './constants';
import { UseFormReturn, useFieldArray } from 'react-hook-form';
import { useChatProviderStore } from './Provider';
const nanoid = customAlphabet('abcdefghijklmnopqrstuvwxyz1234567890', 6);
const MessageInput = ({
onSendMessage,
onStop,
isChatting,
TextareaDom,
showFileSelector = false,
resetInputVal,
shareId,
outLinkUid,
teamId,
teamToken,
chatForm
}: OutLinkChatAuthProps & {
onSendMessage: (val: ChatBoxInputType) => void;
chatForm,
appId
}: {
onSendMessage: (val: ChatBoxInputType & { autoTTSResponse?: boolean }) => void;
onStop: () => void;
isChatting: boolean;
showFileSelector?: boolean;
TextareaDom: React.MutableRefObject<HTMLTextAreaElement | null>;
resetInputVal: (val: ChatBoxInputType) => void;
chatForm: UseFormReturn<ChatBoxInputFormType>;
appId?: string;
}) => {
const { setValue, watch, control } = chatForm;
const inputValue = watch('input');
@ -52,15 +48,8 @@ const MessageInput = ({
name: 'files'
});
const {
isSpeaking,
isTransCription,
stopSpeak,
startSpeak,
speakingTimeString,
renderAudioGraph,
stream
} = useSpeech({ shareId, outLinkUid, teamId, teamToken });
const { shareId, outLinkUid, teamId, teamToken, isChatting, whisperConfig, autoTTSResponse } =
useChatProviderStore();
const { isPc, whisperModel } = useSystemStore();
const canvasRef = useRef<HTMLCanvasElement>(null);
const { t } = useTranslation();
@ -163,6 +152,16 @@ const MessageInput = ({
replaceFile([]);
}, [TextareaDom, fileList, onSendMessage, replaceFile]);
/* whisper init */
const {
isSpeaking,
isTransCription,
stopSpeak,
startSpeak,
speakingTimeString,
renderAudioGraph,
stream
} = useSpeech({ appId, shareId, outLinkUid, teamId, teamToken });
useEffect(() => {
if (!stream) {
return;
@ -180,6 +179,28 @@ const MessageInput = ({
};
renderCurve();
}, [renderAudioGraph, stream]);
const finishWhisperTranscription = useCallback(
(text: string) => {
if (!text) return;
if (whisperConfig?.autoSend) {
onSendMessage({
text,
files: fileList,
autoTTSResponse
});
replaceFile([]);
} else {
resetInputVal({ text });
}
},
[autoTTSResponse, fileList, onSendMessage, replaceFile, resetInputVal, whisperConfig?.autoSend]
);
const onWhisperRecord = useCallback(() => {
if (isSpeaking) {
return stopSpeak();
}
startSpeak(finishWhisperTranscription);
}, [finishWhisperTranscription, isSpeaking, startSpeak, stopSpeak]);
return (
<Box m={['0 auto', '10px auto']} w={'100%'} maxW={['auto', 'min(800px, 100%)']} px={[0, 5]}>
@ -369,7 +390,7 @@ const MessageInput = ({
bottom={['10px', '12px']}
>
{/* voice-input */}
{!shareId && !havInput && !isChatting && !!whisperModel && (
{whisperConfig.open && !havInput && !isChatting && !!whisperModel && (
<>
<canvas
ref={canvasRef}
@ -380,32 +401,49 @@ const MessageInput = ({
zIndex: 0
}}
/>
<Flex
mr={2}
alignItems={'center'}
justifyContent={'center'}
flexShrink={0}
h={['26px', '32px']}
w={['26px', '32px']}
borderRadius={'md'}
cursor={'pointer'}
_hover={{ bg: '#F5F5F8' }}
onClick={() => {
if (isSpeaking) {
return stopSpeak();
}
startSpeak((text) => resetInputVal({ text }));
}}
>
<MyTooltip label={isSpeaking ? t('core.chat.Stop Speak') : t('core.chat.Record')}>
{isSpeaking && (
<MyTooltip label={t('core.chat.Cancel Speak')}>
<Flex
mr={2}
alignItems={'center'}
justifyContent={'center'}
flexShrink={0}
h={['26px', '32px']}
w={['26px', '32px']}
borderRadius={'md'}
cursor={'pointer'}
_hover={{ bg: '#F5F5F8' }}
onClick={() => stopSpeak(true)}
>
<MyIcon
name={'core/chat/cancelSpeak'}
width={['20px', '22px']}
height={['20px', '22px']}
/>
</Flex>
</MyTooltip>
)}
<MyTooltip label={isSpeaking ? t('core.chat.Finish Speak') : t('core.chat.Record')}>
<Flex
mr={2}
alignItems={'center'}
justifyContent={'center'}
flexShrink={0}
h={['26px', '32px']}
w={['26px', '32px']}
borderRadius={'md'}
cursor={'pointer'}
_hover={{ bg: '#F5F5F8' }}
onClick={onWhisperRecord}
>
<MyIcon
name={isSpeaking ? 'core/chat/stopSpeechFill' : 'core/chat/recordFill'}
name={isSpeaking ? 'core/chat/finishSpeak' : 'core/chat/recordFill'}
width={['20px', '22px']}
height={['20px', '22px']}
color={isSpeaking ? 'primary.500' : 'myGray.600'}
/>
</MyTooltip>
</Flex>
</Flex>
</MyTooltip>
</>
)}
{/* send and stop icon */}

View File

@ -0,0 +1,176 @@
import React, { useContext, createContext, useState, useMemo, useEffect, useCallback } from 'react';
import { useAudioPlay } from '@/web/common/utils/voice';
import { OutLinkChatAuthProps } from '@fastgpt/global/support/permission/chat';
import { ModuleItemType } from '@fastgpt/global/core/module/type';
import { splitGuideModule } from '@fastgpt/global/core/module/utils';
import {
AppTTSConfigType,
AppWhisperConfigType,
VariableItemType
} from '@fastgpt/global/core/app/type';
import { ChatSiteItemType } from '@fastgpt/global/core/chat/type';
type useChatStoreType = OutLinkChatAuthProps & {
welcomeText: string;
variableModules: VariableItemType[];
questionGuide: boolean;
ttsConfig: AppTTSConfigType;
whisperConfig: AppWhisperConfigType;
autoTTSResponse: boolean;
startSegmentedAudio: () => Promise<any>;
splitText2Audio: (text: string, done?: boolean | undefined) => void;
finishSegmentedAudio: () => void;
audioLoading: boolean;
audioPlaying: boolean;
hasAudio: boolean;
playAudioByText: ({
text,
buffer
}: {
text: string;
buffer?: Uint8Array | undefined;
}) => Promise<{
buffer?: Uint8Array | undefined;
}>;
cancelAudio: () => void;
audioPlayingChatId: string | undefined;
setAudioPlayingChatId: React.Dispatch<React.SetStateAction<string | undefined>>;
chatHistories: ChatSiteItemType[];
setChatHistories: React.Dispatch<React.SetStateAction<ChatSiteItemType[]>>;
isChatting: boolean;
};
const StateContext = createContext<useChatStoreType>({
welcomeText: '',
variableModules: [],
questionGuide: false,
ttsConfig: {
type: 'none',
model: undefined,
voice: undefined,
speed: undefined
},
whisperConfig: {
open: false,
autoSend: false,
autoTTSResponse: false
},
autoTTSResponse: false,
startSegmentedAudio: function (): Promise<any> {
throw new Error('Function not implemented.');
},
splitText2Audio: function (text: string, done?: boolean | undefined): void {
throw new Error('Function not implemented.');
},
chatHistories: [],
setChatHistories: function (value: React.SetStateAction<ChatSiteItemType[]>): void {
throw new Error('Function not implemented.');
},
isChatting: false,
audioLoading: false,
audioPlaying: false,
hasAudio: false,
playAudioByText: function ({
text,
buffer
}: {
text: string;
buffer?: Uint8Array | undefined;
}): Promise<{ buffer?: Uint8Array | undefined }> {
throw new Error('Function not implemented.');
},
cancelAudio: function (): void {
throw new Error('Function not implemented.');
},
audioPlayingChatId: undefined,
setAudioPlayingChatId: function (value: React.SetStateAction<string | undefined>): void {
throw new Error('Function not implemented.');
},
finishSegmentedAudio: function (): void {
throw new Error('Function not implemented.');
}
});
export type ChatProviderProps = OutLinkChatAuthProps & {
userGuideModule?: ModuleItemType;
// not chat test params
chatId?: string;
children: React.ReactNode;
};
export const useChatProviderStore = () => useContext(StateContext);
const Provider = ({
shareId,
outLinkUid,
teamId,
teamToken,
userGuideModule,
children
}: ChatProviderProps) => {
const [chatHistories, setChatHistories] = useState<ChatSiteItemType[]>([]);
const { welcomeText, variableModules, questionGuide, ttsConfig, whisperConfig } = useMemo(
() => splitGuideModule(userGuideModule),
[userGuideModule]
);
// segment audio
const [audioPlayingChatId, setAudioPlayingChatId] = useState<string>();
const {
audioLoading,
audioPlaying,
hasAudio,
playAudioByText,
cancelAudio,
startSegmentedAudio,
finishSegmentedAudio,
splitText2Audio
} = useAudioPlay({
ttsConfig,
shareId,
outLinkUid,
teamId,
teamToken
});
const autoTTSResponse =
whisperConfig?.open && whisperConfig?.autoSend && whisperConfig?.autoTTSResponse && hasAudio;
const isChatting = useMemo(
() =>
chatHistories[chatHistories.length - 1] &&
chatHistories[chatHistories.length - 1]?.status !== 'finish',
[chatHistories]
);
const value: useChatStoreType = {
shareId,
outLinkUid,
teamId,
teamToken,
welcomeText,
variableModules,
questionGuide,
ttsConfig,
whisperConfig,
autoTTSResponse,
startSegmentedAudio,
finishSegmentedAudio,
splitText2Audio,
audioLoading,
audioPlaying,
hasAudio,
playAudioByText,
cancelAudio,
audioPlayingChatId,
setAudioPlayingChatId,
chatHistories,
setChatHistories,
isChatting
};
return <StateContext.Provider value={value}>{children}</StateContext.Provider>;
};
export default React.memo(Provider);

View File

@ -2,21 +2,18 @@ import { useCopyData } from '@/web/common/hooks/useCopyData';
import { useAudioPlay } from '@/web/common/utils/voice';
import { Flex, FlexProps, Image, css, useTheme } from '@chakra-ui/react';
import { ChatSiteItemType } from '@fastgpt/global/core/chat/type';
import { AppTTSConfigType } from '@fastgpt/global/core/module/type';
import { OutLinkChatAuthProps } from '@fastgpt/global/support/permission/chat';
import MyTooltip from '@fastgpt/web/components/common/MyTooltip';
import React from 'react';
import React, { useMemo } from 'react';
import { useTranslation } from 'next-i18next';
import MyIcon from '@fastgpt/web/components/common/Icon';
import { formatChatValue2InputType } from '../utils';
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { useChatProviderStore } from '../Provider';
export type ChatControllerProps = {
isChatting: boolean;
isLastChild: boolean;
chat: ChatSiteItemType;
setChatHistories?: React.Dispatch<React.SetStateAction<ChatSiteItemType[]>>;
showVoiceIcon?: boolean;
ttsConfig?: AppTTSConfigType;
onRetry?: () => void;
onDelete?: () => void;
onMark?: () => void;
@ -27,33 +24,29 @@ export type ChatControllerProps = {
};
const ChatController = ({
isChatting,
chat,
setChatHistories,
isLastChild,
showVoiceIcon,
ttsConfig,
onReadUserDislike,
onCloseUserLike,
onMark,
onRetry,
onDelete,
onAddUserDislike,
onAddUserLike,
shareId,
outLinkUid,
teamId,
teamToken
}: OutLinkChatAuthProps & ChatControllerProps & FlexProps) => {
onAddUserLike
}: ChatControllerProps & FlexProps) => {
const theme = useTheme();
const { t } = useTranslation();
const { copyData } = useCopyData();
const { audioLoading, audioPlaying, hasAudio, playAudio, cancelAudio } = useAudioPlay({
ttsConfig,
shareId,
outLinkUid,
teamId,
teamToken
});
const {
isChatting,
setChatHistories,
audioLoading,
audioPlaying,
hasAudio,
playAudioByText,
cancelAudio,
audioPlayingChatId,
setAudioPlayingChatId
} = useChatProviderStore();
const controlIconStyle = {
w: '14px',
cursor: 'pointer',
@ -67,6 +60,11 @@ const ChatController = ({
display: 'flex'
};
const { t } = useTranslation();
const { copyData } = useCopyData();
const chatText = useMemo(() => formatChatValue2InputType(chat.value).text || '', [chat.value]);
return (
<Flex
{...controlContainerStyle}
@ -86,7 +84,7 @@ const ChatController = ({
{...controlIconStyle}
name={'copy'}
_hover={{ color: 'primary.600' }}
onClick={() => copyData(formatChatValue2InputType(chat.value).text || '')}
onClick={() => copyData(chatText)}
/>
</MyTooltip>
{!!onDelete && !isChatting && (
@ -113,51 +111,65 @@ const ChatController = ({
)}
{showVoiceIcon &&
hasAudio &&
(audioLoading ? (
<MyTooltip label={t('common.Loading')}>
<MyIcon {...controlIconStyle} name={'common/loading'} />
</MyTooltip>
) : audioPlaying ? (
<Flex alignItems={'center'}>
<MyTooltip label={t('core.chat.tts.Stop Speech')}>
(() => {
const isPlayingChat = chat.dataId === audioPlayingChatId;
if (isPlayingChat && audioPlaying) {
return (
<Flex alignItems={'center'}>
<MyTooltip label={t('core.chat.tts.Stop Speech')}>
<MyIcon
{...controlIconStyle}
borderRight={'none'}
name={'core/chat/stopSpeech'}
color={'#E74694'}
onClick={cancelAudio}
/>
</MyTooltip>
<Image
src="/icon/speaking.gif"
w={'23px'}
alt={''}
borderRight={theme.borders.base}
/>
</Flex>
);
}
if (isPlayingChat && audioLoading) {
return (
<MyTooltip label={t('common.Loading')}>
<MyIcon {...controlIconStyle} name={'common/loading'} />
</MyTooltip>
);
}
return (
<MyTooltip label={t('core.app.TTS start')}>
<MyIcon
{...controlIconStyle}
borderRight={'none'}
name={'core/chat/stopSpeech'}
color={'#E74694'}
onClick={() => cancelAudio()}
name={'common/voiceLight'}
_hover={{ color: '#E74694' }}
onClick={async () => {
setAudioPlayingChatId(chat.dataId);
const response = await playAudioByText({
buffer: chat.ttsBuffer,
text: chatText
});
if (!setChatHistories || !response.buffer) return;
setChatHistories((state) =>
state.map((item) =>
item.dataId === chat.dataId
? {
...item,
ttsBuffer: response.buffer
}
: item
)
);
}}
/>
</MyTooltip>
<Image src="/icon/speaking.gif" w={'23px'} alt={''} borderRight={theme.borders.base} />
</Flex>
) : (
<MyTooltip label={t('core.app.TTS')}>
<MyIcon
{...controlIconStyle}
name={'common/voiceLight'}
_hover={{ color: '#E74694' }}
onClick={async () => {
const response = await playAudio({
buffer: chat.ttsBuffer,
chatItemId: chat.dataId,
text: formatChatValue2InputType(chat.value).text || ''
});
if (!setChatHistories || !response.buffer) return;
setChatHistories((state) =>
state.map((item) =>
item.dataId === chat.dataId
? {
...item,
ttsBuffer: response.buffer
}
: item
)
);
}}
/>
</MyTooltip>
))}
);
})()}
{!!onMark && (
<MyTooltip label={t('core.chat.Mark')}>
<MyIcon

View File

@ -25,6 +25,7 @@ import {
ChatStatusEnum
} from '@fastgpt/global/core/chat/constants';
import FilesBlock from './FilesBox';
import { useChatProviderStore } from '../Provider';
const colorMap = {
[ChatStatusEnum.loading]: {
@ -56,11 +57,9 @@ const ChatItem = ({
status: `${ChatStatusEnum}`;
name: string;
};
isLastChild?: boolean;
questionGuides?: string[];
children?: React.ReactNode;
} & ChatControllerProps) => {
const theme = useTheme();
const styleMap: BoxProps =
type === ChatRoleEnum.Human
? {
@ -77,7 +76,9 @@ const ChatItem = ({
textAlign: 'left',
bg: 'myGray.50'
};
const { chat, isChatting } = chatControllerProps;
const { isChatting } = useChatProviderStore();
const { chat } = chatControllerProps;
const ContentCard = useMemo(() => {
if (type === 'Human') {
@ -209,7 +210,7 @@ ${toolResponse}`}
<Flex w={'100%'} alignItems={'center'} gap={2} justifyContent={styleMap.justifyContent}>
{isChatting && type === ChatRoleEnum.AI && isLastChild ? null : (
<Box order={styleMap.order} ml={styleMap.ml}>
<ChatController {...chatControllerProps} />
<ChatController {...chatControllerProps} isLastChild={isLastChild} />
</Box>
)}
<ChatAvatar src={avatar} type={type} />

View File

@ -1,4 +1,4 @@
import { VariableItemType } from '@fastgpt/global/core/module/type';
import { VariableItemType } from '@fastgpt/global/core/app/type.d';
import React, { useState } from 'react';
import { UseFormReturn } from 'react-hook-form';
import { useTranslation } from 'next-i18next';

View File

@ -11,3 +11,9 @@ export const MessageCardStyle: BoxProps = {
maxW: ['calc(100% - 25px)', 'calc(100% - 40px)'],
color: 'myGray.900'
};
export enum FeedbackTypeEnum {
user = 'user',
admin = 'admin',
hidden = 'hidden'
}

View File

@ -11,7 +11,6 @@ import React, {
import Script from 'next/script';
import { throttle } from 'lodash';
import type {
AIChatItemType,
AIChatItemValueItemType,
ChatSiteItemType,
UserChatItemValueItemType
@ -39,7 +38,6 @@ import type { AdminMarkType } from './SelectMarkCollection';
import MyTooltip from '../MyTooltip';
import { postQuestionGuide } from '@/web/core/ai/api';
import { splitGuideModule } from '@fastgpt/global/core/module/utils';
import type {
generatingMessageProps,
StartChatFnProps,
@ -55,6 +53,8 @@ import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/c
import { formatChatValue2InputType } from './utils';
import { textareaMinH } from './constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/module/runtime/constants';
import ChatProvider, { useChatProviderStore } from './Provider';
import ChatItem from './components/ChatItem';
import dynamic from 'next/dynamic';
@ -82,9 +82,9 @@ type Props = OutLinkChatAuthProps & {
userGuideModule?: ModuleItemType;
showFileSelector?: boolean;
active?: boolean; // can use
appId: string;
// not chat test params
appId?: string;
chatId?: string;
onUpdateVariable?: (e: Record<string, any>) => void;
@ -112,7 +112,6 @@ const ChatBox = (
showEmptyIntro = false,
appAvatar,
userAvatar,
userGuideModule,
showFileSelector,
active = true,
appId,
@ -137,7 +136,6 @@ const ChatBox = (
const questionGuideController = useRef(new AbortController());
const isNewChatReplace = useRef(false);
const [chatHistories, setChatHistories] = useState<ChatSiteItemType[]>([]);
const [feedbackId, setFeedbackId] = useState<string>();
const [readFeedbackData, setReadFeedbackData] = useState<{
chatItemId: string;
@ -146,17 +144,20 @@ const ChatBox = (
const [adminMarkData, setAdminMarkData] = useState<AdminMarkType & { chatItemId: string }>();
const [questionGuides, setQuestionGuide] = useState<string[]>([]);
const isChatting = useMemo(
() =>
chatHistories[chatHistories.length - 1] &&
chatHistories[chatHistories.length - 1]?.status !== 'finish',
[chatHistories]
);
const {
welcomeText,
variableModules,
questionGuide,
startSegmentedAudio,
finishSegmentedAudio,
setAudioPlayingChatId,
splitText2Audio,
chatHistories,
setChatHistories,
isChatting
} = useChatProviderStore();
const { welcomeText, variableModules, questionGuide, ttsConfig } = useMemo(
() => splitGuideModule(userGuideModule),
[userGuideModule]
);
/* variable */
const filterVariableModules = useMemo(
() => variableModules.filter((item) => item.type !== VariableInputEnum.external),
[variableModules]
@ -171,10 +172,9 @@ const ChatBox = (
chatStarted: false
}
});
const { setValue, watch, handleSubmit, control } = chatForm;
const { setValue, watch, handleSubmit } = chatForm;
const variables = watch('variables');
const chatStarted = watch('chatStarted');
const variableIsFinish = useMemo(() => {
if (!filterVariableModules || filterVariableModules.length === 0 || chatHistories.length > 0)
return true;
@ -212,12 +212,21 @@ const ChatBox = (
);
// eslint-disable-next-line react-hooks/exhaustive-deps
const generatingMessage = useCallback(
({ event, text = '', status, name, tool }: generatingMessageProps) => {
({
event,
text = '',
status,
name,
tool,
autoTTSResponse
}: generatingMessageProps & { autoTTSResponse?: boolean }) => {
setChatHistories((state) =>
state.map((item, index) => {
if (index !== state.length - 1) return item;
if (item.obj !== ChatRoleEnum.AI) return item;
autoTTSResponse && splitText2Audio(formatChatValue2InputType(item.value).text || '');
const lastValue: AIChatItemValueItemType = JSON.parse(
JSON.stringify(item.value[item.value.length - 1])
);
@ -299,7 +308,7 @@ const ChatBox = (
);
generatingScroll();
},
[generatingScroll]
[generatingScroll, setChatHistories, splitText2Audio]
);
// Reset the input content
@ -357,8 +366,10 @@ const ChatBox = (
({
text = '',
files = [],
history = chatHistories
history = chatHistories,
autoTTSResponse = false
}: ChatBoxInputType & {
autoTTSResponse?: boolean;
history?: ChatSiteItemType[];
}) => {
handleSubmit(async ({ variables }) => {
@ -370,7 +381,7 @@ const ChatBox = (
});
return;
}
questionGuideController.current?.abort('stop');
text = text.trim();
if (!text && files.length === 0) {
@ -381,6 +392,15 @@ const ChatBox = (
return;
}
const responseChatId = getNanoid(24);
questionGuideController.current?.abort('stop');
// set auto audio playing
if (autoTTSResponse) {
await startSegmentedAudio();
setAudioPlayingChatId(responseChatId);
}
const newChatList: ChatSiteItemType[] = [
...history,
{
@ -409,7 +429,7 @@ const ChatBox = (
status: 'finish'
},
{
dataId: getNanoid(24),
dataId: responseChatId,
obj: ChatRoleEnum.AI,
value: [
{
@ -447,7 +467,7 @@ const ChatBox = (
chatList: newChatList,
messages,
controller: abortSignal,
generatingMessage,
generatingMessage: (e) => generatingMessage({ ...e, autoTTSResponse }),
variables
});
@ -485,6 +505,9 @@ const ChatBox = (
generatingScroll();
isPc && TextareaDom.current?.focus();
}, 100);
// tts audio
autoTTSResponse && splitText2Audio(responseText, true);
} catch (err: any) {
toast({
title: t(getErrText(err, 'core.chat.error.Chat error')),
@ -509,11 +532,14 @@ const ChatBox = (
})
);
}
autoTTSResponse && finishSegmentedAudio();
})();
},
[
chatHistories,
createQuestionGuide,
finishSegmentedAudio,
generatingMessage,
generatingScroll,
handleSubmit,
@ -521,6 +547,10 @@ const ChatBox = (
isPc,
onStartChat,
resetInputVal,
setAudioPlayingChatId,
setChatHistories,
splitText2Audio,
startSegmentedAudio,
t,
toast
]
@ -875,9 +905,9 @@ const ChatBox = (
type={item.obj}
avatar={item.obj === 'Human' ? userAvatar : appAvatar}
chat={item}
isChatting={isChatting}
onRetry={retryInput(item.dataId)}
onDelete={delOneMessage(item.dataId)}
isLastChild={index === chatHistories.length - 1}
/>
)}
{item.obj === 'AI' && (
@ -886,17 +916,14 @@ const ChatBox = (
type={item.obj}
avatar={appAvatar}
chat={item}
isChatting={isChatting}
isLastChild={index === chatHistories.length - 1}
{...(item.obj === 'AI' && {
setChatHistories,
showVoiceIcon,
ttsConfig,
shareId,
outLinkUid,
teamId,
teamToken,
statusBoxData,
isLastChild: index === chatHistories.length - 1,
questionGuides,
onMark: onMark(
item,
@ -957,15 +984,11 @@ const ChatBox = (
<MessageInput
onSendMessage={sendPrompt}
onStop={() => chatController.current?.abort('stop')}
isChatting={isChatting}
TextareaDom={TextareaDom}
resetInputVal={resetInputVal}
showFileSelector={showFileSelector}
shareId={shareId}
outLinkUid={outLinkUid}
teamId={teamId}
teamToken={teamToken}
chatForm={chatForm}
appId={appId}
/>
)}
{/* user feedback modal */}
@ -1063,5 +1086,14 @@ const ChatBox = (
</Flex>
);
};
const ForwardChatBox = forwardRef(ChatBox);
export default React.memo(forwardRef(ChatBox));
const ChatBoxContainer = (props: Props, ref: ForwardedRef<ComponentRef>) => {
return (
<ChatProvider {...props}>
<ForwardChatBox {...props} ref={ref} />
</ChatProvider>
);
};
export default React.memo(forwardRef(ChatBoxContainer));

View File

@ -55,7 +55,7 @@ const SettingLLMModel = ({ llmModelType = LLMModelTypeEnum.all, defaultData, onC
leftIcon={
<Avatar
borderRadius={'0'}
src={selectedModel.avatar || HUGGING_FACE_ICON}
src={selectedModel?.avatar || HUGGING_FACE_ICON}
fallbackSrc={HUGGING_FACE_ICON}
w={'18px'}
/>

View File

@ -5,7 +5,7 @@ import { Box, Button, Flex, ModalBody, useDisclosure, Image } from '@chakra-ui/r
import React, { useCallback, useMemo } from 'react';
import { useTranslation } from 'next-i18next';
import { TTSTypeEnum } from '@/constants/app';
import type { AppTTSConfigType } from '@fastgpt/global/core/module/type.d';
import type { AppTTSConfigType } from '@fastgpt/global/core/app/type.d';
import { useAudioPlay } from '@/web/common/utils/voice';
import { useSystemStore } from '@/web/common/system/useSystemStore';
import MyModal from '@fastgpt/web/components/common/MyModal';
@ -46,7 +46,9 @@ const TTSSelect = ({
[formatValue, list, t]
);
const { playAudio, cancelAudio, audioLoading, audioPlaying } = useAudioPlay({ ttsConfig: value });
const { playAudioByText, cancelAudio, audioLoading, audioPlaying } = useAudioPlay({
ttsConfig: value
});
const onclickChange = useCallback(
(e: string) => {
@ -137,9 +139,7 @@ const TTSSelect = ({
color={'primary.600'}
isLoading={audioLoading}
leftIcon={<MyIcon name={'core/chat/stopSpeech'} w={'16px'} />}
onClick={() => {
cancelAudio();
}}
onClick={cancelAudio}
>
{t('core.chat.tts.Stop Speech')}
</Button>
@ -149,7 +149,7 @@ const TTSSelect = ({
isLoading={audioLoading}
leftIcon={<MyIcon name={'core/app/headphones'} w={'16px'} />}
onClick={() => {
playAudio({
playAudioByText({
text: t('core.app.tts.Test Listen Text')
});
}}

View File

@ -26,7 +26,7 @@ import {
} from '@chakra-ui/react';
import { QuestionOutlineIcon, SmallAddIcon } from '@chakra-ui/icons';
import { VariableInputEnum, variableMap } from '@fastgpt/global/core/module/constants';
import type { VariableItemType } from '@fastgpt/global/core/module/type.d';
import type { VariableItemType } from '@fastgpt/global/core/app/type.d';
import MyIcon from '@fastgpt/web/components/common/Icon';
import { useForm } from 'react-hook-form';
import { useFieldArray } from 'react-hook-form';

View File

@ -0,0 +1,116 @@
import MyIcon from '@fastgpt/web/components/common/Icon';
import MyTooltip from '@/components/MyTooltip';
import { Box, Button, Flex, ModalBody, useDisclosure, Switch } from '@chakra-ui/react';
import React, { useMemo } from 'react';
import { useTranslation } from 'next-i18next';
import type { AppWhisperConfigType } from '@fastgpt/global/core/app/type.d';
import MyModal from '@fastgpt/web/components/common/MyModal';
import QuestionTip from '@fastgpt/web/components/common/MyTooltip/QuestionTip';
const WhisperConfig = ({
isOpenAudio,
value,
onChange
}: {
isOpenAudio: boolean;
value: AppWhisperConfigType;
onChange: (e: AppWhisperConfigType) => void;
}) => {
const { t } = useTranslation();
const { isOpen, onOpen, onClose } = useDisclosure();
const isOpenWhisper = value.open;
const isAutoSend = value.autoSend;
const formLabel = useMemo(() => {
if (!isOpenWhisper) {
return t('core.app.whisper.Close');
}
return t('core.app.whisper.Open');
}, [t, isOpenWhisper]);
return (
<Flex alignItems={'center'}>
<MyIcon name={'core/app/simpleMode/whisper'} mr={2} w={'20px'} />
<Box>{t('core.app.Whisper')}</Box>
<Box flex={1} />
<MyTooltip label={t('core.app.Config whisper')}>
<Button
variant={'transparentBase'}
iconSpacing={1}
size={'sm'}
fontSize={'md'}
mr={'-5px'}
onClick={onOpen}
>
{formLabel}
</Button>
</MyTooltip>
<MyModal
title={t('core.app.Whisper config')}
iconSrc="core/app/simpleMode/whisper"
isOpen={isOpen}
onClose={onClose}
>
<ModalBody px={[5, 16]} py={[4, 8]}>
<Flex justifyContent={'space-between'} alignItems={'center'}>
{t('core.app.whisper.Switch')}
<Switch
isChecked={isOpenWhisper}
size={'lg'}
onChange={(e) => {
onChange({
...value,
open: e.target.checked
});
}}
/>
</Flex>
{isOpenWhisper && (
<Flex mt={8} alignItems={'center'}>
{t('core.app.whisper.Auto send')}
<QuestionTip label={t('core.app.whisper.Auto send tip')} />
<Box flex={'1 0 0'} />
<Switch
isChecked={value.autoSend}
size={'lg'}
onChange={(e) => {
onChange({
...value,
autoSend: e.target.checked
});
}}
/>
</Flex>
)}
{isOpenWhisper && isAutoSend && (
<>
<Flex mt={8} alignItems={'center'}>
{t('core.app.whisper.Auto tts response')}
<QuestionTip label={t('core.app.whisper.Auto tts response tip')} />
<Box flex={'1 0 0'} />
<Switch
isChecked={value.autoTTSResponse}
size={'lg'}
onChange={(e) => {
onChange({
...value,
autoTTSResponse: e.target.checked
});
}}
/>
</Flex>
{!isOpenAudio && (
<Box mt={1} color={'myGray.600'} fontSize={'sm'}>
{t('core.app.whisper.Not tts tip')}
</Box>
)}
</>
)}
</ModalBody>
</MyModal>
</Flex>
);
};
export default React.memo(WhisperConfig);

View File

@ -121,6 +121,7 @@ const ChatTest = (
<Box flex={1}>
<ChatBox
ref={ChatBoxRef}
appId={app._id}
appAvatar={app.avatar}
userAvatar={userInfo?.avatar}
showMarkIcon

View File

@ -16,13 +16,17 @@ import { useSystemStore } from '@/web/common/system/useSystemStore';
import { ChevronRightIcon } from '@chakra-ui/icons';
import { useQuery } from '@tanstack/react-query';
import dynamic from 'next/dynamic';
import { FlowNodeInputTypeEnum } from '@fastgpt/global/core/module/node/constant';
import {
FlowNodeInputTypeEnum,
FlowNodeOutputTypeEnum
} from '@fastgpt/global/core/module/node/constant';
import { useToast } from '@fastgpt/web/hooks/useToast';
import Divider from '../modules/Divider';
import RenderToolInput from '../render/RenderToolInput';
import RenderInput from '../render/RenderInput';
import RenderOutput from '../render/RenderOutput';
import { getErrText } from '@fastgpt/global/common/error/utils';
import { useRequest } from '@fastgpt/web/hooks/useRequest';
const LafAccountModal = dynamic(() => import('@/components/support/laf/LafAccountModal'));
@ -31,7 +35,7 @@ const NodeLaf = (props: NodeProps<FlowModuleItemType>) => {
const { toast } = useToast();
const { feConfigs } = useSystemStore();
const { data, selected } = props;
const { moduleId, inputs } = data;
const { moduleId, inputs, outputs } = data;
const requestUrl = inputs.find((item) => item.key === ModuleInputKeyEnum.httpReqUrl);
@ -49,7 +53,11 @@ const NodeLaf = (props: NodeProps<FlowModuleItemType>) => {
);
}
const { data: lafData, isLoading: isLoadingFunctions } = useQuery(
const {
data: lafData,
isLoading: isLoadingFunctions,
refetch: refetchFunction
} = useQuery(
['getLafFunctionList'],
async () => {
// load laf app detail
@ -94,61 +102,99 @@ const NodeLaf = (props: NodeProps<FlowModuleItemType>) => {
[lafFunctionSelectList, requestUrl?.value]
);
const onSyncParams = useCallback(() => {
const lafFunction = lafData?.lafFunctions.find((item) => item.requestUrl === selectedFunction);
const { mutate: onSyncParams, isLoading: isSyncing } = useRequest({
mutationFn: async () => {
await refetchFunction();
const lafFunction = lafData?.lafFunctions.find(
(item) => item.requestUrl === selectedFunction
);
if (!lafFunction) return;
if (!lafFunction) return;
const bodyParams =
lafFunction?.request?.content?.['application/json']?.schema?.properties || {};
const bodyParams =
lafFunction?.request?.content?.['application/json']?.schema?.properties || {};
const requiredParams =
lafFunction?.request?.content?.['application/json']?.schema?.required || [];
const requiredParams =
lafFunction?.request?.content?.['application/json']?.schema?.required || [];
const allParams = [
...Object.keys(bodyParams).map((key) => ({
name: key,
desc: bodyParams[key].description,
required: requiredParams?.includes(key) || false,
value: `{{${key}}}`,
type: 'string'
}))
].filter((item) => !inputs.find((input) => input.key === item.name));
const allParams = [
...Object.keys(bodyParams).map((key) => ({
name: key,
desc: bodyParams[key].description,
required: requiredParams?.includes(key) || false,
value: `{{${key}}}`,
type: 'string'
}))
].filter((item) => !inputs.find((input) => input.key === item.name));
// add params
allParams.forEach((param) => {
onChangeNode({
moduleId,
type: 'addInput',
key: param.name,
value: {
// add params
allParams.forEach((param) => {
onChangeNode({
moduleId,
type: 'addInput',
key: param.name,
valueType: ModuleIOValueTypeEnum.string,
label: param.name,
type: FlowNodeInputTypeEnum.target,
required: param.required,
description: param.desc || '',
toolDescription: param.desc || '未设置参数描述',
edit: true,
editField: {
key: true,
name: true,
description: true,
required: true,
dataType: true,
inputType: true,
isToolInput: true
},
connected: false
}
value: {
key: param.name,
valueType: ModuleIOValueTypeEnum.string,
label: param.name,
type: FlowNodeInputTypeEnum.target,
required: param.required,
description: param.desc || '',
toolDescription: param.desc || '未设置参数描述',
edit: true,
editField: {
key: true,
name: true,
description: true,
required: true,
dataType: true,
inputType: true,
isToolInput: true
},
connected: false
}
});
});
});
toast({
status: 'success',
title: t('common.Sync success')
});
}, [inputs, lafData?.lafFunctions, moduleId, selectedFunction, t, toast]);
const responseParams =
lafFunction?.response?.default.content?.['application/json'].schema.properties || {};
const requiredResponseParams =
lafFunction?.response?.default.content?.['application/json'].schema.required || [];
const allResponseParams = [
...Object.keys(responseParams).map((key) => ({
valueType: responseParams[key].type,
name: key,
desc: responseParams[key].description,
required: requiredResponseParams?.includes(key) || false
}))
].filter((item) => !outputs.find((output) => output.key === item.name));
allResponseParams.forEach((param) => {
onChangeNode({
moduleId,
type: 'addOutput',
key: param.name,
value: {
key: param.name,
valueType: param.valueType,
label: param.name,
type: FlowNodeOutputTypeEnum.source,
required: param.required,
description: param.desc || '',
edit: true,
editField: {
key: true,
description: true,
dataType: true,
defaultValue: true
},
targets: []
}
});
});
},
successToast: t('common.Sync success')
});
return (
<NodeCard minW={'350px'} selected={selected} {...data}>
@ -174,9 +220,9 @@ const NodeLaf = (props: NodeProps<FlowModuleItemType>) => {
{/* auto set params and go to edit */}
{!!selectedFunction && (
<Flex justifyContent={'flex-end'} mt={2} gap={2}>
{/* <Button variant={'whiteBase'} size={'sm'} onClick={onSyncParams}>
<Button isLoading={isSyncing} variant={'grayBase'} size={'sm'} onClick={onSyncParams}>
{t('core.module.Laf sync params')}
</Button> */}
</Button>
<Button
variant={'grayBase'}
size={'sm'}

View File

@ -7,14 +7,14 @@ import { ModuleInputKeyEnum } from '@fastgpt/global/core/module/constants';
import { welcomeTextTip } from '@fastgpt/global/core/module/template/tip';
import { onChangeNode } from '../../FlowProvider';
import VariableEdit from '../modules/VariableEdit';
import VariableEdit from '../../../../app/VariableEdit';
import MyIcon from '@fastgpt/web/components/common/Icon';
import MyTooltip from '@/components/MyTooltip';
import Container from '../modules/Container';
import NodeCard from '../render/NodeCard';
import type { VariableItemType } from '@fastgpt/global/core/module/type.d';
import QGSwitch from '@/components/core/module/Flow/components/modules/QGSwitch';
import TTSSelect from '@/components/core/module/Flow/components/modules/TTSSelect';
import type { VariableItemType } from '@fastgpt/global/core/app/type.d';
import QGSwitch from '@/components/core/app/QGSwitch';
import TTSSelect from '@/components/core/app/TTSSelect';
import { splitGuideModule } from '@fastgpt/global/core/module/utils';
import { useTranslation } from 'next-i18next';

View File

@ -1,4 +1,4 @@
import type { AppTTSConfigType } from '@fastgpt/global/core/module/type.d';
import type { AppTTSConfigType } from '@fastgpt/global/core/app/type.d';
import { ModuleItemType } from '../module/type';
import { AdminFbkType, ChatItemType } from '@fastgpt/global/core/chat/type';
import type { OutLinkChatAuthProps } from '@fastgpt/global/support/permission/chat.d';

View File

@ -12,7 +12,6 @@ import { MongoTTSBuffer } from '@fastgpt/service/common/buffer/tts/schema';
/*
1. get tts from chatItem store
2. get tts from ai
3. save tts to chatItem store if chatItemId is provided
4. push bill
*/
@ -34,6 +33,7 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
throw new Error('voice not found');
}
/* get audio from buffer */
const ttsBuffer = await MongoTTSBuffer.findOne(
{
bufferId: voiceData.bufferId,
@ -46,6 +46,7 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
return res.end(new Uint8Array(ttsBuffer.buffer.buffer));
}
/* request audio */
await text2Speech({
res,
input,
@ -54,6 +55,7 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
speed: ttsConfig.speed,
onSuccess: async ({ model, buffer }) => {
try {
/* bill */
pushAudioSpeechUsage({
model: model,
charsLength: input.length,
@ -62,6 +64,7 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
source: authType2UsageSource({ authType })
});
/* create buffer */
await MongoTTSBuffer.create({
bufferId: voiceData.bufferId,
text: JSON.stringify({ text: input, speed: ttsConfig.speed }),

View File

@ -7,6 +7,8 @@ import fs from 'fs';
import { getAIApi } from '@fastgpt/service/core/ai/config';
import { pushWhisperUsage } from '@/service/support/wallet/usage/push';
import { authChatCert } from '@/service/support/permission/auth/chat';
import { MongoApp } from '@fastgpt/service/core/app/schema';
import { getGuideModule, splitGuideModule } from '@fastgpt/global/core/module/utils';
const upload = getUploadModel({
maxSize: 2
@ -18,8 +20,9 @@ export default withNextCors(async function handler(req: NextApiRequest, res: Nex
try {
const {
file,
data: { duration, teamId: spaceTeamId, teamToken }
data: { appId, duration, teamId: spaceTeamId, teamToken }
} = await upload.doUpload<{
appId: string;
duration: number;
shareId?: string;
teamId?: string;
@ -31,8 +34,6 @@ export default withNextCors(async function handler(req: NextApiRequest, res: Nex
filePaths = [file.path];
const { teamId, tmbId } = await authChatCert({ req, authToken: true });
if (!global.whisperModel) {
throw new Error('whisper model not found');
}
@ -41,6 +42,18 @@ export default withNextCors(async function handler(req: NextApiRequest, res: Nex
throw new Error('file not found');
}
// auth role
const { teamId, tmbId } = await authChatCert({ req, authToken: true });
// auth app
const app = await MongoApp.findById(appId, 'modules').lean();
if (!app) {
throw new Error('app not found');
}
const { whisperConfig } = splitGuideModule(getGuideModule(app?.modules));
if (!whisperConfig?.open) {
throw new Error('Whisper is not open in the app');
}
const ai = getAIApi();
const result = await ai.audio.transcriptions.create({

View File

@ -32,6 +32,7 @@ import MyBox from '@/components/common/MyBox';
import { usePagination } from '@fastgpt/web/hooks/usePagination';
import DateRangePicker, { DateRangeType } from '@fastgpt/web/components/common/DateRangePicker';
import { formatChatValue2InputType } from '@/components/ChatBox/utils';
import { getNanoid } from '@fastgpt/global/common/string/tools';
const Logs = ({ appId }: { appId: string }) => {
const { t } = useTranslation();
@ -234,6 +235,7 @@ const DetailLogsModal = ({
onSuccess(res) {
const history = res.history.map((item) => ({
...item,
dataId: item.dataId || getNanoid(),
status: 'finish' as any
}));
ChatBoxRef.current?.resetHistory(history);

View File

@ -99,6 +99,7 @@ const ChatTest = ({ appId }: { appId: string }) => {
<Box flex={1}>
<ChatBox
ref={ChatBoxRef}
appId={appDetail._id}
appAvatar={appDetail.avatar}
userAvatar={userInfo?.avatar}
showMarkIcon

View File

@ -6,7 +6,7 @@ import { useForm, useFieldArray } from 'react-hook-form';
import { useSystemStore } from '@/web/common/system/useSystemStore';
import { appModules2Form, getDefaultAppForm } from '@fastgpt/global/core/app/utils';
import type { AppSimpleEditFormType } from '@fastgpt/global/core/app/type.d';
import { chatNodeSystemPromptTip, welcomeTextTip } from '@fastgpt/global/core/module/template/tip';
import { welcomeTextTip } from '@fastgpt/global/core/module/template/tip';
import { useRequest } from '@fastgpt/web/hooks/useRequest';
import { useConfirm } from '@fastgpt/web/hooks/useConfirm';
import { useRouter } from 'next/router';
@ -20,7 +20,7 @@ import dynamic from 'next/dynamic';
import MyTooltip from '@/components/MyTooltip';
import Avatar from '@/components/Avatar';
import MyIcon from '@fastgpt/web/components/common/Icon';
import VariableEdit from '@/components/core/module/Flow/components/modules/VariableEdit';
import VariableEdit from '@/components/core/app/VariableEdit';
import MyTextarea from '@/components/common/Textarea/MyTextarea/index';
import PromptEditor from '@fastgpt/web/components/common/Textarea/PromptEditor';
import { formatEditorVariablePickerIcon } from '@fastgpt/global/core/module/utils';
@ -28,14 +28,26 @@ import SearchParamsTip from '@/components/core/dataset/SearchParamsTip';
import SettingLLMModel from '@/components/core/ai/SettingLLMModel';
import { SettingAIDataType } from '@fastgpt/global/core/module/node/type';
import DeleteIcon, { hoverDeleteStyles } from '@fastgpt/web/components/common/Icon/delete';
import { TTSTypeEnum } from '@/constants/app';
const DatasetSelectModal = dynamic(() => import('@/components/core/module/DatasetSelectModal'));
const DatasetParamsModal = dynamic(() => import('@/components/core/module/DatasetParamsModal'));
const ToolSelectModal = dynamic(() => import('./ToolSelectModal'));
const TTSSelect = dynamic(
() => import('@/components/core/module/Flow/components/modules/TTSSelect')
);
const QGSwitch = dynamic(() => import('@/components/core/module/Flow/components/modules/QGSwitch'));
const TTSSelect = dynamic(() => import('@/components/core/app/TTSSelect'));
const QGSwitch = dynamic(() => import('@/components/core/app/QGSwitch'));
const WhisperConfig = dynamic(() => import('@/components/core/app/WhisperConfig'));
const BoxStyles: BoxProps = {
px: 5,
py: '16px',
borderBottomWidth: '1px',
borderBottomColor: 'borderColor.low'
};
const LabelStyles: BoxProps = {
w: ['60px', '100px'],
flexShrink: 0,
fontSize: ['sm', 'md']
};
const EditForm = ({
divRef,
@ -131,18 +143,6 @@ const EditForm = ({
);
useQuery(['loadAllDatasets'], loadAllDatasets);
const BoxStyles: BoxProps = {
px: 5,
py: '16px',
borderBottomWidth: '1px',
borderBottomColor: 'borderColor.low'
};
const LabelStyles: BoxProps = {
w: ['60px', '100px'],
flexShrink: 0,
fontSize: ['sm', 'md']
};
return (
<Box>
{/* title */}
@ -154,7 +154,7 @@ const EditForm = ({
py={4}
justifyContent={'space-between'}
alignItems={'center'}
zIndex={10}
zIndex={100}
px={4}
{...(isSticky && {
borderBottom: theme.borders.base,
@ -414,6 +414,18 @@ const EditForm = ({
/>
</Box>
{/* whisper */}
<Box {...BoxStyles}>
<WhisperConfig
isOpenAudio={getValues('userGuide.tts').type !== TTSTypeEnum.none}
value={getValues('userGuide.whisper')}
onChange={(e) => {
setValue('userGuide.whisper', e);
setRefresh((state) => !state);
}}
/>
</Box>
{/* question guide */}
<Box {...BoxStyles} borderBottom={'none'}>
<QGSwitch

View File

@ -146,6 +146,7 @@ const Chat = ({ appId, chatId }: { appId: string; chatId: string }) => {
const res = await getInitChatInfo({ appId, chatId });
const history = res.history.map((item) => ({
...item,
dataId: item.dataId || nanoid(),
status: ChatStatusEnum.finish
}));

View File

@ -141,6 +141,7 @@ const OutLink = ({
/* post message to report result */
const result: ChatSiteItemType[] = GPTMessages2Chats(prompts).map((item) => ({
...item,
dataId: item.dataId || nanoid(),
status: 'finish'
}));
@ -183,6 +184,7 @@ const OutLink = ({
});
const history = res.history.map((item) => ({
...item,
dataId: item.dataId || nanoid(),
status: ChatStatusEnum.finish
}));

View File

@ -210,6 +210,7 @@ const OutLink = () => {
const history = res.history.map((item) => ({
...item,
dataId: item.dataId || nanoid(),
status: ChatStatusEnum.finish
}));

View File

@ -5,7 +5,7 @@ import { useTranslation } from 'next-i18next';
import { getErrText } from '@fastgpt/global/common/error/utils';
import { OutLinkChatAuthProps } from '@fastgpt/global/support/permission/chat';
export const useSpeech = (props?: OutLinkChatAuthProps) => {
export const useSpeech = (props?: OutLinkChatAuthProps & { appId?: string }) => {
const { t } = useTranslation();
const mediaRecorder = useRef<MediaRecorder>();
const [mediaStream, setMediaStream] = useState<MediaStream>();
@ -15,6 +15,7 @@ export const useSpeech = (props?: OutLinkChatAuthProps) => {
const [audioSecond, setAudioSecond] = useState(0);
const intervalRef = useRef<any>();
const startTimestamp = useRef(0);
const cancelWhisperSignal = useRef(false);
const speakingTimeString = useMemo(() => {
const minutes: number = Math.floor(audioSecond / 60);
@ -51,6 +52,8 @@ export const useSpeech = (props?: OutLinkChatAuthProps) => {
const startSpeak = async (onFinish: (text: string) => void) => {
try {
cancelWhisperSignal.current = false;
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
setMediaStream(stream);
@ -73,42 +76,45 @@ export const useSpeech = (props?: OutLinkChatAuthProps) => {
};
mediaRecorder.current.onstop = async () => {
const formData = new FormData();
let options = {};
if (MediaRecorder.isTypeSupported('audio/webm')) {
options = { type: 'audio/webm' };
} else if (MediaRecorder.isTypeSupported('video/mp3')) {
options = { type: 'video/mp3' };
} else {
console.error('no suitable mimetype found for this device');
}
const blob = new Blob(chunks, options);
const duration = Math.round((Date.now() - startTimestamp.current) / 1000);
if (!cancelWhisperSignal.current) {
const formData = new FormData();
let options = {};
if (MediaRecorder.isTypeSupported('audio/webm')) {
options = { type: 'audio/webm' };
} else if (MediaRecorder.isTypeSupported('video/mp3')) {
options = { type: 'video/mp3' };
} else {
console.error('no suitable mimetype found for this device');
}
const blob = new Blob(chunks, options);
const duration = Math.round((Date.now() - startTimestamp.current) / 1000);
formData.append('file', blob, 'recording.mp3');
formData.append(
'data',
JSON.stringify({
...props,
duration
})
);
formData.append('file', blob, 'recording.mp3');
formData.append(
'data',
JSON.stringify({
...props,
duration
})
);
setIsTransCription(true);
try {
const result = await POST<string>('/v1/audio/transcriptions', formData, {
timeout: 60000,
headers: {
'Content-Type': 'multipart/form-data; charset=utf-8'
}
});
onFinish(result);
} catch (error) {
toast({
status: 'warning',
title: getErrText(error, t('common.speech.error tip'))
});
setIsTransCription(true);
try {
const result = await POST<string>('/v1/audio/transcriptions', formData, {
timeout: 60000,
headers: {
'Content-Type': 'multipart/form-data; charset=utf-8'
}
});
onFinish(result);
} catch (error) {
toast({
status: 'warning',
title: getErrText(error, t('common.speech.error tip'))
});
}
}
setIsTransCription(false);
setIsSpeaking(false);
};
@ -128,7 +134,8 @@ export const useSpeech = (props?: OutLinkChatAuthProps) => {
}
};
const stopSpeak = () => {
const stopSpeak = (cancel = false) => {
cancelWhisperSignal.current = cancel;
if (mediaRecorder.current) {
mediaRecorder.current?.stop();
clearInterval(intervalRef.current);
@ -147,6 +154,13 @@ export const useSpeech = (props?: OutLinkChatAuthProps) => {
};
}, []);
// watch the recording duration: stop speaking once it exceeds 60 seconds
useEffect(() => {
if (audioSecond >= 60) {
stopSpeak();
}
}, [audioSecond]);
return {
startSpeak,
stopSpeak,

View File

@ -1,246 +1,357 @@
import { useState, useCallback, useEffect, useMemo, useRef } from 'react';
import { useToast } from '@fastgpt/web/hooks/useToast';
import { getErrText } from '@fastgpt/global/common/error/utils';
import type { AppTTSConfigType } from '@fastgpt/global/core/module/type.d';
import type { AppTTSConfigType } from '@fastgpt/global/core/app/type.d';
import { TTSTypeEnum } from '@/constants/app';
import { useTranslation } from 'next-i18next';
import type { OutLinkChatAuthProps } from '@fastgpt/global/support/permission/chat.d';
const contentType = 'audio/mpeg';
const splitMarker = 'SPLIT_MARKER';
export const useAudioPlay = (props?: OutLinkChatAuthProps & { ttsConfig?: AppTTSConfigType }) => {
const { t } = useTranslation();
const { ttsConfig, shareId, outLinkUid, teamId, teamToken } = props || {};
const { toast } = useToast();
const [audio, setAudio] = useState<HTMLAudioElement>();
const audioRef = useRef<HTMLAudioElement>(new Audio());
const audio = audioRef.current;
const [audioLoading, setAudioLoading] = useState(false);
const [audioPlaying, setAudioPlaying] = useState(false);
const audioController = useRef(new AbortController());
// Check whether the voice is supported
const hasAudio = useMemo(() => {
const hasAudio = (() => {
if (ttsConfig?.type === TTSTypeEnum.none) return false;
if (ttsConfig?.type === TTSTypeEnum.model) return true;
const voices = window.speechSynthesis?.getVoices?.() || []; // get the available speech synthesis voices
const voice = voices.find((item) => {
return item.lang === 'zh-CN';
return item.lang === 'zh-CN' || item.lang === 'zh';
});
return !!voice;
}, [ttsConfig]);
})();
const playAudio = async ({
text,
chatItemId,
buffer
}: {
text: string;
chatItemId?: string;
buffer?: Uint8Array;
}) =>
new Promise<{ buffer?: Uint8Array }>(async (resolve, reject) => {
text = text.replace(/\\n/g, '\n');
try {
// tts play
if (audio && ttsConfig && ttsConfig?.type === TTSTypeEnum.model) {
setAudioLoading(true);
const getAudioStream = useCallback(
async (input: string) => {
if (!input) return Promise.reject('Text is empty');
/* buffer tts */
if (buffer) {
playAudioBuffer({ audio, buffer });
setAudioLoading(false);
return resolve({ buffer });
}
setAudioLoading(true);
audioController.current = new AbortController();
audioController.current = new AbortController();
const response = await fetch('/api/core/chat/item/getSpeech', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
signal: audioController.current.signal,
body: JSON.stringify({
ttsConfig,
input: input.trim(),
shareId,
outLinkUid,
teamId,
teamToken
})
}).finally(() => {
setAudioLoading(false);
});
/* request tts */
const response = await fetch('/api/core/chat/item/getSpeech', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
signal: audioController.current.signal,
body: JSON.stringify({
chatItemId,
ttsConfig,
input: text,
shareId,
outLinkUid,
teamId,
teamToken
})
});
setAudioLoading(false);
if (!response.body || !response.ok) {
const data = await response.json();
toast({
status: 'error',
title: getErrText(data, t('core.chat.Audio Speech Error'))
});
return reject(data);
}
const audioBuffer = await readAudioStream({
audio,
stream: response.body,
contentType: 'audio/mpeg'
});
resolve({
buffer: audioBuffer
});
} else {
// window speech
window.speechSynthesis?.cancel();
const msg = new SpeechSynthesisUtterance(text);
const voices = window.speechSynthesis?.getVoices?.() || []; // get the available speech synthesis voices
const voice = voices.find((item) => {
return item.lang === 'zh-CN';
});
if (voice) {
msg.onstart = () => {
setAudioPlaying(true);
};
msg.onend = () => {
setAudioPlaying(false);
msg.onstart = null;
msg.onend = null;
};
msg.voice = voice;
window.speechSynthesis?.speak(msg);
}
resolve({});
}
} catch (error) {
if (!response.body || !response.ok) {
const data = await response.json();
toast({
status: 'error',
title: getErrText(error, t('core.chat.Audio Speech Error'))
title: getErrText(data, t('core.chat.Audio Speech Error'))
});
reject(error);
return Promise.reject(data);
}
setAudioLoading(false);
return response.body;
},
[outLinkUid, shareId, t, teamId, teamToken, toast, ttsConfig]
);
const playWebAudio = useCallback((text: string) => {
// window speech
window.speechSynthesis?.cancel();
const msg = new SpeechSynthesisUtterance(text);
const voices = window.speechSynthesis?.getVoices?.() || []; // get the available speech synthesis voices
const voice = voices.find((item) => {
return item.lang === 'zh-CN';
});
if (voice) {
msg.onstart = () => {
setAudioPlaying(true);
};
msg.onend = () => {
setAudioPlaying(false);
msg.onstart = null;
msg.onend = null;
};
msg.voice = voice;
window.speechSynthesis?.speak(msg);
}
}, []);
const cancelAudio = useCallback(() => {
try {
window.speechSynthesis?.cancel();
audioController.current.abort('');
} catch (error) {}
if (audio) {
audio.pause();
audio.src = '';
}
window.speechSynthesis?.cancel();
audioController.current?.abort();
setAudioPlaying(false);
}, [audio]);
// listen ttsUrl update
useEffect(() => {
setAudio(new Audio());
/* Perform a voice playback */
const playAudioByText = useCallback(
async ({ text, buffer }: { text: string; buffer?: Uint8Array }) => {
const playAudioBuffer = (buffer: Uint8Array) => {
const audioUrl = URL.createObjectURL(new Blob([buffer], { type: 'audio/mpeg' }));
audio.src = audioUrl;
audio.play();
};
const readAudioStream = (stream: ReadableStream<Uint8Array>) => {
if (!audio) return;
// Create media source and play audio
const ms = new MediaSource();
const url = URL.createObjectURL(ms);
audio.src = url;
audio.play();
let u8Arr: Uint8Array = new Uint8Array();
return new Promise<Uint8Array>(async (resolve, reject) => {
// Async to read data from ms
await new Promise((resolve) => {
ms.onsourceopen = resolve;
});
const sourceBuffer = ms.addSourceBuffer(contentType);
const reader = stream.getReader();
// read stream
try {
while (true) {
const { done, value } = await reader.read();
if (done || audio.paused) {
resolve(u8Arr);
if (sourceBuffer.updating) {
await new Promise((resolve) => (sourceBuffer.onupdateend = resolve));
}
ms.endOfStream();
return;
}
u8Arr = new Uint8Array([...u8Arr, ...value]);
await new Promise((resolve) => {
sourceBuffer.onupdateend = resolve;
sourceBuffer.appendBuffer(value.buffer);
});
}
} catch (error) {
reject(error);
}
});
};
return new Promise<{ buffer?: Uint8Array }>(async (resolve, reject) => {
text = text.replace(/\\n/g, '\n');
try {
// stop last audio
cancelAudio();
// tts play
if (audio && ttsConfig?.type === TTSTypeEnum.model) {
/* buffer tts */
if (buffer) {
playAudioBuffer(buffer);
return resolve({ buffer });
}
/* request tts */
const audioBuffer = await readAudioStream(await getAudioStream(text));
resolve({
buffer: audioBuffer
});
} else {
// window speech
playWebAudio(text);
resolve({});
}
} catch (error) {
toast({
status: 'error',
title: getErrText(error, t('core.chat.Audio Speech Error'))
});
reject(error);
}
});
},
[audio, cancelAudio, getAudioStream, playWebAudio, t, toast, ttsConfig?.type]
);
// segmented params
const segmentedMediaSource = useRef<MediaSource>();
const segmentedSourceBuffer = useRef<SourceBuffer>();
const segmentedTextList = useRef<string[]>([]);
const appendAudioPromise = useRef<Promise<any>>(Promise.resolve());
/* Segmented voice playback */
const startSegmentedAudio = useCallback(async () => {
if (!audio) return;
cancelAudio();
/* reset all source */
const buffer = segmentedSourceBuffer.current;
if (buffer) {
buffer.updating && (await new Promise((resolve) => (buffer.onupdateend = resolve)));
segmentedSourceBuffer.current = undefined;
}
if (segmentedMediaSource.current) {
if (segmentedMediaSource.current?.readyState === 'open') {
segmentedMediaSource.current.endOfStream();
}
segmentedMediaSource.current = undefined;
}
/* init source */
segmentedTextList.current = [];
appendAudioPromise.current = Promise.resolve();
/* start ms and source buffer */
const ms = new MediaSource();
segmentedMediaSource.current = ms;
const url = URL.createObjectURL(ms);
audio.src = url;
audio.play();
await new Promise((resolve) => {
ms.onsourceopen = resolve;
});
const sourceBuffer = ms.addSourceBuffer(contentType);
segmentedSourceBuffer.current = sourceBuffer;
}, [audio, cancelAudio]);
const finishSegmentedAudio = useCallback(() => {
appendAudioPromise.current = appendAudioPromise.current.finally(() => {
if (segmentedMediaSource.current?.readyState === 'open') {
segmentedMediaSource.current.endOfStream();
}
});
}, []);
const appendAudioStream = useCallback(
(input: string) => {
const buffer = segmentedSourceBuffer.current;
if (!buffer) return;
let u8Arr: Uint8Array = new Uint8Array();
return new Promise<Uint8Array>(async (resolve, reject) => {
// read stream
try {
const stream = await getAudioStream(input);
const reader = stream.getReader();
while (true) {
const { done, value } = await reader.read();
if (done || !audio?.played) {
buffer.updating && (await new Promise((resolve) => (buffer.onupdateend = resolve)));
return resolve(u8Arr);
}
u8Arr = new Uint8Array([...u8Arr, ...value]);
await new Promise((resolve) => {
buffer.onupdateend = resolve;
buffer.appendBuffer(value.buffer);
});
}
} catch (error) {
reject(error);
}
});
},
[audio?.played, getAudioStream, segmentedSourceBuffer]
);
/* split audio text and fetch tts */
const splitText2Audio = useCallback(
(text: string, done?: boolean) => {
if (ttsConfig?.type === TTSTypeEnum.model && ttsConfig?.model) {
const splitReg = /([。!?]|[.!?]\s)/g;
const storeText = segmentedTextList.current.join('');
const newText = text.slice(storeText.length);
const splitTexts = newText
.replace(splitReg, (() => `$1${splitMarker}`.trim())())
.split(`${splitMarker}`)
.filter((part) => part.trim());
if (splitTexts.length > 1 || done) {
let splitList = splitTexts.slice();
// concat same sentence
if (!done) {
splitList = splitTexts.slice(0, -1);
splitList = [splitList.join('')];
}
segmentedTextList.current = segmentedTextList.current.concat(splitList);
for (const item of splitList) {
appendAudioPromise.current = appendAudioPromise.current.then(() =>
appendAudioStream(item)
);
}
}
} else if (ttsConfig?.type === TTSTypeEnum.web && done) {
playWebAudio(text);
}
},
[appendAudioStream, playWebAudio, ttsConfig?.model, ttsConfig?.type]
);
// listen audio status
useEffect(() => {
if (audio) {
audio.onplay = () => {
setAudioPlaying(true);
};
audio.onended = () => {
setAudioPlaying(false);
};
audio.onerror = () => {
setAudioPlaying(false);
};
audio.oncancel = () => {
setAudioPlaying(false);
};
}
audio.onplay = () => {
setAudioPlaying(true);
};
audio.onended = () => {
setAudioPlaying(false);
};
audio.onerror = () => {
setAudioPlaying(false);
};
audio.oncancel = () => {
setAudioPlaying(false);
};
const listen = () => {
cancelAudio();
};
window.addEventListener('beforeunload', listen);
return () => {
if (audio) {
audio.onplay = null;
audio.onended = null;
audio.onerror = null;
}
audio.onplay = null;
audio.onended = null;
audio.onerror = null;
cancelAudio();
audio.remove();
window.removeEventListener('beforeunload', listen);
};
}, [audio, cancelAudio]);
useEffect(() => {
return () => {
setAudio(undefined);
};
}, []);
return {
audioPlaying,
audio,
audioLoading,
hasAudio,
playAudio,
cancelAudio
audioPlaying,
setAudioPlaying,
getAudioStream,
cancelAudio,
audioController,
hasAudio: useMemo(() => hasAudio, [hasAudio]),
playAudioByText,
startSegmentedAudio,
finishSegmentedAudio,
splitText2Audio
};
};
export function readAudioStream({
audio,
stream,
contentType = 'audio/mpeg'
}: {
audio: HTMLAudioElement;
stream: ReadableStream<Uint8Array>;
contentType?: string;
}): Promise<Uint8Array> {
// Create media source and play audio
const ms = new MediaSource();
const url = URL.createObjectURL(ms);
audio.src = url;
audio.play();
let u8Arr: Uint8Array = new Uint8Array();
return new Promise<Uint8Array>(async (resolve, reject) => {
// Async to read data from ms
await new Promise((resolve) => {
ms.onsourceopen = resolve;
});
const sourceBuffer = ms.addSourceBuffer(contentType);
const reader = stream.getReader();
// read stream
try {
while (true) {
const { done, value } = await reader.read();
if (done) {
resolve(u8Arr);
if (sourceBuffer.updating) {
await new Promise((resolve) => (sourceBuffer.onupdateend = resolve));
}
ms.endOfStream();
return;
}
u8Arr = new Uint8Array([...u8Arr, ...value]);
await new Promise((resolve) => {
sourceBuffer.onupdateend = resolve;
sourceBuffer.appendBuffer(value.buffer);
});
}
} catch (error) {
reject(error);
}
});
}
export function playAudioBuffer({
audio,
buffer
}: {
audio: HTMLAudioElement;
buffer: Uint8Array;
}) {
const audioUrl = URL.createObjectURL(new Blob([buffer], { type: 'audio/mpeg' }));
audio.src = audioUrl;
audio.play();
}

View File

@ -38,8 +38,14 @@ export async function postForm2Modules(data: AppSimpleEditFormType) {
{
key: ModuleInputKeyEnum.tts,
type: FlowNodeInputTypeEnum.hidden,
label: 'core.app.TTS',
label: '',
value: formData.userGuide.tts
},
{
key: ModuleInputKeyEnum.whisper,
type: FlowNodeInputTypeEnum.hidden,
label: '',
value: formData.userGuide.whisper
}
],
outputs: [],

python/bge-rerank/README.md (new file, 114 lines)
View File

@ -0,0 +1,114 @@
# Integrating the bge-rerank re-ranking model
## Recommended configuration per model
Recommended configurations:
| Model | RAM | VRAM | Disk space | Start command |
| ---------------- | ----- | ----- | -------- | ------------- |
| bge-rerank-base | >=4GB | >=4GB | >=8GB | python app.py |
| bge-rerank-large | >=8GB | >=8GB | >=8GB | python app.py |
| bge-rerank-v2-m3 | >=8GB | >=8GB | >=8GB | python app.py |
## Deploy from source
### 1. Set up the environment
- Python 3.9 or 3.10
- CUDA 11.7
- Network access to GitHub and Hugging Face (use a proxy if necessary)
### 2. Download the code
The code for the three models:
1. [https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-base](https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-base)
2. [https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-large](https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-large)
3. [https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-v2-m3](https://github.com/labring/FastGPT/tree/main/python/reranker/bge-reranker-v2-m3)
### 3. Install dependencies
```sh
pip install -r requirements.txt
```
### 4. Download the model
The Hugging Face repositories for the three models:
1. [https://huggingface.co/BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base)
2. [https://huggingface.co/BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large)
3. [https://huggingface.co/BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3)
Clone the model into the corresponding code directory, as in the example after the structure below. Directory structure:
```
bge-reranker-base/
  app.py
  Dockerfile
  requirements.txt
```
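One possible way to fetch the weights is to clone the Hugging Face repository into the code directory (a sketch, assuming `git` and `git-lfs` are installed; swap the repository name for the large / v2-m3 variants):
```sh
# run inside the code directory (e.g. bge-reranker-base/), next to app.py
git lfs install
git clone https://huggingface.co/BAAI/bge-reranker-base
# app.py looks for a folder named bge-reranker-base beside it (see RERANK_MODEL_PATH in app.py)
```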
### 5. Run the code
```bash
python app.py
```
After a successful start, an address like the one below should be printed:
![](./rerank1.png)
> The `http://0.0.0.0:6006` shown here is the request address.
## Docker deployment
**Image names:**
1. registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-base:v0.1
2. registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-large:v0.1
3. registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-v2-m3:v0.1
**Port**
6006
**Environment variables**
```
ACCESS_TOKEN=the access credential; requests must carry the header "Authorization: Bearer ${ACCESS_TOKEN}"
```
**Example run command**
```sh
# the auth token is mytoken
docker run -d --name reranker -p 6006:6006 -e ACCESS_TOKEN=mytoken --gpus all registry.cn-hangzhou.aliyuncs.com/fastgpt/bge-rerank-base:v0.1
```
**docker-compose.yml example**
```
version: "3"
services:
  reranker:
    image: registry.cn-hangzhou.aliyuncs.com/fastgpt/rerank:v0.2
    container_name: reranker
    # GPU runtime; if the host has no GPU support, remove (or comment out) the deploy section
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - 6006:6006
    environment:
      - ACCESS_TOKEN=mytoken
```
## Integrate with FastGPT
See [ReRank model integration](https://doc.fastai.site/docs/development/configuration/#rerank-接入). Before wiring the service into FastGPT, you can smoke-test the `/v1/rerank` endpoint as shown below.
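A minimal request sketch (it assumes the service is reachable at `http://localhost:6006` and was started with `ACCESS_TOKEN=mytoken`, as in the run command above):
```sh
curl http://localhost:6006/v1/rerank \
  -H "Authorization: Bearer mytoken" \
  -H "Content-Type: application/json" \
  -d '{
        "query": "What is FastGPT?",
        "documents": ["FastGPT is an LLM-based knowledge base QA system.", "The weather is nice today."]
      }'
# Expected response shape, sorted by score (see fit_query_answer_rerank in app.py):
# {"results": [{"index": ..., "relevance_score": ...}, ...]}
```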

View File

@ -17,20 +17,9 @@ from FlagEmbedding import FlagReranker
from pydantic import Field, BaseModel, validator
from typing import Optional, List
def response(code, msg, data=None):
time = str(datetime.datetime.now())
if data is None:
data = []
result = {
"code": code,
"message": msg,
"data": data,
"time": time
}
return result
def success(data=None, msg=''):
return
app = FastAPI()
security = HTTPBearer()
env_bearer_token = 'ACCESS_TOKEN'
class QADocs(BaseModel):
query: Optional[str]
@ -46,42 +35,35 @@ class Singleton(type):
RERANK_MODEL_PATH = os.path.join(os.path.dirname(__file__), "bge-reranker-base")
class Reranker(metaclass=Singleton):
class ReRanker(metaclass=Singleton):
def __init__(self, model_path):
self.reranker = FlagReranker(model_path,
use_fp16=False)
self.reranker = FlagReranker(model_path, use_fp16=False)
def compute_score(self, pairs: List[List[str]]):
if len(pairs) > 0:
result = self.reranker.compute_score(pairs)
result = self.reranker.compute_score(pairs, normalize=True)
if isinstance(result, float):
result = [result]
return result
else:
return None
class Chat(object):
def __init__(self, rerank_model_path: str = RERANK_MODEL_PATH):
self.reranker = Reranker(rerank_model_path)
self.reranker = ReRanker(rerank_model_path)
def fit_query_answer_rerank(self, query_docs: QADocs) -> List:
if query_docs is None or len(query_docs.documents) == 0:
return []
new_docs = []
pair = []
for answer in query_docs.documents:
pair.append([query_docs.query, answer])
scores = self.reranker.compute_score(pair)
for index, score in enumerate(scores):
new_docs.append({"index": index, "text": query_docs.documents[index], "score": 1 / (1 + np.exp(-score))})
#results = [{"document": {"text": documents["text"]}, "index": documents["index"], "relevance_score": documents["score"]} for documents in list(sorted(new_docs, key=lambda x: x["score"], reverse=True))]
results = [{"index": documents["index"], "relevance_score": documents["score"]} for documents in list(sorted(new_docs, key=lambda x: x["score"], reverse=True))]
return {"results": results}
app = FastAPI()
security = HTTPBearer()
env_bearer_token = 'ACCESS_TOKEN'
pair = [[query_docs.query, doc] for doc in query_docs.documents]
scores = self.reranker.compute_score(pair)
new_docs = []
for index, score in enumerate(scores):
new_docs.append({"index": index, "text": query_docs.documents[index], "score": score})
results = [{"index": documents["index"], "relevance_score": documents["score"]} for documents in list(sorted(new_docs, key=lambda x: x["score"], reverse=True))]
return results
@app.post('/v1/rerank')
async def handle_post_request(docs: QADocs, credentials: HTTPAuthorizationCredentials = Security(security)):
@ -89,8 +71,12 @@ async def handle_post_request(docs: QADocs, credentials: HTTPAuthorizationCreden
if env_bearer_token is not None and token != env_bearer_token:
raise HTTPException(status_code=401, detail="Invalid token")
chat = Chat()
qa_docs_with_rerank = chat.fit_query_answer_rerank(docs)
return response(200, msg="重排成功", data=qa_docs_with_rerank)
try:
results = chat.fit_query_answer_rerank(docs)
return {"results": results}
except Exception as e:
print(f"报错:\n{e}")
return {"error": "重排出错"}
if __name__ == "__main__":
token = os.getenv("ACCESS_TOKEN")

View File

@ -1,6 +1,6 @@
fastapi==0.104.1
transformers[sentencepiece]
FlagEmbedding==1.1.5
FlagEmbedding==1.2.8
pydantic==1.10.13
uvicorn==0.17.6
itsdangerous

View File

@ -0,0 +1,12 @@
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
# please download the model from https://huggingface.co/BAAI/bge-reranker-large and put it in the same directory as Dockerfile
COPY ./bge-reranker-large ./bge-reranker-large
COPY requirements.txt .
RUN python3 -m pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
COPY app.py Dockerfile .
ENTRYPOINT python3 app.py

View File

@ -0,0 +1,88 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time: 2023/11/7 22:45
@Author: zhidong
@File: reranker.py
@Desc:
"""
import os
import numpy as np
import logging
import uvicorn
import datetime
from fastapi import FastAPI, Security, HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from FlagEmbedding import FlagReranker
from pydantic import Field, BaseModel, validator
from typing import Optional, List
app = FastAPI()
security = HTTPBearer()
env_bearer_token = 'ACCESS_TOKEN'
class QADocs(BaseModel):
query: Optional[str]
documents: Optional[List[str]]
class Singleton(type):
def __call__(cls, *args, **kwargs):
if not hasattr(cls, '_instance'):
cls._instance = super().__call__(*args, **kwargs)
return cls._instance
RERANK_MODEL_PATH = os.path.join(os.path.dirname(__file__), "bge-reranker-large")
class ReRanker(metaclass=Singleton):
def __init__(self, model_path):
self.reranker = FlagReranker(model_path, use_fp16=False)
def compute_score(self, pairs: List[List[str]]):
if len(pairs) > 0:
result = self.reranker.compute_score(pairs, normalize=True)
if isinstance(result, float):
result = [result]
return result
else:
return None
class Chat(object):
def __init__(self, rerank_model_path: str = RERANK_MODEL_PATH):
self.reranker = ReRanker(rerank_model_path)
def fit_query_answer_rerank(self, query_docs: QADocs) -> List:
if query_docs is None or len(query_docs.documents) == 0:
return []
pair = [[query_docs.query, doc] for doc in query_docs.documents]
scores = self.reranker.compute_score(pair)
new_docs = []
for index, score in enumerate(scores):
new_docs.append({"index": index, "text": query_docs.documents[index], "score": score})
results = [{"index": documents["index"], "relevance_score": documents["score"]} for documents in list(sorted(new_docs, key=lambda x: x["score"], reverse=True))]
return results
@app.post('/v1/rerank')
async def handle_post_request(docs: QADocs, credentials: HTTPAuthorizationCredentials = Security(security)):
token = credentials.credentials
if env_bearer_token is not None and token != env_bearer_token:
raise HTTPException(status_code=401, detail="Invalid token")
chat = Chat()
try:
results = chat.fit_query_answer_rerank(docs)
return {"results": results}
except Exception as e:
print(f"报错:\n{e}")
return {"error": "重排出错"}
if __name__ == "__main__":
token = os.getenv("ACCESS_TOKEN")
if token is not None:
env_bearer_token = token
try:
uvicorn.run(app, host='0.0.0.0', port=6006)
except Exception as e:
print(f"API启动失败\n报错:\n{e}")

View File

@ -0,0 +1,7 @@
fastapi==0.104.1
transformers[sentencepiece]
FlagEmbedding==1.2.8
pydantic==1.10.13
uvicorn==0.17.6
itsdangerous
protobuf

View File

@ -0,0 +1,12 @@
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
# please download the model from https://huggingface.co/BAAI/bge-reranker-v2-m3 and put it in the same directory as Dockerfile
COPY ./bge-reranker-v2-m3 ./bge-reranker-v2-m3
COPY requirements.txt .
RUN python3 -m pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
COPY app.py Dockerfile .
ENTRYPOINT python3 app.py

View File

@ -0,0 +1,88 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time: 2023/11/7 22:45
@Author: zhidong
@File: reranker.py
@Desc:
"""
import os
import numpy as np
import logging
import uvicorn
import datetime
from fastapi import FastAPI, Security, HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from FlagEmbedding import FlagReranker
from pydantic import Field, BaseModel, validator
from typing import Optional, List
app = FastAPI()
security = HTTPBearer()
env_bearer_token = 'ACCESS_TOKEN'
class QADocs(BaseModel):
query: Optional[str]
documents: Optional[List[str]]
class Singleton(type):
def __call__(cls, *args, **kwargs):
if not hasattr(cls, '_instance'):
cls._instance = super().__call__(*args, **kwargs)
return cls._instance
RERANK_MODEL_PATH = os.path.join(os.path.dirname(__file__), "bge-reranker-v2-m3")
class ReRanker(metaclass=Singleton):
def __init__(self, model_path):
self.reranker = FlagReranker(model_path, use_fp16=False)
def compute_score(self, pairs: List[List[str]]):
if len(pairs) > 0:
result = self.reranker.compute_score(pairs, normalize=True)
if isinstance(result, float):
result = [result]
return result
else:
return None
class Chat(object):
def __init__(self, rerank_model_path: str = RERANK_MODEL_PATH):
self.reranker = ReRanker(rerank_model_path)
def fit_query_answer_rerank(self, query_docs: QADocs) -> List:
if query_docs is None or len(query_docs.documents) == 0:
return []
pair = [[query_docs.query, doc] for doc in query_docs.documents]
scores = self.reranker.compute_score(pair)
new_docs = []
for index, score in enumerate(scores):
new_docs.append({"index": index, "text": query_docs.documents[index], "score": score})
results = [{"index": documents["index"], "relevance_score": documents["score"]} for documents in list(sorted(new_docs, key=lambda x: x["score"], reverse=True))]
return results
@app.post('/v1/rerank')
async def handle_post_request(docs: QADocs, credentials: HTTPAuthorizationCredentials = Security(security)):
token = credentials.credentials
if env_bearer_token is not None and token != env_bearer_token:
raise HTTPException(status_code=401, detail="Invalid token")
chat = Chat()
try:
results = chat.fit_query_answer_rerank(docs)
return {"results": results}
except Exception as e:
print(f"报错:\n{e}")
return {"error": "重排出错"}
if __name__ == "__main__":
token = os.getenv("ACCESS_TOKEN")
if token is not None:
env_bearer_token = token
try:
uvicorn.run(app, host='0.0.0.0', port=6006)
except Exception as e:
print(f"API启动失败\n报错:\n{e}")

View File

@ -0,0 +1,7 @@
fastapi==0.104.1
transformers[sentencepiece]
FlagEmbedding==1.2.8
pydantic==1.10.13
uvicorn==0.17.6
itsdangerous
protobuf

Binary file not shown (image, 91 KiB).

View File

@ -1,48 +0,0 @@
## Recommended configuration
Recommended configuration:
{{< table "table-hover table-striped-columns" >}}
| Type | RAM | VRAM | Disk space | Start command |
|------|---------|---------|----------|--------------------------|
| base | >=4GB | >=3GB | >=8GB | python app.py |
{{< /table >}}
## Deployment
### Requirements
- Python 3.10.11
- CUDA 11.7
- Network access to Hugging Face (use a proxy if necessary)
### Deploy from source
1. Set up the environment according to the requirements above (ask GPT for a detailed walkthrough if needed)
2. Download the [python file](app.py)
3. Run `pip install -r requirements.txt` from the command line
4. Download the model repository from [https://huggingface.co/BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) into the same directory as app.py
5. Set the environment variable `export ACCESS_TOKEN=XXXXXX` to configure the token. The token only adds a layer of verification to keep the API from being abused; the default value is `ACCESS_TOKEN`
6. Run `python app.py`
Then wait for the model to download and finish loading. If an error occurs, ask GPT first.
After a successful start, an address like the one below should be shown:
![](/imgs/chatglm2.png)
> The `http://0.0.0.0:6006` shown here is the connection address.
### Docker deployment
**Image and port**
+ Image: `registry.cn-hangzhou.aliyuncs.com/fastgpt/rerank:v0.2`
+ Port: 6006
```
# Set the security credential, i.e. the channel key configured in oneapi.
It is injected via the ACCESS_TOKEN environment variable; the default value is ACCESS_TOKEN.
For how to pass environment variables to docker, please look up a tutorial; it is not covered here.
```