The receiver parameter is configured to point at the upload endpoint:
{
  "type": "input-image", // for ordinary files, use "type": "input-file"
  "label": "Photo",
  "name": "url",
  "imageClassName": "r w-full",
  "receiver": "/lbserver/api/FileUpload/upload/mPersonnelInfo/Images/${TIMESTAMP(NOW(),'x')}",
  "accept": ".jpeg, .jpg, .png, .gif",
  "fixedSize": false,
  "hideUploadButton": false,
  "autoUpload": true,
  "compress": false,
  "compressOptions": {},
  "crop": false
}
The handling required for chunked upload is outlined below.
If an uploaded file is large and no special handling is applied, the upload request will sit in the PENDING state indefinitely (and will ultimately time out).
pending: the request has been sent and is in flight, but the server has not yet responded; once the server responds, the displayed time is updated to the total elapsed time.
0. The amis front-end chunking logic works as follows (for background only; you normally don't implement chunking yourself but use an existing component library):
• Because the browser's Blob API can operate on a file's binary data, the core step is to use Blob.slice to cut a large file into small chunk files and upload them one by one.
• HTTP requests today are mostly HTTP/1.1, so the browser can run several requests at once; async concurrency is controlled with Promises.
• Once all chunks have been uploaded, the front end asks the back end to merge them back into a single file, as sketched below.
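To make those three steps concrete, here is a minimal browser-side sketch. The /chunk and /merge endpoints are hypothetical placeholders (the real flow below uses startChunkApi/chunkApi/finishChunkApi); the 5 MB chunk size and concurrency of 3 mirror the amis defaults described later.

// Minimal chunked-upload sketch (hypothetical endpoints, not the amis implementation)
const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB, matching the amis default
const CONCURRENCY = 3;              // matching the amis default

function getTasks(file: File): Blob[] {
  const chunks: Blob[] = [];
  for (let start = 0; start < file.size; start += CHUNK_SIZE) {
    chunks.push(file.slice(start, start + CHUNK_SIZE)); // Blob.slice does the cutting
  }
  return chunks;
}

async function uploadChunk(chunk: Blob, partNumber: number): Promise<void> {
  const form = new FormData();
  form.append('partNumber', String(partNumber));
  form.append('file', chunk);
  await fetch('/chunk', { method: 'POST', body: form }); // hypothetical endpoint
}

async function uploadFile(file: File): Promise<void> {
  const tasks = getTasks(file).map((chunk, i) => () => uploadChunk(chunk, i + 1));
  // Run at most CONCURRENCY uploads at a time, like the amis loop below
  while (tasks.length) {
    await Promise.all(tasks.splice(0, CONCURRENCY).map(run => run()));
  }
  await fetch('/merge', { method: 'POST' }); // tell the back end to merge (hypothetical)
}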
The relevant excerpt from amis/src/renderers/Form/InputFile.tsx:
// After startChunkApi succeeds, run startChunk to upload the chunks
self._send(file, startApi).then(startChunk).catch(reject);

async function startChunk(ret: Payload) {
  onProgress(startProgress);
  const tasks = getTasks(file); // build the chunk task list from chunkSize (default 5 MB)
  progressArr = tasks.map(() => 0);

  if (!ret.data) {
    throw new Error(__('File.uploadFailed'));
  }

  state = {
    key: (ret.data as any).key,
    uploadId: (ret.data as any).uploadId,
    loaded: 0,
    total: tasks.length
  };

  let results: any[] = [];
  while (tasks.length) {
    const res = await Promise.all(
      tasks.splice(0, concurrency).map(async task => {
        // concurrency caps the number of parallel uploads (default 3)
        return await uploadPartFile(state, config)(task); // slice via Blob.slice and upload through chunkApi
      })
    );
    results = results.concat(res);
  }
  finishChunk(results, state); // finishChunkApi ends the chunked upload
}
1. amis chunked-upload configuration
If a file is large, the amis upload component may need chunked upload. It is enabled automatically for files larger than 5 MB (governed by the chunkSize option) and can be turned off by setting useChunk to false. (Do not hard-code useChunk: true; that forces every upload, however small, through the chunked path.)
{
  "type": "input-file",
  "id": "u:dbd914e494e9",
  "label": "File",
  "name": "file",
  "autoUpload": true,
  "uploadType": "fileReceptor",
  "accept": "*",
  "receiver": "/lbserver/api/FileUpload/upload/mProjectInfo/Images/${TIMESTAMP(NOW(),'x')}",
  "startChunkApi": "/lbserver/api/FileUpload/startChunkApi",
  "chunkApi": "/lbserver/api/FileUpload/chunkApi/upload/mProjectInfo/Images",
  "finishChunkApi": "/lbserver/api/FileUpload/finishChunkApi/upload/mProjectInfo/Images",
  "hidden": false,
  "btnLabel": "Upload file",
  "submitType": "asUpload"
}
2. The three back-end endpoints for chunked upload (LoopBack 4 framework; file upload is based on multer):
The multer middleware only processes form data of type multipart/form-data and is mainly used for uploading files.
After parsing the request body, multer adds a body object plus a file object (or a files object when several files are uploaded) to the request. body holds the form's text fields, if any, while file (or files) holds the files uploaded through the form.
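For illustration, a standalone Express sketch of that behavior (separate from the LoopBack wiring shown below; the /upload route and port are made up):

import express from 'express';
import multer from 'multer';

const app = express();
const upload = multer({ storage: multer.memoryStorage() });

// .any() accepts files from any field name, like the LoopBack provider below
app.post('/upload', upload.any(), (req, res) => {
  console.log(req.body);  // text fields of the multipart form
  console.log(req.files); // uploaded files with fieldname/originalname/size, etc.
  res.json({ fields: req.body, count: (req.files as Express.Multer.File[]).length });
});

app.listen(3000);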
import { inject } from '@loopback/core';
import {
  getModelSchemaRef,
  param,
  post,
  Request,
  requestBody,
  response,
  Response,
  RestBindings,
} from '@loopback/rest';
import _ from 'lodash';
import { FILE_UPLOAD_SERVICE } from '../../keys';
import { FileUploadHandler } from '../../types';
// Assumed project-local imports (paths are guesses): the FileUpload model
// referenced in the response schemas, and a UUID helper for upload session ids
import { FileUpload } from '../../models';
import { generateUUID } from '../../utils';

const moment = require('moment');
const SparkMD5 = require('spark-md5');
const fs = require('fs-extra');
const path = require('path');
const child_process = require('child_process');

// Flatten multer's uploaded files into a plain list; chunk uploads are
// renamed to `${key}-${partNumber}` so they can be merged in order later
function getFilesAndFields(request: Request) {
  const uploadedFiles = request.files;
  const mapper = (f: globalThis.Express.Multer.File) => ({
    fieldname: f.fieldname,
    originalname:
      request.body && request.body.key && request.body.partNumber
        ? `${request.body.key}-${request.body.partNumber}`
        : f.originalname,
    encoding: f.encoding,
    mimetype: f.mimetype,
    size: f.size,
  });
  let files: object[] = [];
  if (Array.isArray(uploadedFiles)) {
    files = uploadedFiles.map(mapper);
  } else {
    for (const filename in uploadedFiles) {
      files.push(...uploadedFiles[filename].map(mapper));
    }
  }
  return { files, fields: request.body };
}

export class FileUploadController {
  constructor(
    @inject(FILE_UPLOAD_SERVICE) private handler: FileUploadHandler,
  ) { }

  // Step 1: hand out an uploadId and a key (timestamp + filename) for this upload session
  @post(`FileUpload/startChunkApi`)
  @response(200, {
    description: 'FileUpload model instance',
    content: { 'application/json': { schema: getModelSchemaRef(FileUpload) } },
  })
  async startChunkApi(@requestBody() pl: any): Promise<any> {
    let uploadId = generateUUID();
    let key = `${moment().format('X')}-${pl.filename}`;
    return {
      status: 0,
      data: {
        date: new Date(),
        uploadId: uploadId,
        key: key,
      },
    };
  }

  // Step 2: receive one chunk per request, hash it, and park it under the uploadId directory
  @post(`FileUpload/chunkApi/{upload}/{model}/{type}`)
  @response(200, {
    description: 'FileUpload model instance',
    content: { 'application/json': { schema: getModelSchemaRef(FileUpload) } },
  })
  async chunkApi(
    @param.path.string('upload') upload: string,
    @param.path.string('model') model: string,
    @param.path.string('type') type: string,
    @requestBody.file()
    request: Request,
    @inject(RestBindings.Http.RESPONSE) response: Response,
  ): Promise<any> {
    return new Promise<any>((resolve, reject) => {
      this.handler(request, response, err => {
        if (err) reject(err);
        else {
          let uploadId = request.body.uploadId;
          const f = getFilesAndFields(request);
          if (f.files && f.files.length > 0) {
            // amis sends one chunk per request, so resolving inside the loop is fine
            for (const i in f.files) {
              const m = f.files[i] as any;
              fs.mkdirpSync(
                path.resolve(`./public/${upload}/${model}/${type}/${uploadId}`),
              );
              const o_file = `./.sandbox/${m.originalname}`;
              // Read as 'binary' so SparkMD5.hashBinary gets a binary string
              // (without an encoding, readFileSync would return a Buffer)
              let eTag = SparkMD5.hashBinary(fs.readFileSync(o_file, 'binary'));
              const m_file = `./public/${upload}/${model}/${type}/${uploadId}/${m.originalname}`;
              fs.rename(o_file, m_file, function (err: any) {
                if (err) {
                  // rename can fail across devices/mounts; fall back to `mv`
                  child_process.execSync(`mv ${o_file} ${m_file}`);
                  console.log(err);
                }
              });
              // Return the chunk's MD5 so the client can echo it back in partList
              const result = {
                name: m.originalname,
                eTag: eTag,
              };
              resolve({
                status: 0,
                msg: '',
                data: result,
              });
            }
          }
        }
      });
    });
  }

  // Step 3: verify each chunk's eTag against partList, then merge the chunks into the final file
  @post(`FileUpload/finishChunkApi/{upload}/{model}/{type}`)
  @response(200, {
    description: 'FileUpload model instance',
    content: { 'application/json': { schema: getModelSchemaRef(FileUpload) } },
  })
  async finishChunkApi(
    @param.path.string('upload') upload: string,
    @param.path.string('model') model: string,
    @param.path.string('type') type: string,
    @requestBody() pl: any,
  ): Promise<any> {
    let uploadId = pl.uploadId;
    let key = pl.key;
    let partList = pl.partList;
    let pathurl = `/${upload}/${model}/${type}/${key}`;
    const m_dir = `./public/${upload}/${model}/${type}/${uploadId}`;
    const filePath = `./public/${upload}/${model}/${type}/${key}`;
    let size = 0;
    function mergeFile(dirPath: string, filePath: string, partList: any) {
      let total = partList.length;
      return new Promise((resolve, reject) => {
        fs.readdir(dirPath, (err: any, files: any) => {
          if (err) {
            return reject(err);
          }
          if (files.length !== total || !files.length) {
            return reject('Upload failed: chunk count mismatch');
          }
          function merge(i: number) {
            // All chunks appended: clean up and resolve with the final path
            if (i === files.length) {
              fs.rmdir(dirPath, (err: any) => {
                console.log(err, 'rmdir');
              });
              let date = new Date();
              // File metadata (e.g. for persisting to a database; not returned here)
              let m = {
                originalname: pl.filename,
                path: pathurl,
                timestamp: date,
                size: size,
              };
              return resolve({
                status: 0,
                data: {
                  date: date,
                  value: pathurl,
                  url: pathurl,
                },
              });
            }
            let chunkpath = `${dirPath}/${key}-${i + 1}`;
            fs.readFile(chunkpath, 'binary', (err: any, data: any) => {
              size += data.length;
              // Re-hash the chunk and compare with the eTag the client reported
              let eTag = SparkMD5.hashBinary(data);
              if (_.find(partList, { partNumber: i + 1 }).eTag !== eTag) {
                return reject('Upload failed: chunk content mismatch');
              }
              // Append this chunk to the target file
              fs.appendFile(filePath, data, { encoding: 'binary' }, () => {
                // Delete the chunk, then merge the next one recursively
                fs.unlink(chunkpath, () => {
                  merge(i + 1);
                });
              });
            });
          }
          merge(0);
        });
      });
    }
    try {
      return await mergeFile(m_dir, filePath, partList);
    } catch (err) {
      // On failure, remove the chunk directory so the client can re-upload
      fs.rmdir(m_dir, { recursive: true }, (err: any) => {
        console.log(err);
      });
      return {
        status: -1,
        msg: err,
      };
    }
  }
}

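Putting the three endpoints together, the client-side exchange looks roughly like this. This is a hand-written sketch derived from the controller above, not amis source; the upload/model/type path segments are example values.

// Hypothetical end-to-end driver for the three endpoints above
async function chunkedUpload(file: File) {
  const base = '/lbserver/api/FileUpload';
  const seg = 'upload/mProjectInfo/Images'; // example {upload}/{model}/{type}

  // 1) startChunkApi: get an uploadId and key for this session
  const start = await fetch(`${base}/startChunkApi`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name }),
  }).then(r => r.json());
  const { uploadId, key } = start.data;

  // 2) chunkApi: send each 5 MB slice with key/uploadId/partNumber; collect eTags
  const partList: { partNumber: number; eTag: string }[] = [];
  const CHUNK_SIZE = 5 * 1024 * 1024;
  for (let part = 1, offset = 0; offset < file.size; part++, offset += CHUNK_SIZE) {
    const form = new FormData();
    // Append the text fields before the file so multer's diskStorage
    // already sees key/partNumber when it names the stored part
    form.append('key', key);
    form.append('uploadId', uploadId);
    form.append('partNumber', String(part));
    form.append('file', file.slice(offset, offset + CHUNK_SIZE));
    const res = await fetch(`${base}/chunkApi/${seg}`, { method: 'POST', body: form })
      .then(r => r.json());
    partList.push({ partNumber: part, eTag: res.data.eTag });
  }

  // 3) finishChunkApi: ask the server to verify the eTags and merge the chunks
  return fetch(`${base}/finishChunkApi/${seg}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ uploadId, key, filename: file.name, partList }),
  }).then(r => r.json());
}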
file-upload.service.ts:
import {
  BindingScope,
  config,
  ContextTags,
  injectable,
  Provider,
} from '@loopback/core';
import multer from 'multer';
import {FILE_UPLOAD_SERVICE} from '../keys';
import {FileUploadHandler} from '../types';

/**
 * A provider to return an `Express` request handler from `multer` middleware
 */
@injectable({
  scope: BindingScope.TRANSIENT,
  tags: {[ContextTags.KEY]: FILE_UPLOAD_SERVICE},
})
export class FileUploadProvider implements Provider<FileUploadHandler> {
  constructor(@config() private options: multer.Options = {}) {
    if (!this.options.storage) {
      // Default to in-memory storage
      this.options.storage = multer.memoryStorage();
    }
  }

  value(): FileUploadHandler {
    return multer(this.options).any();
  }
}
application.ts:
import { BootMixin } from '@loopback/boot';
import { ApplicationConfig } from '@loopback/core';
import { RepositoryMixin } from '@loopback/repository';
import { RestApplication } from '@loopback/rest';
import { ServiceMixin } from '@loopback/service-proxy';
import multer from 'multer';
import path from 'path';
import { FILE_UPLOAD_SERVICE, STORAGE_DIRECTORY } from './keys';

export class LbSmartApplication extends BootMixin(
  ServiceMixin(RepositoryMixin(RestApplication)),
) {
  constructor(options: ApplicationConfig = {}) {
    super(options);

    // ... omitted

    this.configureFileUpload(options.fileStorageDirectory);
  }

  /**
   * Configure `multer` options for file upload
   */
  protected configureFileUpload(destination?: string) {
    // Upload files to `dist/.sandbox` by default
    destination = destination ?? path.join(__dirname, '../.sandbox');
    this.bind(STORAGE_DIRECTORY).to(destination);
    const multerOptions: multer.Options = {
      storage: multer.diskStorage({
        destination,
        filename: (req, file, cb) => {
          // Re-decode the name: multer parses it as latin1, which garbles UTF-8 names
          file.originalname = Buffer.from(file.originalname, 'latin1').toString('utf8');
          let originalname = file.originalname;
          // For chunk uploads, store each part as `${key}-${partNumber}`
          if (req.body && req.body.key && req.body.partNumber) {
            originalname = `${req.body.key}-${req.body.partNumber}`;
          }
          cb(null, originalname);
        },
      }),
    };
    // Configure the file upload service with multer options
    this.configure(FILE_UPLOAD_SERVICE).to(multerOptions);
  }
}
In information security, the MD5, SHA1, and SHA256 algorithms come up constantly. All three are hash (digest) algorithms: they accept input of arbitrary length, produce fixed-length output, and are one-way (the message cannot be recovered from its hash).
About MD5
MD5 is a cryptographic hash: given the output you cannot recover the original plaintext, because the process is irreversible. (It was designed so that two different inputs would not produce the same output, although MD5 collisions can in fact be constructed today.) There is therefore no algorithm that "decrypts" MD5; attacks rely on exhaustion: hash every candidate plaintext, build a one-to-one lookup table from hash to plaintext, and recover a password by matching its hash against the table.
About SHA1
SHA1 is a cryptographic hash function that produces a 160-bit (20-byte) value called a message digest, usually rendered as 40 hexadecimal characters. Input length is unlimited and is processed in 512-bit blocks; the output is a 160-bit digest. SHA-1 is one-way, was designed to resist collisions, and has a good avalanche effect (though practical SHA-1 collisions have since been demonstrated).
About SHA256
SHA256 is likewise a cryptographic hash function. For a message of any length it produces a 256-bit digest, representable as a 64-character hexadecimal string. SHA256 is one member of the SHA-2 family, which comprises six standards: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256.
About RSA
RSA is the classic asymmetric algorithm (by contrast, a symmetric algorithm, also called a traditional cipher, uses the same key for encryption and decryption). It provides encryption/decryption as well as digital signatures (signing and verification).
Encryption/decryption: encrypt with the public key, decrypt with the private key. Digital signatures (colloquially, signing and verifying): sign with the private key, verify with the public key.
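A small Node.js sketch of both RSA uses, built on the standard crypto module (the key size and messages are arbitrary demo values):

import * as crypto from 'crypto';

// Generate a demo RSA key pair (2048-bit, PEM-encoded)
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});

// Encryption: public key encrypts, private key decrypts
const encrypted = crypto.publicEncrypt(publicKey, Buffer.from('hello'));
const decrypted = crypto.privateDecrypt(privateKey, encrypted);
console.log(decrypted.toString()); // "hello"

// Signatures: private key signs, public key verifies
const data = Buffer.from('message to sign');
const signature = crypto.sign('sha256', data, privateKey);
console.log(crypto.verify('sha256', data, publicKey, signature)); // true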
What are the differences between MD5, SHA1, and SHA256?
Similarities:
All are cryptographic hash functions, and the hashing is irreversible (one-way);
All accept input of any length, and none can rule out collisions entirely (a fixed-size output over unbounded input cannot be collision-free).
Differences:
1. Digest length: an MD5 checksum is 16 bytes (128 bits); SHA1 is 20 bytes (160 bits); SHA256 is 32 bytes (256 bits).
2. Speed: SHA256 is the slowest, then SHA1, with MD5 the fastest.
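Those digest lengths are easy to verify with Node's built-in crypto module (which a back end could also use in place of SparkMD5):

import * as crypto from 'crypto';

// Hash the same input with all three algorithms and compare digest sizes
for (const algo of ['md5', 'sha1', 'sha256']) {
  const digest = crypto.createHash(algo).update('hello world').digest('hex');
  console.log(algo, digest.length / 2, 'bytes:', digest);
}
// md5    16 bytes
// sha1   20 bytes
// sha256 32 bytes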
How secure are MD5, SHA1, and SHA256?
SHA256 is the most secure of the three, followed by SHA1, with MD5 last. The price of SHA256's security is that it takes noticeably longer to compute than the other two.
Can MD5, SHA1, and SHA256 be decrypted?
Strictly speaking, no: they are digests, not ciphers. SHA256 is one of the most widely used algorithms today, and compared with MD5 and SHA1 it is considered secure, with no practical collisions known. MD5 is another matter: used on its own it is vulnerable to lookup (rainbow-table) attacks, where attackers hash likely plaintexts in advance, store the hash-to-plaintext mappings in a database, and then simply look a captured hash up, effectively "decrypting" the MD5.