
Implementing a Voice Assistant Component with Speech Recognition in React


The front end accesses the user's media devices to record audio; the back end parses the speech and returns feedback.

The component below demonstrates how to use the navigator.mediaDevices.getUserMedia API to record audio and, when recording stops, send the recorded audio blob to a server.

It includes a single button that starts and stops recording, which demonstrates the interaction with MediaRecorder.
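One detail worth noting before building the component: MediaRecorder does not produce WAV in current browsers; Chrome and Firefox typically emit WebM or Ogg containers with the Opus codec. As a small sketch (the candidate list below is an assumption, adjust it to whatever your back end accepts), you can probe for a supported type with MediaRecorder.isTypeSupported:

// Pick the first audio MIME type the current browser's MediaRecorder supports.
// The candidate list is an assumption; extend it to match what your server accepts.
const pickSupportedMimeType = () => {
  const candidates = ['audio/webm;codecs=opus', 'audio/ogg;codecs=opus', 'audio/mp4'];
  return candidates.find((type) => MediaRecorder.isTypeSupported(type)) || '';
};

// Usage when creating the recorder (an empty string lets the browser pick a default):
// const recorder = new MediaRecorder(stream, { mimeType: pickSupportedMimeType() });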

Create an AudioRecorder child component

import React, { useEffect, useState } from 'react';

const AudioRecorder = ({ onUploadComplete }) => { // Accept a callback prop
  const [mediaRecorder, setMediaRecorder] = useState(null);
  const [isRecording, setIsRecording] = useState(false);

  useEffect(() => {
    const requestUserMedia = async () => {
      try {
        const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
        const recorder = new MediaRecorder(stream);
        setMediaRecorder(recorder);
      } catch (err) {
        console.error("Error accessing media devices:", err);
      }
    };

    requestUserMedia();
  }, []);

  const startRecording = () => {
    if (mediaRecorder) {
      mediaRecorder.start();
      setIsRecording(true);

      const audioChunks = [];
      mediaRecorder.ondataavailable = e => {
        audioChunks.push(e.data);
      };

      mediaRecorder.onstop = () => {
        // MediaRecorder typically produces WebM/Opus rather than WAV, so label the blob accordingly.
        const audioBlob = new Blob(audioChunks, { type: 'audio/webm;codecs=opus' });
        sendAudioToServer(audioBlob);
      };
    }
  };

  const stopRecording = () => {
    if (mediaRecorder) {
      mediaRecorder.stop();
      setIsRecording(false);
    }
  };

  // Function to send the audio blob to a server
  const sendAudioToServer = async (audioBlob) => {
    const formData = new FormData();
    formData.append('file', audioBlob, 'audio.webm');

    try {
      const response = await fetch('https://yourserver.com/audio/upload', {
        method: 'POST',
        body: formData,
      });

      if (response.ok) {
        const result = await response.json();
        console.log('Audio uploaded successfully:', result);
        onUploadComplete(true, result); // Call the callback with success status and result
      } else {
        console.error('Upload failed:', response.statusText);
        onUploadComplete(false, response.statusText); // Call the callback with failure status
      }
    } catch (error) {
      console.error('Error sending audio to server:', error);
      onUploadComplete(false, error); // Call the callback with error status
    }
  };

  return (
    <div>
      <button onClick={isRecording ? stopRecording : startRecording}>
        {isRecording ? 'Stop Recording' : 'Start Recording'}
      </button>
    </div>
  );
};

export default AudioRecorder;


Calling it from a parent component

The parent receives the API response through onUploadComplete and can run any follow-up logic there.

import React from 'react';
import AudioRecorder from './AudioRecorder'; // Adjust the import path as needed

const ParentComponent = () => {
  const handleUploadComplete = (success, result) => {
    if (success) {
      console.log('Upload successful:', result);
      // Handle success (e.g., update state or show success message)
    } else {
      console.error('Upload failed:', result);
      // Handle failure (e.g., update state or show error message)
    }
  };

  return (
    <div>
      <h1>Audio Recorder</h1>
      <AudioRecorder onUploadComplete={handleUploadComplete} />
    </div>
  );
};

export default ParentComponent;
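The article does not show the server side of https://yourserver.com/audio/upload. As a rough sketch only, assuming a Node back end with Express and multer (both are assumptions, not part of the original code), the endpoint receiving the 'file' field could look like this; the actual speech-recognition step is left as a placeholder:

// server.js — hypothetical upload endpoint (Express and multer are assumed dependencies).
const express = require('express');
const multer = require('multer');

const app = express();
const upload = multer({ dest: 'uploads/' }); // temporarily store uploads on disk

// The field name 'file' matches formData.append('file', ...) in AudioRecorder.
app.post('/audio/upload', upload.single('file'), (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: 'No audio file received' });
  }

  // Placeholder: pass req.file.path to whatever speech-recognition service you use
  // and build the feedback text from its result.
  const feedback = `Received ${req.file.originalname} (${req.file.size} bytes)`;

  res.json({ text: feedback });
});

app.listen(3000, () => console.log('Upload server listening on port 3000'));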


Parsing the speech on the front end and displaying it directly on the page

This approach uses the browser's SpeechRecognition API for real-time speech-to-text and passes the transcription to the parent component through a callback.

The component covers the following:

  1. Initialize SpeechRecognition when the component mounts.
  2. Listen for speech input and convert it to text in real time.
  3. Pass the converted text to the parent component via a callback.

Create the SpeechToText component

import React, { useEffect, useState } from 'react';

const SpeechToText = ({ onTranscriptUpdate }) => {
  const [transcript, setTranscript] = useState('');

  useEffect(() => {
    // Check whether the browser supports speech recognition
    const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
    if (SpeechRecognition) {
      const recognition = new SpeechRecognition();
      recognition.continuous = true; // keep listening continuously
      recognition.interimResults = true; // return interim (in-progress) results
      recognition.lang = "zh-CN"; // recognition language
      const result = [];

      recognition.onresult = (event) => {
        // An alternative is to rebuild the text from event.resultIndex on every event,
        // but that tends to skip the interim state and only output the finished sentence:
        // let currentTranscript = '';
        // for (let i = event.resultIndex; i < event.results.length; ++i) {
        //   currentTranscript += event.results[i][0].transcript;
        // }
        // setTranscript(currentTranscript);
        // onTranscriptUpdate(currentTranscript); // pass the result to the parent

        const len = event.results.length;
        if (event.results[len - 1].isFinal) {
          // Final result: append the finished phrase to the accumulated text.
          if (event.results[len - 1][0].transcript) {
            result.push(event.results[len - 1][0].transcript);
            const finalText = result.join('');
            setTranscript(finalText);
            onTranscriptUpdate(finalText);
          }
        } else {
          // Interim result: show the accumulated text plus the in-progress phrase.
          const interim = event.results[len - 1][0].transcript;
          const interimText = result.join('') + interim;
          setTranscript(interimText);
          onTranscriptUpdate(interimText);
        }
      };

      recognition.onerror = (event) => {
        console.error("Speech recognition error", event.error);
      };

      recognition.start(); // start recognition

      return () => {
        recognition.stop(); // stop recognition when the component unmounts
      };
    } else {
      console.log("Speech recognition not supported in this browser.");
    }
  }, [onTranscriptUpdate]);

  return (
    <div>
      <p>实时转录: {transcript}</p>
    </div>
  );
};

export default SpeechToText;
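One caveat with this component: in Chrome, recognition often ends on its own after a stretch of silence even when continuous = true. A common workaround, sketched below under that assumption, is to restart recognition from onend while the component is still mounted. These lines would go inside the same useEffect, just before recognition.start():

// Chrome may end recognition after a period of silence even with continuous = true,
// so restart it from onend as long as the component is still mounted.
let active = true;
recognition.onend = () => {
  if (active) {
    recognition.start();
  }
};

// And the effect's cleanup should flip the flag before stopping:
// return () => {
//   active = false;
//   recognition.stop();
// };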


Calling it from a parent component

import React, { useState, useCallback } from 'react';
import SpeechToText from './SpeechToText'; // Adjust the import path as needed

const ParentComponent = () => {
  const [transcript, setTranscript] = useState('');

  // useCallback keeps the callback reference stable, so SpeechToText's effect
  // does not stop and restart recognition on every re-render.
  const handleTranscriptUpdate = useCallback((updatedTranscript) => {
    setTranscript(updatedTranscript);
  }, []);

  return (
    <div>
      <h1>语音实时转文字</h1>
      <SpeechToText onTranscriptUpdate={handleTranscriptUpdate} />
      <p>转录结果: {transcript}</p>
    </div>
  );
};

export default ParentComponent;
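To get closer to the voice assistant in the title, the two approaches can be combined: record and upload the audio for server-side processing while showing the live browser-side transcript. A minimal sketch, assuming both components above are available at the paths shown and that the server response carries a text field (an assumption about the back end):

import React, { useState, useCallback } from 'react';
import AudioRecorder from './AudioRecorder'; // uploads the recording to the server
import SpeechToText from './SpeechToText';   // live browser-side transcription

const VoiceAssistant = () => {
  const [transcript, setTranscript] = useState('');
  const [serverReply, setServerReply] = useState('');

  // Stable callbacks so SpeechToText's effect is not torn down on every render.
  const handleTranscriptUpdate = useCallback((text) => setTranscript(text), []);

  const handleUploadComplete = useCallback((success, result) => {
    // The response shape depends on your back end; `result.text` is an assumption.
    setServerReply(success ? (result.text || '') : 'Upload failed');
  }, []);

  return (
    <div>
      <h1>Voice Assistant</h1>
      <AudioRecorder onUploadComplete={handleUploadComplete} />
      <SpeechToText onTranscriptUpdate={handleTranscriptUpdate} />
      <p>Live transcript: {transcript}</p>
      <p>Server feedback: {serverReply}</p>
    </div>
  );
};

export default VoiceAssistant;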
