The system provides a variety of APIs to help developers implement audio recording. Different APIs suit different recording output formats, audio usage scenarios, and development languages. Choosing the right audio recording API reduces development effort and delivers a better recording result.
In addition to the approach described here, audio recording can also be implemented through Media Kit.
An application can use the microphone to record audio, but this is a privacy-sensitive operation. Before accessing the microphone, the application must request the "ohos.permission.MICROPHONE" permission from the user.
For details about how to use and manage the microphone, see [Microphone Management].
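For illustration, here is a minimal sketch of requesting this permission at runtime with abilityAccessCtrl. It assumes the permission is also declared in the module's module.json5 and that a UIAbility context is available; the function name requestMicrophonePermission is an illustrative assumption, not part of the system API.

// Minimal sketch: request the microphone permission at runtime.
// Assumption: 'ohos.permission.MICROPHONE' is also declared under requestPermissions in module.json5.
import { abilityAccessCtrl, Permissions, common } from '@kit.AbilityKit';
import { BusinessError } from '@kit.BasicServicesKit';

const permissions: Array<Permissions> = ['ohos.permission.MICROPHONE'];

function requestMicrophonePermission(context: common.UIAbilityContext): void {
  let atManager = abilityAccessCtrl.createAtManager();
  atManager.requestPermissionsFromUser(context, permissions).then((result) => {
    // authResults[i] === 0 means the corresponding permission was granted.
    if (result.authResults[0] === 0) {
      console.info('Microphone permission granted.');
    } else {
      console.info('Microphone permission denied by the user.');
    }
  }).catch((err: BusinessError) => {
    console.error(`requestPermissionsFromUser failed, code is ${err.code}, message is ${err.message}`);
  });
}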
AudioCapturer is an audio capturer used to record PCM (Pulse Code Modulation) audio data. It is suitable for developers with audio development experience who need more flexible recording capabilities.
Using AudioCapturer to record audio involves creating an AudioCapturer instance, configuring the audio capture parameters, starting and stopping capture, and releasing resources. This guide walks through a single audio recording session to show how to use AudioCapturer; it is best read together with the [AudioCapturer API reference].
The figure below shows the state transitions of AudioCapturer. After the instance is created, calling the corresponding methods moves it into specific states and triggers the corresponding behavior. Note that calling an inappropriate method in a given state may cause AudioCapturer errors. Developers are advised to check the state before calling state-transition methods to avoid unexpected results.
Figure 1 AudioCapturer state transition diagram
Use the on('stateChange') method to listen for AudioCapturer state changes. For the value and description of each state, see [AudioState].
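As an illustration, a minimal sketch of registering this listener, assuming an AudioCapturer instance has already been created (see the creation step below):

// Minimal sketch: listen for state changes on an existing AudioCapturer instance.
// Assumption: audioCapturer has already been created via audio.createAudioCapturer().
audioCapturer.on('stateChange', (state: audio.AudioState) => {
  console.info(`AudioCapturer state changed to: ${state}`);
});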
Configure the audio capture parameters and create an AudioCapturer instance:

import { audio } from '@kit.AudioKit';

let audioStreamInfo: audio.AudioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // Sampling rate
  channels: audio.AudioChannel.CHANNEL_2, // Channels
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sample format
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding type
};

let audioCapturerInfo: audio.AudioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_MIC, // Audio source type
  capturerFlags: 0 // Audio capturer flags
};

let audioCapturerOptions: audio.AudioCapturerOptions = {
  streamInfo: audioStreamInfo,
  capturerInfo: audioCapturerInfo
};

audio.createAudioCapturer(audioCapturerOptions, (err, data) => {
  if (err) {
    console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`);
  } else {
    console.info('Invoke createAudioCapturer succeeded.');
    let audioCapturer = data;
  }
});
Call on('readData') to subscribe to the callback that delivers the captured audio data; in this example the callback writes the PCM data to a file:

import { BusinessError } from '@kit.BasicServicesKit';
import { fileIo } from '@kit.CoreFileKit';

let bufferSize: number = 0;

class Options {
  offset?: number;
  length?: number;
}

let path = getContext().cacheDir;
let filePath = path + '/StarWars10s-2C-48000-4SW.wav';
let file: fileIo.File = fileIo.openSync(filePath, fileIo.OpenMode.READ_WRITE | fileIo.OpenMode.CREATE);

let readDataCallback = (buffer: ArrayBuffer) => {
  let options: Options = {
    offset: bufferSize,
    length: buffer.byteLength
  };
  fileIo.writeSync(file.fd, buffer, options);
  bufferSize += buffer.byteLength;
};

audioCapturer.on('readData', readDataCallback);
Call start() to move the AudioCapturer to the running state and start capturing audio:

import { BusinessError } from '@kit.BasicServicesKit';
audioCapturer.start((err: BusinessError) => {
if (err) {
console.error(`Capturer start failed, code is ${err.code}, message is ${err.message}`);
} else {
console.info('Capturer start success.');
}
});
Call stop() to stop capturing:

import { BusinessError } from '@kit.BasicServicesKit';
audioCapturer.stop((err: BusinessError) => {
if (err) {
console.error(`Capturer stop failed, code is ${err.code}, message is ${err.message}`);
} else {
console.info('Capturer stopped.');
}
});
Call release() to destroy the instance and release its resources:

import { BusinessError } from '@kit.BasicServicesKit';
audioCapturer.release((err: BusinessError) => {
if (err) {
console.error(`capturer release failed, code is ${err.code}, message is ${err.message}`);
} else {
console.info('capturer released.');
}
});
The following is the complete sample code for recording audio with AudioCapturer.
import { audio } from '@kit.AudioKit';
import { BusinessError } from '@kit.BasicServicesKit';
import { fileIo } from '@kit.CoreFileKit';

const TAG = 'AudioCapturerDemo';

class Options {
  offset?: number;
  length?: number;
}

let context = getContext(this);
let bufferSize: number = 0;
let audioCapturer: audio.AudioCapturer | undefined = undefined;
let audioStreamInfo: audio.AudioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_48000, // Sampling rate
  channels: audio.AudioChannel.CHANNEL_2, // Channels
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // Sample format
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // Encoding type
}
let audioCapturerInfo: audio.AudioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_MIC, // Audio source type
  capturerFlags: 0 // Audio capturer flags
}
let audioCapturerOptions: audio.AudioCapturerOptions = {
  streamInfo: audioStreamInfo,
  capturerInfo: audioCapturerInfo
}
let path = getContext().cacheDir;
let filePath = path + '/StarWars10s-2C-48000-4SW.wav';
let file: fileIo.File = fileIo.openSync(filePath, fileIo.OpenMode.READ_WRITE | fileIo.OpenMode.CREATE);
let readDataCallback = (buffer: ArrayBuffer) => {
  let options: Options = {
    offset: bufferSize,
    length: buffer.byteLength
  }
  fileIo.writeSync(file.fd, buffer, options);
  bufferSize += buffer.byteLength;
}

// Initialize: create the AudioCapturer instance and register the data callback
function init() {
  audio.createAudioCapturer(audioCapturerOptions, (err, capturer) => { // Create an AudioCapturer instance
    if (err) {
      console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`);
      return;
    }
    console.info(`${TAG}: create AudioCapturer success`);
    audioCapturer = capturer;
    if (audioCapturer !== undefined) {
      (audioCapturer as audio.AudioCapturer).on('readData', readDataCallback);
    }
  });
}

// Start an audio capture session
function start() {
  if (audioCapturer !== undefined) {
    let stateGroup = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf((audioCapturer as audio.AudioCapturer).state.valueOf()) === -1) { // Capture can start only when the state is STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED
      console.error(`${TAG}: start failed`);
      return;
    }
    // Start capturing
    (audioCapturer as audio.AudioCapturer).start((err: BusinessError) => {
      if (err) {
        console.error('Capturer start failed.');
      } else {
        console.info('Capturer start success.');
      }
    });
  }
}

// Stop capturing
function stop() {
  if (audioCapturer !== undefined) {
    // The capturer can be stopped only when its state is STATE_RUNNING or STATE_PAUSED
    if ((audioCapturer as audio.AudioCapturer).state.valueOf() !== audio.AudioState.STATE_RUNNING &&
      (audioCapturer as audio.AudioCapturer).state.valueOf() !== audio.AudioState.STATE_PAUSED) {
      console.info('Capturer is not running or paused');
      return;
    }
    // Stop capturing
    (audioCapturer as audio.AudioCapturer).stop((err: BusinessError) => {
      if (err) {
        console.error('Capturer stop failed.');
      } else {
        fileIo.close(file);
        console.info('Capturer stop success.');
      }
    });
  }
}

// Destroy the instance and release resources
function release() {
  if (audioCapturer !== undefined) {
    // release() is allowed only when the state is neither STATE_RELEASED nor STATE_NEW
    if ((audioCapturer as audio.AudioCapturer).state.valueOf() === audio.AudioState.STATE_RELEASED ||
      (audioCapturer as audio.AudioCapturer).state.valueOf() === audio.AudioState.STATE_NEW) {
      console.info('Capturer already released');
      return;
    }
    // Release resources
    (audioCapturer as audio.AudioCapturer).release((err: BusinessError) => {
      if (err) {
        console.error('Capturer release failed.');
      } else {
        console.info('Capturer release success.');
      }
    });
  }
}
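For reference, a minimal usage sketch that wires the functions above to buttons on a page. It assumes init(), start(), stop(), and release() are defined in the same ArkTS file as shown above; the page name and layout are illustrative only.

// Illustrative ArkUI page that triggers the sample functions above.
// Assumption: init/start/stop/release are defined in this file.
@Entry
@Component
struct AudioCapturerDemoPage {
  build() {
    Column({ space: 12 }) {
      Button('Init capturer').onClick(() => init())
      Button('Start recording').onClick(() => start())
      Button('Stop recording').onClick(() => stop())
      Button('Release capturer').onClick(() => release())
    }
    .width('100%')
    .padding(16)
  }
}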