Do you need additional support? Technical support via Issues and Discussions is provided only to HaishinKit contributors and academic researchers. By becoming a sponsor, we can provide you with the support you need.
Sponsor: $50 per month: technical support via GitHub Issues/Discussions, with priority response.
| Project name | Notes | License |
| --- | --- | --- |
| HaishinKit for Android. | Camera and microphone streaming library via RTMP for Android. | BSD 3-Clause "New" or "Revised" License |
| HaishinKit for Flutter. | Camera and microphone streaming library via RTMP for Flutter. | BSD 3-Clause "New" or "Revised" License |
Note
I am working on a preliminary implementation of MOQT for research purposes. If you are interested, please check out the repository.
Starting with version 2.0.0, multi-streaming is supported, allowing you to broadcast live to different services at the same time. Views also support this, making it possible to verify the raw video data.
let mixer = MediaMixer()
let connection0 = RTMPConnection()
let stream0 = RTMPStream(connection: connection0) // for Y Service.
let connection1 = RTMPConnection()
let stream1 = RTMPStream(connection: connection1) // for F Service.
let view = MTHKView()
view.track = 0 // Video track number to display: 0, 1, or UInt8.max.
mixer.addOutput(stream0)
mixer.addOutput(stream1)
mixer.addOutput(view)
let view2 = MTHKView()
stream0.addOutput(view2)
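To actually go live, each RTMPStream is published over its own RTMPConnection. A minimal sketch, assuming the async connect/publish API of 2.x; the endpoint URLs and stream keys below are placeholders:

```swift
Task {
    do {
        // Placeholder endpoints and stream keys; replace them with your services' values.
        _ = try await connection0.connect("rtmp://ingest.example-y.com/live")
        _ = try await stream0.publish("yServiceStreamKey")
        _ = try await connection1.connect("rtmp://ingest.example-f.com/live")
        _ = try await stream1.publish("fServiceStreamKey")
    } catch {
        print(error)
    }
}
```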
With the offscreen rendering feature, any text or bitmap can be displayed on top of the video during broadcasting or playback. This enables a variety of applications, such as watermarks and time displays (see the sketch below).
(Screenshots of ingest and playback with offscreen rendering omitted.)
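For example, a caption or time display can be added as a text overlay. A minimal sketch, assuming the ScreenObject API (TextScreenObject and Screen.addChild); exact property names and whether await is required may vary between releases:

```swift
Task {
    // Switch the video mixer to offscreen rendering (see VideoMixerSettings later in this document).
    var settings = VideoMixerSettings()
    settings.mode = .offscreen
    await mixer.setVideoMixerSettings(settings)

    // A text overlay anchored to the bottom-right corner, e.g. a watermark caption.
    let text = TextScreenObject()
    text.string = "LIVE"
    text.horizontalAlignment = .right
    text.verticalAlignment = .bottom
    text.layoutMargin = .init(top: 0, left: 0, bottom: 16, right: 16)
    try? await mixer.screen.addChild(text)
}
```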
| Feature | PiPHKView | MTHKView |
| --- | --- | --- |
| Engine | AVSampleBufferDisplayLayer | Metal |
| Publish | ✔ | ✔ |
| Playback | ✔ | ✔ |
| VisualEffect | ✔ | ✔ |
| MultiCamera | ✔ | ✔ |
| PictureInPicture | ✔ | |
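Both views are registered as outputs in the same way as in the multi-streaming example above. A hedged sketch, assuming PiPHKView exposes the same videoGravity and output registration pattern as MTHKView:

```swift
// PiPHKView (AVSampleBufferDisplayLayer-backed) is the choice when Picture in Picture is needed;
// MTHKView (Metal-backed) is the general-purpose rendering view.
let pipView = PiPHKView(frame: .zero)
pipView.videoGravity = .resizeAspect
stream0.addOutput(pipView)
```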
The example projects are available for iOS, macOS (ARM), tvOS, and visionOS.
Important
Before posting an issue on GitHub Issues, please check whether the same problem also occurs with the latest examples.
You can verify this by changing the URL of the following file.
git clone https://github.com/shogo4405/HaishinKit.swift.git
cd HaishinKit.swift
open Examples/Examples.xcodeproj
| Version | Xcode | Swift |
| --- | --- | --- |
| 2.0.0+ | 16.0+ | 5.10+ |
| 1.9.0+ | 15.4+ | 5.10+ |
| - | iOS | tvOS | macOS | visionOS | watchOS |
| --- | --- | --- | --- | --- | --- |
| HaishinKit | 13.0+ | 13.0+ | 10.15+ | 1.0+ | - |
| SRTHaishinKit | 13.0+ | 13.0+ | 10.15+ | 1.0+ | - |
Please make sure your Info.plist contains the camera and microphone usage descriptions (NSCameraUsageDescription and NSMicrophoneUsageDescription) for the following deployment targets:
- iOS 10.0+
- macOS 10.14+
- tvOS 17.0+
Make sure you set up and activate your AVAudioSession on iOS.
import AVFoundation
let session = AVAudioSession.sharedInstance()
do {
try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
try session.setActive(true)
} catch {
print(error)
}
let mixer = MediaMixer()
await mixer.setFrameRate(30)
await mixer.setSessionPreset(AVCaptureSession.Preset.medium)
// Do not call beginConfiguration() and commitConfiguration() within this closure; they are already called internally.
await mixer.configuration { session in
session.automaticallyConfiguresApplicationAudioSession = true
}
Specifies the audio device settings.
let audioDevice = AVCaptureDevice.default(for: .audio)
try? await mixer.attachAudio(audioDevice, track: 0) { audioDeviceUnit in }
If you want to mix multiple audio tracks, enable the feature flag.
await mixer.setMultiTrackAudioMixingEnabled(true)
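With multi-track mixing enabled, additional audio sources can be attached to their own track numbers. A sketch; the device lookup for the second track (for example, an external microphone) is an assumption of this example:

```swift
// Attach an additional audio device to track 1; replace the lookup with your own device selection.
let additionalAudioDevice = AVCaptureDevice.default(for: .audio)
try? await mixer.attachAudio(additionalAudioDevice, track: 1) { _ in }
```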
When you specify a sample rate, resampling is performed. In addition, for multi-channel sources, downmixing can be applied.
// Setting the value to 0 will match the value specified in mainTrack.
var settings = AudioMixerSettings(
    sampleRate: 44100,
    channels: 0
)
settings.tracks = [
    0: .init(
        isMuted: false,
        downmix: true,
        channelMap: nil
    )
]
await mixer.setAudioMixerSettings(settings)
var audioSettings = AudioCodecSettings()
/// Specifies the bitRate of audio output.
audioSettings.bitrate = 64 * 1000
/// Specifies whether to mix down the channels. Currently, it supports input sources with 4, 5, 6, and 8 channels.
audioSettings.downmix = true
/// Specifies the map of the output to input channels.
audioSettings.channelMap = nil
await stream.setAudioSettings(audioSettings)
Specifies the video capture settings.
let front = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
do {
try await mixer.attachCamera(front, track: 0) { videoUnit in
videoUnit.isVideoMirrored = true
videoUnit.preferredVideoStabilizationMode = .standard
videoUnit.colorFormat = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
}
} catch {
print(error)
}
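For multi-camera capture, additional cameras are attached to their own video track numbers. A sketch, assuming the device supports AVCaptureMultiCamSession; depending on the release, multi-camera capture may also need to be enabled when constructing the MediaMixer:

```swift
// Attach the back camera to video track 1, alongside the front camera on track 0.
let back = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
do {
    try await mixer.attachCamera(back, track: 1) { videoUnit in
        videoUnit.preferredVideoStabilizationMode = .standard
    }
} catch {
    print(error)
}
```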
var videoMixerSettings = VideoMixerSettings()
/// Specifies the image rendering mode.
videoMixerSettings.mode = .offscreen // or .passthrough
/// Specifies whether to mute (freeze) the video signal or not.
videoMixerSettings.isMuted = false
/// Specifies the main track number.
videoMixerSettings.mainTrack = 0
await mixer.setVideoMixerSettings(videoMixerSettings)
var videoSettings = VideoCodecSettings(
videoSize: .init(width: 854, height: 480),
profileLevel: kVTProfileLevel_H264_Baseline_3_1 as String,
bitRate: 640 * 1000,
maxKeyFrameIntervalDuration: 2,
scalingMode: .trim,
bitRateMode: .average,
allowFrameReordering: nil,
isHardwareEncoderEnabled: true
)
await stream.setVideoSettings(videoSettings)
// Specifies the recording settings. 0 means the same as the input.
let recorder = HKStreamRecorder()
stream.addOutput(recorder)
try await recorder.startRecording(fileName, settings: [
AVMediaType.audio: [
AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
AVSampleRateKey: 0,
AVNumberOfChannelsKey: 0,
// AVEncoderBitRateKey: 128000,
],
AVMediaType.video: [
AVVideoCodecKey: AVVideoCodecH264,
AVVideoHeightKey: 0,
AVVideoWidthKey: 0,
/*
AVVideoCompressionPropertiesKey: [
AVVideoMaxKeyFrameIntervalDurationKey: 2,
AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline30,
AVVideoAverageBitRateKey: 512000
]
*/
]
])
try await recorder.stopRecording()
BSD-3-Clause