Learn how to quickly set up and integrate bitHuman SDK into your application.
bitHuman SDK enables you to build interactive agents that respond realistically to audio input. This guide covers installation instructions, a hands-on example, and an overview of the core API features.
Install the SDK from PyPI:
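```bash
pip install bithuman
```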
To run the examples, you’ll need to obtain the necessary credentials and models from the bitHuman platform. Follow these steps:

1. Access Developer Settings
2. Get API Secret
3. Download Model
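Once you have the secret and a model file, a common pattern is to expose them to your application through environment variables. A sketch (the variable names below are assumptions, not official ones):

```bash
# Hypothetical variable names -- use whatever names your application reads.
export BITHUMAN_API_SECRET="your-api-secret"
export BITHUMAN_MODEL_PATH="/path/to/downloaded/model"
```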
We provide several pre-trained models that you can use directly.
Run a visual agent that uses bitHuman for visual rendering, the OpenAI Realtime API for voice-to-voice interaction, and LiveKit for orchestration.
Make sure to add OPENAI_API_KEY to your .env file for voice responses; to run a LiveKit room over WebRTC, also add LIVEKIT_API_KEY.
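For reference, a minimal .env might look like the following (all values are placeholders; a LiveKit deployment typically also needs an API secret and a server URL):

```bash
# .env -- placeholder values
OPENAI_API_KEY=sk-...
LIVEKIT_API_KEY=...
# LiveKit connections typically also require:
LIVEKIT_API_SECRET=...
LIVEKIT_URL=wss://your-livekit-server
```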
Stream a bitHuman avatar to a LiveKit room using WebRTC, while controlling the avatar’s speech through a WebSocket interface.
Basic example that captures audio from your microphone, processes it with the bitHuman SDK, and displays the animated avatar in a local window.
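A minimal sketch of that loop is shown below. AsyncBithuman and AudioChunk come from the SDK, but the factory arguments, audio format, and the push/stream method names used here are assumptions for illustration; check the API overview below and the official reference for the actual surface.

```python
import asyncio

import numpy as np
from bithuman import AsyncBithuman, AudioChunk  # VideoFrame objects come back from the runtime


async def main() -> None:
    # Assumed factory signature: the API secret from Developer Settings
    # and the path to a downloaded model (parameter names are hypothetical).
    runtime = await AsyncBithuman.create(
        api_secret="<your-api-secret>",
        model_path="/path/to/downloaded/model",
    )

    # Feed captured microphone audio as 16 kHz mono PCM (assumed format).
    # Here, 100 ms of silence stands in for real microphone capture.
    chunk = AudioChunk(data=np.zeros(1600, dtype=np.int16), sample_rate=16000)
    await runtime.push_audio(chunk)  # hypothetical method name

    # Consume rendered VideoFrame objects and draw them in a local window,
    # e.g. with OpenCV (hypothetical streaming interface).
    async for frame in runtime.run():
        ...  # frame -> display


asyncio.run(main())
```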
Run a LiveKit agent with the FastRTC WebRTC implementation.
This example covers the complete pipeline, from audio capture through SDK processing to rendering.
The agent animates in sync with the audio input, providing a realistic interactive experience.
bitHuman SDK offers a straightforward yet powerful API for creating interactive agents.
AsyncBithuman
The main class for interacting with bitHuman services.
runtime.interrupt()
Interrupts the agent's current response, stopping audio playback and animation.
AudioChunk
Represents audio data for processing.
VideoFrame
Represents a frame of visual output produced by the SDK.
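For example, to cut the avatar off when the user starts speaking again (barge-in), interrupt the current response. Here, runtime is assumed to be the AsyncBithuman instance from the sketch above:

```python
# Stop the agent's current speech and animation, e.g. on user barge-in.
# Depending on the API, this call may need to be awaited.
runtime.interrupt()
```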