Part III: Video
Text-to-video opens new horizons for diffusion models. The models generate a sequence of n frames, which are then assembled into animations and videos.
Open Stable_Vision_Stability_AI_Animation.ipynb.
Text-to-video with Stability AI animation
First, make sure you have signed up for Stability AI and have your API key: https://platform.stability.ai/docs/features/animation.
We will now install the Stability SDK for animations:
!pip install "stability_sdk[anim_ui]" # Install the Animation SDK
!git clone --recurse-submodules https://github.com/Stability-AI/stability-sdk # Clone the SDK repository and its submodules
We import the API client, set the host, and provide our API key:
from stability_sdk import api
STABILITY_HOST = "grpc.stability.ai:443"
STABILITY_KEY = "[ENTER YOUR KEY HERE]" # Paste your Stability AI API key
context = api.Context(STABILITY_HOST, STABILITY_KEY)
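Hard-coding the key in a cell works for a private notebook but is easy to leak when sharing. A minimal alternative sketch, using only the Python standard library (the environment variable name STABILITY_KEY is our own convention, not something the SDK requires), reads the key from the environment or prompts for it interactively:
import os
from getpass import getpass
# Prefer an environment variable; otherwise prompt without echoing the key
STABILITY_KEY = os.environ.get("STABILITY_KEY") or getpass("Enter your Stability AI API key: ")
context = api.Context(STABILITY_HOST, STABILITY_KEY)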
We now import the modules and configure the parameters. The following code uses the default Stability AI arguments:
from stability_sdk.animation...
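As a rough sketch of how those defaults are typically set up, the SDK exposes an AnimationArgs class whose attributes can be overridden before rendering. The class and attribute names below follow the public stability-sdk animation examples and are assumptions here; check them against the SDK version you installed:
from stability_sdk.animation import AnimationArgs, Animator
args = AnimationArgs() # start from the default Stability AI animation arguments
args.max_frames = 72 # illustrative override: total number of frames to render
args.seed = 42 # illustrative override: fixed seed for reproducible runs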