Producing high-quality customized videos remains a significant challenge, because video generation models are limited to their pre-trained knowledge. This limitation affects industries such as advertising, media production, education, and gaming, where customization and control of video generation is essential.
To address this, we developed a Video Retrieval Augmented Generation (VRAG) multimodal pipeline that transforms structured text into bespoke videos using a library of images as reference. Using Amazon Bedrock, Amazon Nova Reel, the Amazon OpenSearch Service vector engine, and Amazon Simple Storage Service (Amazon S3), the solution seamlessly integrates image retrieval, prompt-based video generation, and batch processing into a single automated workflow. Users provide an object of interest, and the solution retrieves the most relevant image from an indexed dataset. They then define an action prompt (for example, “Camera rotates clockwise”), which is combined with the retrieved image to generate the video. Structured prompts from text files allow multiple videos to be generated in a single execution, creating a scalable, reusable foundation for AI-assisted media generation.
In this post, we explore our approach to video generation through VRAG, transforming natural language text prompts and images into grounded, high-quality videos. Through this fully automated solution, you can generate realistic, AI-powered video sequences from structured text and image inputs, streamlining the video creation process.
Solution overview
Our solution is designed to take a structured text prompt, retrieve the most relevant image, and use Amazon Nova Reel for video generation. This solution integrates several components into a seamless workflow:
- Image retrieval and processing – Users provide an object of interest (for example, “blue sky”) and the solution queries the OpenSearch vector engine to retrieve the most relevant image from an indexed dataset, which contains pre-indexed images and descriptions. The most relevant image is then retrieved from an S3 bucket.
- Prompt-based video generation – Users define an action prompt (for example, “Camera pans down”), which is combined with the retrieved image to generate a video using Amazon Nova Reel.
- Batch processing for multiple prompts – The solution reads a list of text templates from prompts.txt, which contain placeholders to enable batch processing of multiple video generation requests with structured variations:
- Object placeholder – Dynamically replaced with the queried object
- Action placeholder – Dynamically replaced with the camera movement or scene motion
- Monitoring and storage – The video generation is asynchronous, so the solution monitors the job status. When it’s complete, the video is stored in an S3 bucket and automatically downloaded for preview. The generated videos are displayed in the notebook, with the corresponding prompt shown as a caption.
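The batch-processing step above reduces to a few lines of Python. The snippet below is a minimal sketch; the `{object}` and `{action}` placeholder tokens are our own illustration, and the actual tokens in your prompts.txt can be whatever your templates define.

```python
def render_prompts(templates, obj, action):
    """Fill each prompt template with the queried object and the action prompt."""
    return [t.format(object=obj, action=action) for t in templates]


# Example templates as they might appear in prompts.txt (hypothetical tokens)
templates = [
    "{action}, focusing on the {object}",
    "A cinematic shot of the {object}. {action}",
]

prompts = render_prompts(templates, "colorful kayak", "Camera pans down")
for p in prompts:
    print(p)
```

Each rendered prompt is then paired with the retrieved image and submitted as one video generation job.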
The following diagram illustrates the solution architecture.
The following diagram illustrates the end-to-end workflow using a Jupyter notebook.
This solution can serve the following use cases:
- Educational videos – Automatically creating instructional videos by pulling relevant images from a subject knowledge base
- Marketing videos – Creating targeted video ads by pulling images that align with specific demographics or product features
- Personalized content – Tailoring video content to individual users by retrieving images based on their specific interests
In the following sections, we break down each component, how it works, and how you can customize it for your own AI-driven video workflows.
Example input
In this section, we demonstrate the video generation capabilities of Amazon Nova Reel through two distinct input methods: text-only, and text and image inputs. These examples illustrate how video generation can be further customized by incorporating input images, in this scenario for advertising. For our example, a travel agency wants to create an advertisement featuring a beautiful beach scene from a specific location, panning to a kayak, to entice potential vacation bookings. We compare the results of using a text-only input approach vs. VRAG with a static image to achieve this goal.
Text-only input
For the text-only example, we use the input “Very slow pan down from blue sky to a colorful kayak floating on turquoise water.” We get the following result.
Text and image input
Using the same text prompt, the travel agency can now use a specific shot they took at their location. For this example, we use the following image.
The travel agency can now add content into their existing shot using VRAG. They use the same prompt: “Very slow pan down from blue sky to a colorful kayak floating on turquoise water.” This generates the following video.
Prerequisites
Before you deploy this solution, make sure the following prerequisites are in place:
Deploy the solution
For this post, we use an AWS CloudFormation template to deploy the solution in the US East (N. Virginia) AWS Region. For a list of Regions that support Amazon Nova Reel, see Model support by AWS Region in Amazon Bedrock. Complete the following steps:
- Choose Launch Stack to deploy the stack:
- Enter a name for the stack, such as vrag-blogpost, and follow the steps to deploy.
- On the AWS CloudFormation console, locate the vrag-blogpost stack and confirm that its status is CREATE_COMPLETE.
- On the SageMaker AI console, choose Notebooks in the navigation pane.
- On the Notebook instances tab, locate the notebook instance vrag-blogpost-notebook provisioned for this post and choose Open JupyterLab.
- Open the folder sample-video-rag to view the notebooks needed for this post.
Run notebooks
We have provided seven sequential notebooks, numbered from _00 to _06, with step-by-step instructions and objectives to help you build your understanding of a VRAG solution. Your output might vary from the examples in this post.
Image processing (notebook _00)
In _00_image_processing, you use Amazon Bedrock, Amazon S3, and SageMaker AI to perform the following actions:
- Process and resize images
- Generate Base64 encodings
- Store data in Amazon S3
- Generate image descriptions using Amazon Nova
- Create a visualization of the results
This notebook illustrates the following capabilities:
- Automated processing pipeline:
- Bulk image processing
- Intelligent resizing and optimization
- Base64 encoding for API compatibility
- Amazon S3 storage of images
- AI-powered analysis:
- Advanced image description generation
- Content-based image understanding
- Multi-modal AI integration
- Robust data management:
- Efficient storage organization
- Metadata extraction and indexing
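The encode-and-caption stage can be sketched in a few lines. The message schema below for asking Amazon Nova to describe an image is our assumption of the InvokeModel request format; verify the field names against the current Amazon Nova documentation before relying on it.

```python
import base64
import json


def encode_image(image_bytes):
    """Base64-encode raw image bytes for an InvokeModel request body."""
    return base64.b64encode(image_bytes).decode("utf-8")


def build_caption_request(image_b64, image_format="jpeg"):
    """Assemble a JSON body asking Amazon Nova to describe an image.
    The message field names here are an assumed schema, not verbatim
    from the notebook."""
    body = {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"image": {"format": image_format,
                               "source": {"bytes": image_b64}}},
                    {"text": "Describe this image in detail."},
                ],
            }
        ]
    }
    return json.dumps(body)


encoded = encode_image(b"\x89PNG fake image bytes")  # stand-in for a real file
request_body = build_caption_request(encoded, image_format="png")
```

In the notebook, the resulting JSON string would be passed as the body of a Bedrock model invocation, and the returned description stored alongside the image for later indexing.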
For this example, we use the following input image.
We receive the following generated image caption as output: “The image features a brown handbag with white floral patterns, a straw hat with a blue ribbon, and a bottle of perfume. The handbag is placed on a surface, and the straw hat is placed next to it. The handbag has a strap and a chain attached to it, and the straw hat has a blue ribbon tied around it. The perfume bottle is placed next to the handbag.”
Image ingestion (notebook _01)
In _01_oss_ingestion.ipynb, you use Amazon Bedrock (with Amazon Titan Embeddings to generate embeddings), Amazon S3, OpenSearch Serverless (for vector storage and search), and SageMaker AI (for notebook hosting) to perform the following actions:
- Process and resize images
- Generate base64 encodings
- Store data in Amazon S3
- Generate image descriptions using Amazon Nova
- Create a visualization of the results
This notebook illustrates the following capabilities:
- Vector database management:
- Index creation and configuration
- Bulk data ingestion
- Efficient vector storage
- Embedding generation:
- Multi-modal embedding creation
- Dimension optimization
- Batch processing support
- Semantic search capabilities:
- k-NN search implementation
- Query vector generation
- Result visualization
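The k-NN retrieval boils down to a small query body. In the sketch below, the field name `image_embedding` and the three-dimensional example vector are illustrative assumptions; your index mapping defines the real field name, and a real query vector would come from the Amazon Titan Embeddings model.

```python
def build_knn_query(query_vector, k=1, vector_field="image_embedding"):
    """Build an OpenSearch k-NN search body that returns the k nearest
    documents to the query embedding. The field name is an assumption."""
    return {
        "size": k,
        "query": {"knn": {vector_field: {"vector": query_vector, "k": k}}},
    }


# Toy 3-dimensional vector; Titan embeddings are much higher-dimensional
query_body = build_knn_query([0.1, 0.2, 0.3], k=1)
```

This body is what the notebook would pass to the OpenSearch client’s search call against the vector collection index to retrieve the most relevant image document.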
For our input, we use the query “Building” and receive the following image as the result.
The image has the associated caption as output: “The image depicts a modern architectural scene featuring several high-rise buildings with glass facades. The buildings are constructed with a combination of glass and steel, giving them a sleek and contemporary appearance. The glass panels reflect the surrounding environment, including the sky and other buildings, creating a dynamic interplay of light and reflections. The sky above is partly cloudy, with patches of blue visible, suggesting a clear day with some cloud cover. The buildings are tall and slender, with vertical lines emphasized by the structure of the glass panels and steel framework. The reflections on the glass surfaces show the surrounding buildings and the sky, adding depth to the image. The overall impression is one of modernity, efficiency, and urban sophistication.”
Video generation from text only (notebook _02)
In _02_video_gen_text_only.ipynb, you use Amazon Bedrock (to access Amazon Nova Reel) and SageMaker AI (for notebook hosting) to perform the following actions:
- Construct the request payload for video generation with text as the prompt
- Initiate an asynchronous job using Amazon Bedrock
- Track progress and wait until completion
- Retrieve the generated video from Amazon S3 and render it in the notebook
This notebook illustrates the following capabilities:
- Automated processing of video generation with text as input
- Video generation at scale with observability
We use the following input prompt: “Closeup of a large seashell in the sand, gentle waves flow around the shell. Camera zoom in.” We receive the following generated video as output.
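The text-only payload can be sketched as below. The field names follow the Nova Reel TEXT_VIDEO request schema as we understand it; verify them against the current Amazon Nova Reel documentation. Note that the job is started through the Amazon Bedrock asynchronous invoke API, not a synchronous call.

```python
def build_text_to_video_input(prompt, duration_seconds=6, fps=24,
                              dimension="1280x720", seed=0):
    """Model input for a Nova Reel text-to-video job (schema assumed)."""
    return {
        "taskType": "TEXT_VIDEO",
        "textToVideoParams": {"text": prompt},
        "videoGenerationConfig": {
            "durationSeconds": duration_seconds,
            "fps": fps,
            "dimension": dimension,
            "seed": seed,
        },
    }


model_input = build_text_to_video_input(
    "Closeup of a large seashell in the sand, gentle waves flow around "
    "the shell. Camera zoom in."
)
# The notebook would then pass model_input to
# bedrock_runtime.start_async_invoke(...) along with an S3 output location.
```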
Video generation from text and image prompts (notebook _03)
In _03_video_gen_text_image.ipynb, you use Amazon Bedrock (to access Amazon Nova Reel) and SageMaker AI (for notebook hosting) to perform the following actions:
- Construct the request payload for video generation with text and image as the prompt
- Initiate an asynchronous job using Amazon Bedrock
- Track progress and wait until completion
- Retrieve the generated video from Amazon S3 and render it in the notebook
This notebook illustrates the following capabilities:
- Automated processing of video generation with text and image as input
- Video generation at scale with observability
We use the prompt “camera tilt up from the road to the sky” and the following image as input.
We receive the following generated video as output.
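Conditioning on an image changes the payload only slightly: the base64-encoded reference frame is added alongside the text. As with the text-only sketch, the exact field names are our assumption of the Nova Reel schema; check the current documentation before relying on them.

```python
def build_image_to_video_input(prompt, image_b64, image_format="png"):
    """Model input for a Nova Reel job conditioned on text plus a
    reference image (schema assumed)."""
    return {
        "taskType": "TEXT_VIDEO",
        "textToVideoParams": {
            "text": prompt,
            "images": [
                {"format": image_format, "source": {"bytes": image_b64}}
            ],
        },
        "videoGenerationConfig": {
            "durationSeconds": 6,
            "fps": 24,
            "dimension": "1280x720",
        },
    }


model_input = build_image_to_video_input(
    "camera tilt up from the road to the sky",
    image_b64="<base64-encoded image>",  # placeholder, not real image data
)
```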
Video generation from multi-modal inputs (notebook _04)
In _04_video_gen_multi.ipynb, you use Amazon Bedrock (to access Amazon Nova Reel) and SageMaker AI (for notebook hosting) to perform the following actions:
- Generate an embedding for the input prompt and search the OpenSearch Serverless vector collection index
- Combine text and retrieved images to generate videos
This notebook illustrates the following capabilities:
- The VRAG process
- Video generation at scale with observability
We use the following prompt as input: “A clear cinematic shot of red sneakers placed beneath falling snow, while the scenery remains silent and still.” We receive the following video as output.
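Because Nova Reel jobs are asynchronous, the notebooks poll for completion before downloading the result. The helper below separates the polling logic from the AWS call so it can be shown (and exercised) without an account; in practice, `get_status` would wrap something like `bedrock_runtime.get_async_invoke(invocationArn=...)["status"]`.

```python
import time


def wait_for_video_job(get_status, poll_seconds=15, max_polls=120):
    """Poll an asynchronous job until it leaves the InProgress state.

    get_status: callable returning the current job status string.
    Returns the terminal status, for example Completed or Failed.
    """
    for _ in range(max_polls):
        status = get_status()
        if status != "InProgress":
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("video generation job did not finish in time")


# Simulated status sequence standing in for real get_async_invoke calls
statuses = iter(["InProgress", "InProgress", "Completed"])
final_status = wait_for_video_job(lambda: next(statuses), poll_seconds=0)
print(final_status)
```

Once the terminal status is Completed, the notebook retrieves the MP4 from the configured S3 output location and renders it inline.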
Update images with in-painting (notebook _05)
In _05_inpainting.ipynb, you use Amazon Bedrock (to access Amazon Nova Canvas) and SageMaker AI (for notebook hosting) to perform the following actions:
- Read a base64 image
- Generate images with in-painting
This notebook illustrates the following capabilities:
- Select and replace areas of an image based on surrounding context and prompts
- Remove unwanted objects and repair portions of images, or creatively modify specific areas of an image
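An in-painting request can be sketched as follows. This version uses a natural language mask prompt (notebook _06 instead supplies an explicit mask image), and the field names follow the Amazon Nova Canvas INPAINTING schema as we understand it; confirm them against the current documentation. The mask prompt and text values below are purely illustrative.

```python
def build_inpainting_input(image_b64, mask_prompt, text,
                           number_of_images=1, height=512, width=512):
    """Request body for a Nova Canvas in-painting task (schema assumed).

    image_b64: base64-encoded source image.
    mask_prompt: natural language description of the region to replace.
    text: what to generate inside the masked region.
    """
    return {
        "taskType": "INPAINTING",
        "inPaintingParams": {
            "image": image_b64,
            "maskPrompt": mask_prompt,
            "text": text,
        },
        "imageGenerationConfig": {
            "numberOfImages": number_of_images,
            "height": height,
            "width": width,
        },
    }


inpaint_input = build_inpainting_input(
    "<base64-encoded image>",  # placeholder, not real image data
    mask_prompt="the empty stretch of sand",
    text="a colorful kayak resting on the sand",
)
```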
Generate videos with enhanced images (notebook _06)
In _06_video_gen_inpainting.ipynb, you use Amazon Bedrock (to access Amazon Nova Reel) and SageMaker AI (for notebook hosting) to perform the following actions:
- Search for relevant images in OpenSearch Service using natural language queries
- Use explicit image masks to define areas for in-painting
- Generate videos using the enhanced images
This notebook illustrates the following capabilities:
- Use in-painting to generate an image
- Generate a video using the enhanced image
The following screenshot shows the image and mask we use for in-painting.
The following screenshot shows the generated images (few-shot) we receive as output.
From the generated image, we receive the following video as output.
Best practices
An efficient AI video generation process requires seamless integration of data management, search optimization, and compliance measures. The process must focus on high-quality input data while maintaining optimized OpenSearch queries and Amazon Bedrock integration for reliable processing. Proper Amazon S3 management and enhanced user experience features facilitate smooth operation, and strict adherence to EU AI Act guidelines maintains regulatory compliance.
For optimal implementation in production environments, consider these key factors:
- Data quality – The quality of the generated video is heavily dependent on the quality and relevance of the image database used in RAG
- Image captioning – For optimal results, consider incorporating image captions or metadata to provide additional context for the RAG solution
- Video editing – Although RAG can provide the core visual elements, additional video editing techniques might be required to create a polished final product
Clean up
To avoid incurring future charges, clean up the resources created in this post.
- Empty the S3 bucket created by the CloudFormation stack. On the Amazon S3 console, select the bucket, choose Empty, and confirm the deletion.
- On the AWS CloudFormation console, select the vrag-blogpost stack, choose Delete, and confirm. This removes all provisioned resources, including the SageMaker notebook instance, OpenSearch Serverless collection, and IAM roles.
Conclusion
VRAG represents a significant advancement in AI-powered video creation, seamlessly integrating existing image databases with user prompts to produce contextually relevant video content. This solution demonstrates powerful applications across education, marketing, entertainment, and beyond. As video generation technology continues to evolve, VRAG provides a robust foundation for creating engaging, context-aware video content at scale. By following these best practices and maintaining a focus on data quality, organizations can use this technology to transform their video content creation processes while producing consistent, high-quality outputs. Try out VRAG for yourself with the notebooks provided in this post, and share your feedback in the comments section.
About the Authors
Nick Biso is a Machine Learning Engineer at AWS Professional Services. He solves complex organizational and technical challenges using data science and engineering. In addition, he builds and deploys AI/ML models on the AWS Cloud. His passion extends to his proclivity for travel and diverse cultural experiences.
Madhunika Mikkili is a Data and Machine Learning Engineer at AWS. She is passionate about helping customers achieve their goals using data analytics and machine learning.
Shuai Cao is a Senior Applied Science Manager focused on generative AI at Amazon Web Services. He leads teams of data scientists, machine learning engineers, and application architects to deliver AI/ML solutions for customers. Outside of work, he enjoys composing and arranging music.
Seif Elharaki is a Senior Cloud Application Architect who specializes in building AI/ML applications for the manufacturing vertical. He combines his expertise in cloud technologies with a deep understanding of industrial processes to create innovative solutions. Outside of work, Seif is an enthusiastic hobbyist game developer, enjoying coding fun games using tools like Unreal Engine and Unity.
Vishwa Gupta is a Principal Consultant with AWS Professional Services. He helps customers implement generative AI, machine learning, and analytics solutions. Outside of work, he enjoys spending time with family, traveling, and trying new foods.
Raechel Frick is a Sr Product Marketing Manager for Amazon Nova. With over 20 years of experience in the tech industry, she brings a customer-first approach and growth mindset to building integrated marketing programs. Based in the greater Seattle area, Raechel balances her professional life with being a soccer mom and cheerleading coach.
Maria Masood specializes in agentic AI, reinforcement fine-tuning, and multi-turn agent training. She has expertise in machine learning, spanning large language model customization, reward modeling, and building end-to-end training pipelines for AI agents. A sustainability enthusiast at heart, Maria enjoys gardening and making lattes.

