#video-generation · Cards List

@__JohnNguyen__: Today we released the code for our CVPR 2026 paper, Flowception. Flowception bridges fully bidirectional sequence model…

X AI KOLs Following · 3h ago

Meta's FAIR team released the code for Flowception, a CVPR 2026 paper presenting a non-autoregressive video generation framework that interleaves frame insertion with continuous denoising to reduce error accumulation and computational cost.


@0xMulight: Combining Codex, HyperFrames, and Remotion to create a ~75-second Chinese educational video about UFOs. Based on a public repository on GitHub, it explains how this repository organizes declassified UAP/UFO documents into readable reports. This time, I divided the tasks as follows: HyperFrames: Responsible for actual…

X AI KOLs Timeline · 11h ago

The author demonstrates a workflow that combines Codex, HyperFrames, and Remotion to produce a Chinese-language educational video about declassified UFO files. The thread also introduces a Claude Code skills repository on GitHub that automates organizing and analyzing publicly declassified UAP/UFO government documents.


@yoheinakajima: the new http://di.gg looks great!

X AI KOLs Following · 18h ago

This is an aggregation of trending AI news from Digg, covering topics such as Neuralink brain implants, NVIDIA's performance fixes for Claude Code, Anthropic's policy stances, and the release of Flowception video modeling code.


@svpino: Huge leap in video generation! Look at the faces here. For the first time, we have a tool that doesn't change character…

X AI KOLs Following · 2d ago

BACH is introduced as a significant advancement in video generation, achieving unprecedented character consistency across scenes without face morphing or drift.


Think, then Score: Decoupled Reasoning and Scoring for Video Reward Modeling

Hugging Face Daily Papers · 2d ago

This paper introduces DeScore, a video reward model that decouples reasoning and scoring processes to improve training efficiency and generalization. It addresses the limitations of existing discriminative and generative reward models by using a 'think-then-score' paradigm with multimodal large language models.


Fluent Frame

Product Hunt · 3d ago

Fluent Frame is a new tool that allows users to ship polished product videos as quickly as they deploy software features.


Stream-T1: Test-Time Scaling for Streaming Video Generation

Hugging Face Daily Papers · 3d ago

Stream-T1 is a proposed framework for test-time scaling in streaming video generation, improving temporal consistency and quality through mechanisms like noise propagation and reward pruning. The paper addresses the high computational costs of existing diffusion-based methods by leveraging chunk-level synthesis.
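The chunk-level synthesis with reward pruning can be sketched generically as a best-of-N loop: generate several candidate chunks, score each against the preceding chunk, and keep only the best. This is an illustration of the general pattern, not Stream-T1's actual algorithm; `generate_chunk` and `reward` are toy stand-ins for a diffusion sampler and a learned reward model.

```python
import numpy as np

def generate_chunk(prev_chunk, seed):
    """Toy stand-in for a diffusion sampler: produce the next 4-frame chunk."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=0.1, size=prev_chunk.shape)
    return prev_chunk + noise  # drifts away from the previous chunk

def reward(chunk, prev_chunk):
    """Toy reward: prefer candidates that stay temporally close to the last chunk."""
    return -float(np.mean((chunk - prev_chunk) ** 2))

def stream_best_of_n(num_chunks=8, n_candidates=4):
    video = [np.zeros((4, 8, 8))]  # start from a blank 4-frame chunk
    for t in range(num_chunks):
        candidates = [generate_chunk(video[-1], seed=t * 100 + i)
                      for i in range(n_candidates)]
        # "reward pruning": keep only the highest-scoring candidate per chunk
        best = max(candidates, key=lambda c: reward(c, video[-1]))
        video.append(best)
    return np.concatenate(video, axis=0)

frames = stream_best_of_n()
print(frames.shape)  # (36, 8, 8): initial chunk + 8 generated 4-frame chunks
```

In a real streaming generator the reward would come from a learned model and pruning would happen inside the sampling loop, but the structure (per-chunk candidates, score, keep one) is the same.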


Stream-R1: Reliability-Perplexity Aware Reward Distillation for Streaming Video Generation

Hugging Face Daily Papers · 4d ago

Stream-R1 introduces a reliability-perplexity aware reward distillation framework for streaming video generation that adaptively weights supervision to improve visual and motion quality without additional computational overhead.


SulphurAI/Sulphur-2-base

Hugging Face Models Trending · 6d ago

Sulphur-2-base is an uncensored video generation model based on LTX 2.3, supporting native text-to-video and image-to-video workflows.


UniVidX: A Unified Multimodal Framework for Versatile Video Generation via Diffusion Priors

Papers with Code Trending · 2026-05-01

The article covers the UniVidX paper, which introduces a unified multimodal framework for video generation built on diffusion priors, and examines its cross-modal coherence mechanisms.


@aiDotEngineer: Building Generative Image & Video models at Scale https://youtube.com/watch?v=xOP1PM8fwnk… A lot of interest in image g…

X AI KOLs Timeline · 2026-04-21

YouTube talk by @sedielem offering a concise state-of-the-art overview of scaling generative image and video models, covering modeling, architecture, distillation, and control.


@AlchainHust: With Huashu Design, you can create an 80-point promo video for your product in 30 minutes using any Agent—here’s one I made for Kimi K2.6

X AI KOLs Timeline · 2026-04-21

Huashu Design launches a tool that lets users create an 80-point promo video in 30 minutes with any AI agent; the demo showcases a spot for Kimi K2.6.


CityRAG: Stepping Into a City via Spatially-Grounded Video Generation

Hugging Face Daily Papers · 2026-04-21

CityRAG introduces a video generative model that produces long, physically grounded, 3D-consistent videos of real-world cities using geo-registered data, enabling realistic navigation and simulation for robotics and autonomous driving.


OSCBench: Benchmarking Object State Change in Text-to-Video Generation

arXiv cs.CL · 2026-04-20

OSCBench is a new benchmark designed to evaluate text-to-video generation models' ability to accurately represent object state changes (transformations caused by actions like peeling or slicing). The paper reveals that current T2V models struggle with temporally consistent state changes, especially in novel and compositional scenarios, identifying this as a key bottleneck in video generation.


Speculative Decoding for Autoregressive Video Generation

Hugging Face Daily Papers · 2026-04-19

SDVG adapts speculative decoding to autoregressive video diffusion, using an image-quality router to achieve up to 2.09× speed-up with 95.7% quality retention on MovieGenVideoBench.
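The classic speculative-decoding pattern that SDVG adapts works by letting a cheap draft model propose several steps ahead and an expensive target model verify them, accepting a run of proposals for the cost of one verification. A minimal token-level skeleton of that pattern follows; the drafter and the acceptance rule are toy stand-ins, and the paper's actual diffusion-specific machinery (including the image-quality router) is not reproduced here.

```python
def draft_model(prefix, k):
    """Cheap drafter: propose k tokens (here, a trivial 'count up' heuristic)."""
    out = []
    last = prefix[-1] if prefix else 0
    for _ in range(k):
        last = (last + 1) % 10
        out.append(last)
    return out

def target_accepts(prefix, token):
    """Toy stand-in for the expensive verifier: accept tokens matching parity."""
    return token % 2 == len(prefix) % 2

def speculative_decode(prompt, steps=6, k=4):
    seq = list(prompt)
    for _ in range(steps):
        proposals = draft_model(seq, k)
        for tok in proposals:
            if target_accepts(seq, tok):
                seq.append(tok)  # accepted draft token: speed-up for free
            else:
                # rejection: emit one corrected "target model" token, re-draft
                seq.append((tok + 1) % 10)
                break
    return seq

print(speculative_decode([3]))
```

The speed-up comes from the accepted runs: each verifier call can commit up to k tokens instead of one, which is the same economics SDVG reports for video frames.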


Motif-Video 2B: Technical Report

Hugging Face Daily Papers · 2026-04-14

Motif-Video 2B is a 2B parameter text-to-video generation model that achieves 83.76% on VBench, surpassing Wan2.1 14B while using 7x fewer parameters and trained on fewer than 10M clips with less than 100,000 H200 GPU hours. The model uses a specialized architecture with shared cross-attention and a three-part backbone to separate prompt alignment, temporal consistency, and detail refinement.


HDR Video Generation via Latent Alignment with Logarithmic Encoding

Hugging Face Daily Papers · 2026-04-13

This paper presents a method for HDR video generation by leveraging pretrained generative models through logarithmic encoding alignment and camera-mimicking degradation training, enabling effective HDR synthesis without architectural redesign. The approach demonstrates that HDR generation can be achieved simply by adapting existing models to a representation naturally aligned with their learned priors.
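The core trick of logarithmic encoding is to compress linear HDR radiance into the bounded [0, 1] range that pretrained generative models expect, invertibly. A minimal sketch of such a transfer-function pair is below; the `mu` constant, the normalization, and the peak value are illustrative choices, not the paper's exact transfer function.

```python
import numpy as np

def log_encode(hdr, max_val=4000.0, mu=500.0):
    """Map linear HDR radiance [0, max_val] into [0, 1] with a log curve."""
    x = np.clip(hdr, 0.0, max_val) / max_val
    return np.log1p(mu * x) / np.log1p(mu)

def log_decode(encoded, max_val=4000.0, mu=500.0):
    """Invert the encoding back to linear radiance."""
    x = np.expm1(encoded * np.log1p(mu)) / mu
    return x * max_val

radiance = np.array([0.0, 1.0, 100.0, 4000.0])  # nits, spanning SDR to HDR peak
encoded = log_encode(radiance)
recovered = log_decode(encoded)
print(np.allclose(recovered, radiance))  # round-trip is exact up to float error
```

Because the encoded values look statistically like ordinary SDR frames, a pretrained model can be fine-tuned on them with no architectural change, which is the alignment idea the paper builds on.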


LiconStudio/Ltx2.3-VBVR-lora-I2V

Hugging Face Models Trending · 2026-04-08

LiconStudio releases a LoRA adapter for LTX-2.3 fine-tuned on the VBVR dataset to enhance video generation with improved prompt understanding, motion dynamics, and temporal consistency for complex video reasoning tasks.


Create, edit and share videos at no cost in Google Vids

Google AI Blog · 2026-04-02

Google Vids introduces free high-quality video generation using Veo 3.1 for all users, alongside new custom music creation via Lyria 3 and AI avatar features for premium subscribers.


Build with Veo 3.1 Lite, our most cost-effective video generation model

Google AI Blog · 2026-03-31

Google releases Veo 3.1 Lite, a cost-effective video generation model available on the Gemini API with 50% lower cost than Veo 3.1 Fast while maintaining the same speed. The model supports text-to-video and image-to-video generation with flexible resolutions and aspect ratios.
