generative-models

Tag · Cards List

GCCM: Enhancing Generative Graph Prediction via Contrastive Consistency Model

arXiv cs.AI · 2d ago

This paper introduces GCCM, a graph contrastive consistency model that improves generative graph prediction by mitigating shortcut solutions in consistency training through negative pairs and feature perturbation.


Energy Generative Modeling: A Lyapunov-based Energy Matching Perspective

arXiv cs.LG · 2d ago

This paper proposes a unified framework for energy-based generative models by casting density transport as a nonlinear control problem with KL divergence as a Lyapunov function. It derives finite-step stopping criteria and demonstrates how nonlinear control theory tools can be applied to static scalar energy models.
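As a hedged sketch of the control-theoretic picture (notation is mine, not necessarily the paper's): treat the transported density p_t as the state and the target density p* as the equilibrium; the KL divergence then plays the role of a Lyapunov function:

```latex
% KL as a candidate Lyapunov function V
V(p_t) \;=\; \mathrm{KL}\!\left(p_t \,\|\, p^\ast\right) \;\ge\; 0,
\qquad V(p_t) = 0 \iff p_t = p^\ast .

% For Langevin-type (gradient-flow) transport, the standard identity
\frac{d}{dt}\, V(p_t)
\;=\; -\int p_t(x)\,\Bigl\|\nabla \log \tfrac{p_t(x)}{p^\ast(x)}\Bigr\|^2 \, dx
\;\le\; 0
```

Since V decays monotonically under such dynamics, a finite-step stopping criterion of the kind the paper derives can fire once V(p_t) drops below a tolerance.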


MidSteer: Optimal Affine Framework for Steering Generative Models

arXiv cs.LG · 2d ago

Introduces MidSteer, a theoretical framework for concept steering in generative models. It bridges the gap between empirical success and theoretical understanding by deriving optimal affine transformations for steering, erasing, and switching concepts in both LLMs and vision diffusion models.


Generative Quantum-inspired Kolmogorov-Arnold Eigensolver

Hugging Face Daily Papers · 4d ago

This paper introduces the Generative Quantum-inspired Kolmogorov-Arnold Eigensolver (GQKAE), a parameter-efficient architecture that replaces traditional neural components with Kolmogorov-Arnold modules to significantly reduce memory usage and improve convergence in quantum chemistry simulations.


@simpreetkaur_19: Research papers you must read for AI Engineer interviews: 1. Attention is all you need (Transformers) 2. LoRA (Low rank…

X AI KOLs Timeline · 2026-04-22

A curated list of foundational AI papers recommended for interview prep, covering transformers, efficient fine-tuning, vision models, and generative networks.


@aiDotEngineer: Building Generative Image & Video models at Scale https://youtube.com/watch?v=xOP1PM8fwnk… A lot of interest in image g…

X AI KOLs Timeline · 2026-04-21

A YouTube talk by @sedielem offering a concise state-of-the-art overview of scaling generative image and video models, covering modeling, architecture, distillation, and control.


Beyond Prompts: Unconditional 3D Inversion for Out-of-Distribution Shapes

Hugging Face Daily Papers · 2026-04-16

This paper identifies and addresses 'latent sink traps', regions where text-to-3D generative models become insensitive to text prompts, and proposes a framework that decouples geometric representation from linguistic sensitivity to enable robust text-based editing of out-of-distribution 3D shapes.


LangFlow: Continuous Diffusion Rivals Discrete in Language Modeling

Hugging Face Daily Papers · 2026-04-15

LangFlow presents the first continuous diffusion language model that rivals discrete diffusion approaches, challenging the long-held belief that continuous diffusion is inferior for language modeling. The work introduces key ingredients like optimal Gumbel-based noise scheduling and demonstrates competitive perplexity and transfer learning performance compared to discrete diffusion baselines.


HDR Video Generation via Latent Alignment with Logarithmic Encoding

Hugging Face Daily Papers · 2026-04-13

This paper presents a method for HDR video generation by leveraging pretrained generative models through logarithmic encoding alignment and camera-mimicking degradation training, enabling effective HDR synthesis without architectural redesign. The approach demonstrates that HDR generation can be achieved simply by adapting existing models to a representation naturally aligned with their learned priors.
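The summary does not give the paper's exact transfer function, so purely as an illustration of what "logarithmic encoding" means here, the following mu-law style curve (parameter `mu` is my choice, not the paper's) compresses linear HDR radiance into the flatter range an SDR-trained generator can model:

```python
import numpy as np

def log_encode(hdr, mu=5000.0):
    """Mu-law style logarithmic encoding: map linear HDR radiance in [0, 1]
    to a perceptually flatter signal in [0, 1]."""
    return np.log1p(mu * hdr) / np.log1p(mu)

def log_decode(enc, mu=5000.0):
    """Exact inverse of log_encode, recovering linear radiance."""
    return np.expm1(enc * np.log1p(mu)) / mu

hdr = np.linspace(0.0, 1.0, 5)   # toy linear radiance ramp
enc = log_encode(hdr)            # what the generative model would see
rec = log_decode(enc)            # round-trips back to linear HDR
```

The point of such a curve is that it needs no architectural change: the pretrained model operates on `enc` exactly as it would on SDR frames.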


Improved Techniques for Training Consistency Models

OpenAI Blog · 2024-06-20

OpenAI presents improved techniques for training consistency models that enable high-quality single-step image generation without distillation, achieving significant FID improvements on CIFAR-10 and ImageNet 64×64 through novel loss functions and training strategies.


Consistency Models

OpenAI Blog · 2024-06-20

OpenAI introduces Consistency Models, a new family of generative models that enable fast one-step image generation by directly mapping noise to data, while supporting multi-step sampling and zero-shot editing tasks like inpainting and super-resolution. The approach achieves state-of-the-art FID scores on CIFAR-10 and ImageNet 64×64 for one-step generation.
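The one-step and multi-step sampling loops described above can be sketched with a toy stand-in for the trained consistency function (the real f is a learned network; here the data distribution is collapsed to the origin so f has a closed form, which makes this an illustration rather than OpenAI's implementation):

```python
import numpy as np

def consistency_fn(x, t, eps=0.002):
    """Toy consistency function: maps a point at noise level t straight back
    to the data end (t = eps). Satisfies the boundary condition f(x, eps) = x."""
    return x * (eps / t)

def multistep_sample(shape, timesteps, rng, eps=0.002):
    """One model call per step: start from pure noise, denoise with f,
    then optionally re-noise to a smaller t and denoise again."""
    t_max = timesteps[0]
    x = rng.standard_normal(shape) * t_max
    x = consistency_fn(x, t_max, eps)          # one-step generation
    for t in timesteps[1:]:                    # optional refinement steps
        x = x + np.sqrt(t**2 - eps**2) * rng.standard_normal(shape)
        x = consistency_fn(x, t, eps)
    return x

rng = np.random.default_rng(0)
sample = multistep_sample((4,), timesteps=[80.0, 20.0, 5.0], rng=rng)
```

The key structural point survives the toy: sampling cost is one network evaluation per step, with quality improving as refinement steps are added.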


Video generation models as world simulators

OpenAI Blog · 2024-02-15

OpenAI's technical report on Sora describes a video generation model that unifies diverse visual data through visual patches, enabling large-scale training of generative models capable of producing high-definition videos up to one minute long across variable durations, aspect ratios, and resolutions.


FFJORD: Free-form continuous dynamics for scalable reversible generative models

OpenAI Blog · 2018-10-02

FFJORD introduces a scalable reversible generative model using continuous dynamics and Hutchinson's trace estimator to enable unbiased log-density estimation without architectural constraints. The method achieves state-of-the-art results on density estimation and image generation while maintaining efficient sampling.
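The Hutchinson trace estimator the summary refers to fits in a few lines: for random probe vectors v with E[vv^T] = I, E[v^T A v] = tr(A), so the trace of the Jacobian can be estimated from vector-Jacobian products alone. A generic NumPy sketch (not FFJORD's code), using a fixed matrix in place of a Jacobian:

```python
import numpy as np

def hutchinson_trace(matvec, dim, n_samples, rng):
    """Unbiased trace estimate tr(A) ~ mean of v^T (A v) over Rademacher
    probes v. FFJORD applies this to A = df/dz, so each probe costs only
    one vector-Jacobian product instead of a full Jacobian."""
    est = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)
        est += v @ matvec(v)
    return est / n_samples

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # true trace = 5
estimate = hutchinson_trace(lambda v: A @ v, dim=2, n_samples=2000, rng=rng)
```

This is what removes the architectural constraints: exact trace computation needs the full Jacobian, while the stochastic estimate works for any free-form dynamics function.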


Glow: Better reversible generative models

OpenAI Blog · 2018-07-09

OpenAI introduces Glow, an improved reversible generative model that simplifies the RealNVP architecture by replacing fixed permutations with learned 1x1 convolutions, enabling better information flow and significant performance improvements.
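The learned 1x1 convolution is, concretely, one invertible c-by-c matrix applied at every spatial position, whose log-determinant contribution is shared across all h*w positions. A minimal NumPy sketch of that layer (illustrative, not the Glow codebase):

```python
import numpy as np

rng = np.random.default_rng(0)
c, h, w = 3, 4, 4
# Glow initializes W as a random rotation so it starts invertible.
W = np.linalg.qr(rng.standard_normal((c, c)))[0]

def conv1x1_forward(x, W):
    """Apply the same c-by-c matrix W across every pixel of a (c, h, w) map."""
    y = np.einsum('ij,jhw->ihw', W, x)
    # Change-of-variables term: h * w * log|det W|.
    logdet = x.shape[1] * x.shape[2] * np.log(np.abs(np.linalg.det(W)))
    return y, logdet

def conv1x1_inverse(y, W):
    """Exact inverse: apply W^{-1} at every pixel."""
    return np.einsum('ij,jhw->ihw', np.linalg.inv(W), y)

x = rng.standard_normal((c, h, w))
y, logdet = conv1x1_forward(x, W)
x_rec = conv1x1_inverse(y, W)
```

Because W is learned rather than a fixed permutation (as in RealNVP), the layer can mix channels arbitrarily while keeping the log-determinant cheap to compute.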


Improving GANs using optimal transport

OpenAI Blog · 2018-03-15

OT-GAN introduces a novel GAN variant using optimal transport combined with energy distance in an adversarially learned feature space to improve training stability and image generation quality. The method demonstrates state-of-the-art results on benchmark problems with stable training using large mini-batches.
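The energy distance at the core of OT-GAN has a simple sample-based form: 2 E||x - y|| - E||x - x'|| - E||y - y'||. A generic sketch on raw coordinates (OT-GAN computes it in an adversarially learned feature space, which this toy omits):

```python
import numpy as np

def energy_distance(x, y):
    """Sample-based energy distance between two batches of points:
    2*E||x - y|| - E||x - x'|| - E||y - y'||. Zero (up to sampling noise)
    iff the two distributions match."""
    def mean_pdist(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    return 2 * mean_pdist(x, y) - mean_pdist(x, x) - mean_pdist(y, y)

rng = np.random.default_rng(0)
same = energy_distance(rng.standard_normal((256, 2)),
                       rng.standard_normal((256, 2)))
shifted = energy_distance(rng.standard_normal((256, 2)),
                          rng.standard_normal((256, 2)) + 3.0)
```

The batch-level form is why the method benefits from large mini-batches: every pairwise distance within and across batches contributes to the estimate.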


Domain randomization and generative models for robotic grasping

OpenAI Blog · 2017-10-17

Researchers explore a data generation pipeline using domain randomization and procedurally generated objects to train a deep neural network for robotic grasp planning. The proposed autoregressive model achieves >90% success on unseen objects in simulation and 80% in the real world, despite being trained only on random simulated objects.


Prediction and control with temporal segment models

OpenAI Blog · 2017-03-12

OpenAI introduces a method for learning complex nonlinear system dynamics using deep generative models over temporal segments, enabling stable long-horizon predictions and differentiable trajectory optimization for model-based control.


PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications

OpenAI Blog · 2017-01-19

PixelCNN++ introduces several architectural improvements to PixelCNN including discretized logistic mixture likelihood, downsampling, and shortcut connections, achieving state-of-the-art log likelihood results on CIFAR-10.
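The discretized logistic mixture likelihood assigns each of the 256 pixel values the logistic CDF mass falling in its half-bin-wide interval. A simplified sketch (the edge bins at 0 and 255, which PixelCNN++ handles with open tails, are omitted; the function and parameter names are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_mixture_logprob(x, pi, mu, log_s):
    """log p(x) for a pixel x rescaled to [-1, 1] (256 levels): each mixture
    component contributes the logistic mass of the bin [x - 1/255, x + 1/255],
    weighted by its mixture probability pi."""
    bin_half = 1.0 / 255.0
    inv_s = np.exp(-log_s)                       # inverse scale per component
    cdf_plus = sigmoid((x + bin_half - mu) * inv_s)
    cdf_minus = sigmoid((x - bin_half - mu) * inv_s)
    prob = np.sum(pi * (cdf_plus - cdf_minus))   # mix the bin masses
    return np.log(prob)

# Toy 2-component mixture evaluated at pixel value x = 0.0
pi = np.array([0.6, 0.4])
mu = np.array([0.0, 0.5])
log_s = np.array([-3.0, -3.0])
lp = discretized_logistic_mixture_logprob(0.0, pi, mu, log_s)
```

Compared with a 256-way softmax per pixel, this parameterization needs far fewer outputs and builds in the ordinal structure of intensity values.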


On the quantitative analysis of decoder-based generative models

OpenAI Blog · 2016-11-14

This paper proposes using Annealed Importance Sampling to evaluate log-likelihoods for decoder-based generative models (VAEs, GANs, etc.), addressing the challenge of intractable likelihood estimation. The authors validate their method and provide evaluation code to analyze model performance, overfitting, and mode coverage.
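The AIS procedure the summary describes can be sketched on a toy 1-D problem where the true answer is known (this is a generic AIS sketch with geometric bridges and Metropolis transitions, not the authors' evaluation code; all settings are illustrative):

```python
import numpy as np

def ais_log_z(log_f0, log_f1, sample_f0, n_chains, betas, rng,
              mh_steps=5, step=0.5):
    """Annealed Importance Sampling estimate of log(Z1/Z0): anneal from the
    tractable f0 to the unnormalized target f1 through geometric bridges
    f_b = f0^(1-b) * f1^b, accumulating importance weights and applying
    Metropolis moves that leave each bridge invariant."""
    x = sample_f0(n_chains)
    logw = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += (b - b_prev) * (log_f1(x) - log_f0(x))
        log_fb = lambda z: (1 - b) * log_f0(z) + b * log_f1(z)
        for _ in range(mh_steps):
            prop = x + step * rng.standard_normal(n_chains)
            accept = np.log(rng.random(n_chains)) < log_fb(prop) - log_fb(x)
            x = np.where(accept, prop, x)
    # log-mean-exp of the chain weights gives the ratio estimate
    return np.logaddexp.reduce(logw) - np.log(n_chains)

rng = np.random.default_rng(0)
sigma = 2.0
log_f0 = lambda x: -0.5 * x**2              # N(0, 1), up to Z0 = sqrt(2*pi)
log_f1 = lambda x: -0.5 * x**2 / sigma**2   # target, Z1 = sigma * sqrt(2*pi)
est = ais_log_z(log_f0, log_f1, lambda n: rng.standard_normal(n),
                n_chains=500, betas=np.linspace(0.0, 1.0, 21), rng=rng)
# true log(Z1/Z0) = log(sigma) = log 2
```

In the paper's setting, f1 is the decoder-based model's unnormalized joint over latents and data, so the same machinery yields the log-likelihood estimates used to diagnose overfitting and mode coverage.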


Variational lossy autoencoder

OpenAI Blog · 2016-11-08

OpenAI researchers present a Variational Lossy Autoencoder (VLAE) that combines VAEs with neural autoregressive models (RNN, MADE, PixelRNN/CNN) to learn controllable global representations, achieving state-of-the-art results on MNIST, OMNIGLOT, and Caltech-101 Silhouettes density estimation tasks.
