Black Box Learning and Inference

NIPS Workshop, 12 December 2015, Montréal, Canada


Accepted papers

Research Abstracts

A model of familiar and unfamiliar 3D face recognition.
Kelsey R. Allen, Ilker Yildirim, and Joshua B. Tenenbaum.

Stochastic variational inference for Gaussian process latent variable models using back constraints.
Thang D. Bui and Richard E. Turner.

Black-Box Stochastic Variational Inference in Five Lines of Python.
David Duvenaud and Ryan P. Adams.

Black-box α-divergence Minimization.
José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Thang Bui, and Richard Turner.

Deep Kalman Filters. [arXiv]
Rahul G. Krishnan, Uri Shalit, and David Sontag.

Towards Automated Sequential Monte Carlo for Probabilistic Graphical Models.
Christian A. Naesseth, Fredrik Lindsten, and Thomas B. Schön.

Data-driven Sequential Monte Carlo in Probabilistic Programming.
Yura Perov, Tuan Anh Le, and Frank Wood.

Bayesian Optimization for Probabilistic Programs.
Tom Rainforth, Jan-Willem van de Meent, Michael A. Osborne, and Frank Wood.

Amortized inference through normalized nonnegative models.
Cyril J. Stark.

Smooth Arrows.
Zenna Tavares and Armando Solar-Lezama.

Variational Gaussian Process.
Dustin Tran, Rajesh Ranganath, and David Blei.

A Model Explanation System.
Ryan Turner.

Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning.
Jiajun Wu, Joseph J. Lim, William T. Freeman, Ilker Yildirim, and Joshua B. Tenenbaum.

Language and Systems Abstracts

Building blocks for exact and approximate inference.
Jacques Carette, Chung-chieh Shan, Praveen Narayanan, Wren Romano, and Robert Zinkov.

Practical Probabilistic Programming with Figaro.
Avi Pfeffer, Michael Howard, Brian Ruttenberg, Glenn Takata, Alison O’Connor, and Joe Gorman.

Swift: Compiled Inference for Probabilistic Programs.
Yi Wu, Lei Li, and Stuart Russell.

Stan Overview: Language and Inference.
Stan Development Team.