diglm

The diglm module implements a Deep Invertible Generalized Linear Model.

Diglm

Diglm: Deep Invertible Generalized Linear Model

class diglm.Diglm(bijector, glm, num_features, name='diglm', **kwargs)[source]

Deep Invertible Generalized Linear Model built on tensorflow_probability. This class implements the model described by Nalisnick et al. in Hybrid Models with Deep and Invertible Features (https://arxiv.org/abs/1902.02767). See the original article for a detailed discussion of the model, its pros and cons, and its possible applications. Inherits from tensorflow_probability.distributions.JointDistributionNamed. A minimal construction sketch follows the parameter list below.

Parameters
  • bijector (tensorflow_probability.bijectors.Bijector) – Bijector with learnable parameters for the invertible transformation.

  • glm (tensorflow_probability.glm.ExponentialFamily) – Generalized linear model.

  • num_features – Dimension of the feature space.

  • **kwargs – Other arguments for JointDistributionNamed.
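A minimal construction sketch, not taken from the original docs: the affine bijector, the Bernoulli GLM, and the feature dimension are illustrative choices, and the module is assumed to be importable as diglm.

```python
import tensorflow as tf
import tensorflow_probability as tfp

from diglm import Diglm

tfb = tfp.bijectors

NUM_FEATURES = 2

# A simple trainable affine bijector; a deeper flow such as tfb.RealNVP
# would be the usual choice in practice.
bijector = tfb.Chain([
    tfb.Shift(tf.Variable(tf.zeros(NUM_FEATURES))),
    tfb.Scale(tf.Variable(tf.ones(NUM_FEATURES))),
])

# Bernoulli GLM, i.e. logistic regression on the latent features.
glm = tfp.glm.Bernoulli()

model = Diglm(bijector, glm, NUM_FEATURES)
```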

property bijector

The bijector

cross_entropy(other, name='cross_entropy')

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

```none
H[P, Q] = E_p[-log q(X)]
        = -int_F p(x) log q(x) dr(x)
```

where F denotes the support of the random variable X ~ P.

Args:
  other: tfp.distributions.Distribution instance.
  name: Python str prepended to names of ops created by this function.

Returns:
  cross_entropy: self.dtype Tensor with shape [B1, …, Bn] representing n different calculations of (Shannon) cross entropy.
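This method is inherited from tensorflow_probability's Distribution base class. A quick illustration with two scalar Gaussians (illustrative distributions, not tied to Diglm):

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

p = tfd.Normal(loc=0.0, scale=1.0)
q = tfd.Normal(loc=1.0, scale=2.0)

# H[P, Q] equals H[P] + KL[P, Q]; both lines print the same value.
print(float(p.cross_entropy(q)))
print(float(p.entropy() + p.kl_divergence(q)))
```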

eta_from_features(features)[source]

Compute the predicted linear response by first transforming the features into latent space.

Parameters

features (tensorflow.Tensor) – Model features.

Returns

Predicted linear response.

Return type

tensorflow.Tensor
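A usage sketch, reusing the `model` and `NUM_FEATURES` from the construction example above; the batch size and feature values are arbitrary.

```python
import tensorflow as tf

features = tf.random.normal([32, NUM_FEATURES])

# Predicted linear response of the GLM, evaluated on the latent
# representation of the feature batch.
eta = model.eta_from_features(features)
```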

property glm

Generalized Linear Model

kl_divergence(other, name='kl_divergence')

Computes the Kullback–Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

```none
KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]
```

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.

Args:
  other: tfp.distributions.Distribution instance.
  name: Python str prepended to names of ops created by this function.

Returns:
  kl_divergence: self.dtype Tensor with shape [B1, …, Bn] representing n different calculations of the Kullback–Leibler divergence.
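As a sanity check on the definition above, the KL divergence can also be approximated by a Monte Carlo estimate of E_p[log p(X) - log q(X)] (illustrative Gaussians again):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

p = tfd.Normal(loc=0.0, scale=1.0)
q = tfd.Normal(loc=1.0, scale=2.0)

x = p.sample(100_000, seed=42)

# Monte Carlo estimate of E_p[log p(X) - log q(X)]; close to the exact value.
kl_mc = tf.reduce_mean(p.log_prob(x) - q.log_prob(x))
kl_exact = p.kl_divergence(q)
```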

latent_features(features)[source]

Compute latent variables from features.

Parameters

features (tensorflow.Tensor) – Model features.

Returns

Transformed features in latent space.

Return type

tensorflow.Tensor
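Usage sketch, continuing the construction example above; since the bijector is invertible, the latent representation is assumed to keep the dimensionality of the features.

```python
import tensorflow as tf

features = tf.random.normal([32, NUM_FEATURES])

# Latent representation of the batch; same shape as `features` because
# the transformation is bijective.
z = model.latent_features(features)
```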

property num_features

Number of features

weighted_log_prob(value, scaling_const=0.1)[source]

Weighted objective function as described in Nalisnick et al., in which the generative term of the log-probability is scaled by scaling_const (the paper's λ).

Parameters
  • value (dict(tensorflow.Tensor)) – Dictionary of (a batch of) features and labels.

  • scaling_const (float) – The scaling constant of the modified objective, defaults to 0.1

Returns

Weighted objective.

Return type

tensorflow.Tensor
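A hedged training sketch using this objective: the batch key names ('features', 'labels') are hypothetical and must match whatever weighted_log_prob expects, and the optimizer choice is arbitrary. It continues the construction example above.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def train_step(batch):
    """One gradient step on the negated weighted objective."""
    with tf.GradientTape() as tape:
        # The weighted objective is maximized, so its negation is the loss.
        loss = -tf.reduce_mean(model.weighted_log_prob(batch, scaling_const=0.1))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Hypothetical batch layout: a dict of features and binary labels.
features = tf.random.normal([32, NUM_FEATURES])
batch = {'features': features,
         'labels': tf.cast(features[:, 0] > 0, tf.float32)}
loss = train_step(batch)
```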