@classmethod
def _marginal_log_likelihood(
    cls,
    bias_var: float,
    noise_var: float,
    true_mark_var: float,
    true_mark_mean: float,
    assessor_ids: np.ndarray,
    script_ids: np.ndarray,
    assessor_marks: np.ndarray,
) -> float:
    # Remove the mean from the marks
    marks_normalized = assessor_marks - true_mark_mean
    # Calculate the covariance matrix of the observed marks
    K = cls._assessors_mark_covariance(
        script_ids=script_ids,
        assessor_ids=assessor_ids,
        true_mark_var=true_mark_var,
        bias_var=bias_var,
        noise_var=noise_var,
    )
    # Use GPy's positive-definite inverse.
    Kinv, L, Li, logdet = GPy.util.linalg.pdinv(K)
    # Calculate the marginal log-likelihood
    mah_dist = np.dot(np.dot(marks_normalized.T, Kinv), marks_normalized)
    mah_dist = mah_dist[0, 0]  # The result of the dot products is an array of shape [1, 1]
    ll = -0.5 * (logdet + mah_dist + marks_normalized.shape[0] * np.log(2 * np.pi))
    return ll
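For reference, the final line evaluates the log density of a multivariate Gaussian at the de-meaned marks $\tilde{\mathbf{y}} = \mathbf{y} - \mu\mathbf{1}$:

$$
\log p(\mathbf{y}) = -\frac{1}{2}\left(\log\det K + \tilde{\mathbf{y}}^\top K^{-1}\tilde{\mathbf{y}} + N\log 2\pi\right),
$$

where $N$ is the number of observed marks: `logdet` supplies the $\log\det K$ term, and `mah_dist` is the squared Mahalanobis distance $\tilde{\mathbf{y}}^\top K^{-1}\tilde{\mathbf{y}}$. Using `pdinv` rather than a plain `np.linalg.inv` gets the inverse, the Cholesky factor `L`, its inverse `Li`, and the log determinant in a single numerically stable call.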
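The covariance helper `_assessors_mark_covariance` isn't shown in this snippet. Here is a minimal sketch of what it could look like, under my own (unconfirmed) assumption that each observed mark decomposes as the script's true mark plus a per-assessor bias plus independent noise; the class name `MarkingModel` is hypothetical:

```python
import numpy as np


class MarkingModel:
    # Sketch only: this would sit in the same class as _marginal_log_likelihood.
    @classmethod
    def _assessors_mark_covariance(
        cls,
        script_ids: np.ndarray,
        assessor_ids: np.ndarray,
        true_mark_var: float,
        bias_var: float,
        noise_var: float,
    ) -> np.ndarray:
        # ASSUMED generative model: mark = true_mark(script) + bias(assessor) + noise,
        # with the three components independent. Two marks then covary by
        # true_mark_var when they grade the same script, by bias_var when they
        # come from the same assessor, and each mark gets noise_var extra
        # variance on the diagonal.
        script_ids = script_ids.ravel()
        assessor_ids = assessor_ids.ravel()
        same_script = (script_ids[:, None] == script_ids[None, :]).astype(float)
        same_assessor = (assessor_ids[:, None] == assessor_ids[None, :]).astype(float)
        return (
            true_mark_var * same_script
            + bias_var * same_assessor
            + noise_var * np.eye(len(script_ids))
        )
```

Under this assumed model, `K` is a sum of two block-constant terms plus a diagonal, so it is positive semi-definite whenever the three variances are non-negative, and `noise_var > 0` makes it strictly positive definite, as `pdinv` requires.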

I'm Bruno Mlodozeniec, a deep learning researcher.

I'm currently a PhD student at the University of Cambridge with David Krueger and Richard Turner. I'm broadly interested in topics ranging from generative modelling, symmetries, and probabilistic modelling to the fundamentals of deep learning: understanding, both theoretically and empirically, what makes these models work.

Previously, I have also worked on topics ranging from AI for science to novel approaches to automated hyperparameter optimisation and model selection.

You can check out my blog, publications, or résumé, or find me on Twitter, GitHub, Google Scholar, and LinkedIn.