Give me some Feedback

Feedback objects are used to provide I/O feedback to other objects or users. This includes performance measures and statistics that can be used for early stopping (cross-validation) or to guide training. Currently, the following Feedback classes are supported:

  • Feedback : abstract class;
    • Confusion : adapts the optim ConfusionMatrix (used for classification);
    • Perplexity : measures perplexity (used for language models);
    • TopCrop : measures the top-n classification accuracy for m-crops taken from the same image;
    • FacialKeypointFeedback : measures the MSE for facial keypoint detection;
    • CompositeFeedback : a composite of Feedback components.


Abstract class. Strategy (design pattern) for processing predictions and targets. Unlike Observers, Feedback objects generate reports. Like Observers, they may also publish/subscribe to Mediator channels. The Facial Keypoints Tutorial includes examples for implementing your own concrete Feedback classes.


Feedback constructor. Only accepts key-value arguments:

  • name is a string used to identify the reports generated by this object. It is usually hard-coded in the sub-class.

setup{mediator, propagator, dataset}

Post-initialization method used for mediation and other setup:

  • mediator is a Mediator used for inter-object communication.
  • propagator is the Propagator which this object extends.
  • dataset is a DataSet. It can be useful for determining the type of targets. The Feedback should not keep a reference to the dataset, since the Feedback object may be serialized.

add(batch, output, report)

The main method of the object. It is called for every batch propagated through the model.

  • batch is the current Batch being propagated by the subject Propagator.
  • output is the forward propagated output Tensor of the model.
  • report is a table of statistics and meta-data returned by the Experiment:report method during the last epoch.
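
The override points below can be sketched as a minimal concrete Feedback. This is a hedged illustration only: the class name MeanOutput and its internals are hypothetical, and the exact override methods (_add, _reset, report) are assumptions about the abstract class's contract; see the Facial Keypoints Tutorial for a complete, canonical implementation.

```lua
-- Hypothetical Feedback sub-class that tracks the mean model output.
-- The override points (_add, _reset, report) are assumptions about
-- the dp.Feedback contract; consult the Facial Keypoints Tutorial
-- for the canonical pattern.
local MeanOutput, parent = torch.class("dp.MeanOutput", "dp.Feedback")

function MeanOutput:__init(config)
   config = config or {}
   config.name = config.name or 'meanoutput'
   parent.__init(self, config)
   self._sum, self._count = 0, 0
end

-- called for every propagated batch
function MeanOutput:_add(batch, output, report)
   self._sum = self._sum + output:sum()
   self._count = self._count + output:nElement()
end

-- contributes this object's statistics to the epoch report
function MeanOutput:report()
   local mean = self._count > 0 and self._sum / self._count or 0
   return {[self:name()] = {mean = mean}}
end

-- clears accumulated state between epochs
function MeanOutput:_reset()
   self._sum, self._count = 0, 0
end
```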


Confusion is a wrapper for the ConfusionMatrix found in the optim package.
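
In the usual dp workflow, a Feedback is passed to a Propagator via its feedback argument; a sketch (the surrounding Evaluator configuration — sampler and loss — is illustrative, not prescriptive):

```lua
-- Illustrative sketch: attaching a Confusion feedback to an Evaluator.
-- The sampler and loss settings are placeholders for your own setup.
local valid = dp.Evaluator{
   feedback = dp.Confusion(),        -- wraps optim.ConfusionMatrix
   sampler = dp.Sampler{batch_size = 128},
   loss = nn.ClassNLLCriterion()
}
```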


Computes perplexity for language models. Works for both neural network language models and recurrent language models.


This Feedback measures the top-n classification accuracy for m-crops taken from the same image (which therefore have the same target class). In particular, this Feedback is used for evaluating the accuracy of the ImageNet DataSource (see alexnet.lua for an example use-case).


The constructor takes the following key-value arguments. The Feedback constructor arguments also apply.

  • n_top is a number (or table thereof). The accuracy is measured by looking for the target class in the n_top predictions with highest log-likelihood. Defaults to {1,5}, i.e. top-1 and top-5.
  • n_crop specifies the number of crops taken from each sample. The assumption is that each image is propagated n_crop times so that the resulting predictions can be averaged. Defaults to 10.
  • center specifies the number of initial crops to be treated as center patches. Their performance is also reported separately, which means you should put the center crops first. Defaults to 2.
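
Putting the arguments above together, a construction sketch (the values shown simply restate the defaults, matching the common 10-crop AlexNet evaluation scheme):

```lua
-- Top-1 and top-5 accuracy over 10 crops per image, with the
-- first 2 crops also reported separately as center crops.
local feedback = dp.TopCrop{
   n_top = {1, 5},
   n_crop = 10,
   center = 2
}
```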


Measures the MSE with respect to targets and optionally compares it to a constant (mean) value baseline.


Composite of Feedback components.
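
A construction sketch (the feedbacks key-value argument name is an assumption inferred from the class's role as a composite; verify against the dp source):

```lua
-- Hypothetical composition of two Feedbacks into one; the
-- 'feedbacks' argument name is an assumption, not confirmed API.
local composite = dp.CompositeFeedback{
   feedbacks = {dp.Confusion(), dp.TopCrop{n_top = {1, 5}}}
}
```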