Summary:

Despite the wide range of current applications of machine learning to problems in many areas, the theoretical understanding lags behind. Among the mathematical tools used in the analysis of such devices, the techniques of Statistical Mechanics have been shown to be useful for understanding several aspects of the average properties of information-processing systems, and they provide means of predicting the dynamical behavior of these systems when learning from examples.
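As a minimal sketch of this statistical-mechanics viewpoint (not part of the course material itself), the classic teacher-student perceptron illustrates how average generalization behavior can be predicted from the load parameter alpha = P/N. Here a student trained with the simple Hebb rule is compared against a random teacher; the specific dimensions and seed below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # input dimension

# Teacher perceptron: labels are the sign of a projection onto w_star.
w_star = rng.standard_normal(N)

def generalization_error(w):
    # For spherically random inputs, eps = arccos(overlap) / pi,
    # where the overlap is the cosine between student and teacher.
    rho = w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star))
    return np.arccos(np.clip(rho, -1.0, 1.0)) / np.pi

# Hebbian student: w = sum_mu y_mu x_mu, trained on P = alpha * N examples.
eps_values = []
for alpha in [0.5, 2.0, 8.0]:
    P = int(alpha * N)
    X = rng.standard_normal((P, N))
    y = np.sign(X @ w_star)
    w = y @ X
    eps_values.append(generalization_error(w))
    print(f"alpha={alpha:4.1f}  eps={eps_values[-1]:.3f}")
```

The generalization error decreases as more examples per dimension are seen, matching the qualitative shape of the learning curves that the statistical-mechanics analysis computes analytically in the large-N limit.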

This course is divided into three parts. It begins with theoretical aspects of inference and their application to the construction and characterization of learning algorithms in Neural Networks. These methods are applied to some simple examples, and attempts at a general theory for deep neural network architectures are discussed.

The second part deals with current applications and describes state-of-the-art libraries that permit the implementation of neural networks through the use of transfer learning.
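The course does not specify the libraries here; as a library-free sketch of the transfer-learning idea (a frozen, "pretrained" feature extractor plus a small trainable head), the following toy example uses a fixed random nonlinear projection in place of a real pretrained network. All names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained feature extractor: in practice this would be
# a network loaded from a deep learning library; here it is a fixed
# random nonlinear projection that stays frozen during training.
D_in, D_feat = 20, 100
W_frozen = rng.standard_normal((D_in, D_feat)) / np.sqrt(D_in)

def features(x):
    return np.tanh(x @ W_frozen)  # frozen: never updated

# Toy downstream task: binary labels from a hidden linear rule.
w_true = rng.standard_normal(D_in)
X = rng.standard_normal((500, D_in))
y = (X @ w_true > 0).astype(float)

# Transfer learning step: train only the linear head on frozen features
# (logistic regression by plain gradient descent).
Phi = features(X)
w_head = np.zeros(D_feat)
lr = 0.5
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(Phi @ w_head)))
    w_head -= lr * Phi.T @ (p - y) / len(y)

accuracy = np.mean((Phi @ w_head > 0) == (y > 0.5))
print(f"train accuracy with frozen features: {accuracy:.2f}")
```

Only `w_head` is updated; the extractor's weights are reused as-is, which is the essential mechanism that transfer-learning APIs in the major libraries expose (loading pretrained weights, freezing layers, and fitting a new output layer).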

Some applications of interest in experimental physics will be chosen from the literature and discussed. The choices will be made taking into account the students' research interests.

The third part deals with more theoretical issues of information processing. Topics include Integrated Information Theory, the Renormalization Group in Neural Networks, and the complexity of neurobiological time series collected from cortically implanted electrodes in rats.