A singular Riemannian geometry approach to Deep Neural Networks.
Introduction

What is this library?

This library is a basic neural network library written in C++ implementing the one-dimensional SiMEC and SiMExp algorithms from the paper "A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes".

In this version there are no training methods, since there already are other libraries - such as TensorFlow, Caffe, PyTorch or OpenNN - implementing efficient training methods. At the current development stage, including training methods would only have meant reinventing the wheel, since the main aim of this version of the library is to test the SiMEC_1D and SiMExp_1D algorithms. However, this library does not depend on a particular framework for the training: only the weights and the biases of a trained network are needed, along with the structure of the neural network. For more information about the structure of the file containing them, see On the structure of the file containing the weights and the biases. Given this information, the neural network is then built from scratch, for reasons related to the SiMEC_1D and SiMExp_1D algorithms. We also avoid automatic differentiation, which would slow down the two algorithms considerably, and prefer hard-coded derivatives, which allow faster computations.

In our examples we employed Keras with TensorFlow as the backend (and we provide the code to extract the weights and the biases from the trained network), but one is free to use whichever framework one is most comfortable with.
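
For instance, the extraction step may look like the following sketch. This is not the script shipped with the repository: it assumes a Keras Sequential model made only of Dense layers, a hypothetical model file name and a simple one-CSV-file-per-layer layout; the actual format expected by the library is described in On the structure of the file containing the weights and the biases.

    import numpy as np
    from tensorflow import keras

    # Load a trained model (hypothetical file name).
    model = keras.models.load_model("trained_model.h5")

    # For a Sequential model of Dense layers, get_weights() returns the
    # kernel matrix and the bias vector of each layer.
    for i, layer in enumerate(model.layers):
        kernel, bias = layer.get_weights()
        np.savetxt(f"weights_layer_{i}.csv", kernel, delimiter=",")
        np.savetxt(f"biases_layer_{i}.csv", bias, delimiter=",")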

We use the Eigen library to perform the matrix operations. Eigen is provided along with the neural network as a header-only library. We store the sparse matrix containing the derivatives of a layer map with respect to the weights and biases in a special reduced form of the Jacobian matrix; see On the reduced form of the Jacobian matrices for the details. We employ the classes VectorXd and MatrixXd, storing the entries as doubles, because the precision of the float type is not sufficient to run the SiMEC_1D and SiMExp_1D algorithms accurately.
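
As a generic illustration of the precision issue (plain NumPy, independent of the library), a step well below single-precision machine epsilon is simply lost in float arithmetic, while double precision retains it; algorithms that accumulate many small increments therefore behave much better in double precision.

    import numpy as np

    # Machine epsilon: the smallest relative step that still changes a value of order one.
    print(np.finfo(np.float32).eps)   # about 1.2e-07
    print(np.finfo(np.float64).eps)   # about 2.2e-16

    # An increment of size 1e-8 is silently lost in single precision,
    # but kept in double precision.
    print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))   # True
    print(np.float64(1.0) + np.float64(1e-8) == np.float64(1.0))   # False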

Brief description of the repository

In the repository, which you can find at http://github.com/alessiomarta/simec-1d-test-code, there are:

How to compile the test code

To compile the test code we provide a Makefile. Open a terminal in the folder containing "main.cpp" and type "make". Before compiling, modify the Makefile to use the instruction set extensions supported by your CPU. The library does not require an installation of Eigen, which is provided header-only along with the neural network. However, it requires OpenMP to be installed.

Note: In the provided Makefile we compile using C++17. You can use earlier C++ standards, but note that Eigen up to version 3.4 requires only C++03, while versions after 3.4 will require C++14.

How to use the library in Python

We use pybind11 to generate Python wrappers for the library. The source file containing the wrappers for neural_network.h is py_net.cpp.

Eigen3 and pybind11 must be installed on the machine.

To build the Python wrapper (using a Linux-based OS) open a terminal and type:

g++ -O3 -march=native -Ofast -lpthread -DNDEBUG -D_GLIBCXX_PARALLEL -ffast-math -Wall -shared -std=c++17 -fno-builtin -mmmx -msse -msse2 -msse3 -msse4 -msse4.1 -msse4.2 -mavx -mavx2 -mfma -frename-registers -m64 -ftree-vectorize -funroll-loops -fopenmp -fopenmp-simd -fPIC $(python3 -m pybind11 --includes) layer.cpp fc_layer_sg.cpp fc_layer_sm.cpp fc_layer_sp.cpp fc_layer_sl.cpp neural_network.cpp activation_functions.cpp read_csv.cpp py_net.cpp -o py_net$(python3-config --extension-suffix)

  • We compile using C++17. You can use earlier C++ standards, but note that Eigen up to version 3.4 requires only C++03, while versions after 3.4 will require C++14.
  • It may be necessary to add the include paths of pybind11 and Python if they are not among your compiler's default include paths.
  • Before building the Python wrapper, check the instruction set extensions supported by your CPU and adjust the corresponding flags.

To use the library in Python, import the module py_net.
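
As a quick sanity check (a minimal sketch; the classes and functions actually exposed are those defined in py_net.cpp), one can import the module and list its contents:

    # Run from the directory containing the compiled py_net shared object.
    import py_net

    # Print the classes and functions exported by the pybind11 wrapper.
    print(dir(py_net))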

FAQ

Why don't you employ the Tensor class from Eigen?

In the current implementation we do not use the Tensor class from Eigen simply because it is not strictly needed. In addition, avoiding tensors at this stage simplifies the testing of the SiMEC_1D and SiMExp_1D algorithms and allows us to follow the notation employed in the paper "A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes".