- Oct 12, 2020
-
Lorenzo Moneta authored
-
Lorenzo Moneta authored
Move from Init() to SetUpModel() the code where the class state is filled, either from options or from the XML file. Add a check that the Keras version is >= 2.3; if it is not and TensorFlow 2 is used, then switch automatically to using TensorFlow's Keras (tf.keras).
-
Lorenzo Moneta authored
The support is provided by adding an option in MethodPyKeras: tf.keras=1
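A minimal sketch of how the option would be passed when booking the method; everything in the option string other than tf.keras=1 (model file, epochs, batch size) is an illustrative placeholder, not taken from the commit:

```cpp
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"

void BookPyKerasExample(TMVA::Factory *factory, TMVA::DataLoader *dataloader)
{
   // Sketch: booking PyKeras with the tf.keras=1 option to force the
   // TensorFlow-internal Keras implementation.
   factory->BookMethod(dataloader, TMVA::Types::kPyKeras, "PyKeras",
                       "H:!V:FilenameModel=model.h5:NumEpochs=20:BatchSize=32:"
                       "tf.keras=1");
}
```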
-
- Aug 12, 2020
-
Konstantin Gizdov authored
-
Konstantin Gizdov authored
* Update deprecated function call name to a backward-compatible one
* Adapt convolution forward to cuDNN 8
* Adapt convolution backward to cuDNN 8
* Fix typo and re-declarations
* Implement workspace limits, fix an algorithm-preference bug and rewrite the relevant sections
* Implement the correct logic behind the cuDNN algorithm preference
* Use decltype instead of auto, fix typos
* Assign the backward filter algorithm to the correct place
* Make it compile and support C++11
* Compiles completely
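For context, a hedged sketch of the cuDNN 8 selection pattern these points revolve around: the old single-algorithm query is gone in cuDNN 8, so one queries a ranked list of algorithms and filters it by a workspace limit. The handle and descriptors are assumed to be created elsewhere; this is not the actual TMVA code.

```cpp
#include <cudnn.h>

// Pick a forward-convolution algorithm under a workspace limit (cuDNN 8 style).
cudnnConvolutionFwdAlgo_t ChooseFwdAlgo(cudnnHandle_t handle,
                                        cudnnTensorDescriptor_t xDesc,
                                        cudnnFilterDescriptor_t wDesc,
                                        cudnnConvolutionDescriptor_t convDesc,
                                        cudnnTensorDescriptor_t yDesc,
                                        size_t maxWorkspaceBytes)
{
   int returned = 0;
   cudnnConvolutionFwdAlgoPerf_t perf[CUDNN_CONVOLUTION_FWD_ALGO_COUNT];
   cudnnGetConvolutionForwardAlgorithm_v7(handle, xDesc, wDesc, convDesc, yDesc,
                                          CUDNN_CONVOLUTION_FWD_ALGO_COUNT,
                                          &returned, perf);
   // perf[] is ranked by expected speed; take the best entry whose workspace
   // fits the limit, falling back to the top-ranked algorithm.
   cudnnConvolutionFwdAlgo_t algo = perf[0].algo;
   for (int i = 0; i < returned; ++i) {
      if (perf[i].memory <= maxWorkspaceBytes) { algo = perf[i].algo; break; }
   }
   return algo;
}
```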
-
Konstantin Gizdov authored
-
Konstantin Gizdov authored
-
- Jun 23, 2020
-
Attila Krasznahorkay authored
Had to make sure that the GSL_CBLAS_LIBRARY variable is set in the same way in which FindGSL.cmake would set it, and that TMVA would explicitly wait for the completion of the GSL build.
-
- Jun 04, 2020
-
Enric Tejedor Saavedra authored
-
- May 26, 2020
-
Lorenzo Moneta authored
* Add a deprecation message for MethodDNN
* Improve handling of the parsing of the input-shape layout: if the given input shape is smaller than expected, pad it with values of 1 (sketched below). This way different input shapes can be supported easily in the RNN and DNN cases. Remove from the tutorial the usage of inputBatchLayout, which is not needed anymore, and also the use of InputShapeLayout for dense-layer networks.
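A standalone sketch of the padding rule just described; the function name, the expected rank of 3, and the choice of leading 1s are assumptions for illustration, not the actual MethodDL code:

```cpp
#include <cstddef>
#include <vector>

// Pad a user-given input shape with 1s until it reaches the expected rank,
// e.g. {10} -> {1, 1, 10}. Leading padding is an assumption here.
std::vector<std::size_t> PadInputShape(std::vector<std::size_t> shape,
                                       std::size_t expectedRank = 3)
{
   while (shape.size() < expectedRank)
      shape.insert(shape.begin(), 1);
   return shape;
}
// PadInputShape({10})   -> {1, 1, 10}  (e.g. a dense network)
// PadInputShape({8, 4}) -> {1, 8, 4}   (e.g. a recurrent network)
```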
-
Lorenzo Moneta authored
* Set correct options for the Deep Learning method in TMVARegression.C
* Remove an unneeded printout statement
-
- May 14, 2020
-
Massimiliano Galli authored
PyROOT/PyMVA related. These changes address the case in which Python 3 has only Interpreter while Python 2 has both Interpreter and Development. In this case, the Python used to build ROOT (which requires only Interpreter) will be 3, but PyROOT (which requires also Development) will be built only for 2. To achieve this, ROOT and PyROOT/PyMVA now use two different sets of variables. Documentation for the whole machinery is also added. It is also worth pointing out that the entire machinery could be much simplified by just requiring the following as prerequisites for ROOT:
- CMake >= 3.12.4
- the Python Development package
-
- May 12, 2020
-
Lorenzo Moneta authored
Fix compilation of CUDA with C++14 when normal ROOT is compiled with C++17, which has std::string_view (#5598). Fix it by modifying the preprocessor macros defined in RConfigure.h when compiling CUDA. A better fix would be to remove the TString dependency from the CUDA-compiled code: TString is used when doing I/O of the DeepNet layers to XML, and in principle this code could be moved out of CUDA.
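A hedged sketch of the kind of preprocessor adjustment meant here; the guard below is illustrative, not the actual patch (R__HAS_STD_STRING_VIEW is ROOT's feature flag for std::string_view):

```cpp
#include "RConfigure.h"

// When nvcc compiles the CUDA sources with C++14, hide the C++17-only feature
// flag that a C++17 ROOT build wrote into RConfigure.h.
#if defined(__CUDACC__) && __cplusplus < 201703L
#  ifdef R__HAS_STD_STRING_VIEW
#    undef R__HAS_STD_STRING_VIEW // no std::string_view in C++14
#  endif
#endif
```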
-
- May 11, 2020
-
Sergey Linev authored
-
- May 08, 2020
-
Lorenzo Moneta authored
* Fix multi-class classification for MethodDL by using, as default for the output layer, a width equal to the number of classes (see the layout sketch below). Update the multiclass tutorial to use MethodDL.
* Make sure the input file is generated with 2000 events. Use a different name so as not to mix it up with other tutorials.
* Update the input file for TMVAMulticlassApplication.C.
* Update the multiclass tutorials to use the input file from root.cern.ch, as suggested by a review comment of Stefan. Add also some other small improvements.
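An illustrative MethodDL booking for a four-class problem; the hidden-layer sizes and the other options are placeholders, the point being that the last dense layer's width equals the number of classes:

```cpp
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"
#include "TString.h"

void BookMulticlassDL(TMVA::Factory *factory, TMVA::DataLoader *dataloader)
{
   // Output layer width = number of classes (4 in this sketch).
   TString layout = "Layout=DENSE|100|RELU,DENSE|50|RELU,DENSE|4|LINEAR";
   factory->BookMethod(dataloader, TMVA::Types::kDL, "DL_MC",
                       "!H:!V:" + layout + ":ErrorStrategy=CROSSENTROPY");
}
```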
-
- May 04, 2020
-
Sergey Linev authored
-
Lorenzo Moneta authored
- Also clean up the test CMakeLists file
-
Lorenzo Moneta authored
Still use the TCpuMatrix class (TCpu architecture), but use TMatrix for matrix multiplication instead of BLAS (see the sketch below). Only dense-layer architectures are supported in this case.
- Fix the tutorials for the no-imt case
- Use the Higgs data file from root.cern.ch
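A small sketch of ROOT's TMatrix-based multiplication used in place of BLAS:

```cpp
#include "TMatrixD.h"

void MatMulExample()
{
   // C = A * B through TMatrix only; no BLAS call involved.
   TMatrixD A(3, 4), B(4, 2);
   A(0, 0) = 1.0; // ... fill A and B as needed ...
   TMatrixD C(A, TMatrixD::kMult, B); // 3x2 result
   C.Print();
}
```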
-
- May 01, 2020
-
Oksana Shadura authored
* Update gtest to the latest version, 1.10.0. More info: https://github.com/google/googletest/releases/tag/release-1.10.0
* Adjust the CMake variables for build byproducts so they can be used by Ninja
* Replace deprecated functions for the googletest 1.10 release
* Add the missing gtest_main target
* Fix the locations of the gtest and gtest_main targets
* Update the CMake configuration for gtest tests. Tests were failing with the following error:
  /usr/bin/ld: ../../../googletest-prefix/src/googletest-build/lib/libgtest_main.a(gtest_main.cc.o): in function testing::InitGoogleTest(int*, char**)
  /usr/bin/ld: gtest_main.cc:(.text.startup+0x2f): undefined reference to testing::UnitTest::Run()
* Patch by Bertrand Bellenot: update the build configuration for Windows
* We build googletest in Release mode (for Debug, the names of the libraries would be different and would require special treatment)
* [formatting] Add EOL in CMakeLists.txt
-
- Apr 29, 2020
-
Vassil Vassilev authored
The weak vtable forces the compiler to duplicate the vtable in every TU which includes the header, and forces the deserializer to deserialize the entries from the PCH/PCM at startup.
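For reference, the usual remedy for a weak vtable (not necessarily verbatim what this commit does) is to give the class an out-of-line virtual member, a "key function", so the vtable is emitted in exactly one translation unit:

```cpp
// Header (class name hypothetical): at least one virtual member is only
// declared here, never defined inline in the header.
class TExample {
public:
   virtual ~TExample();         // key function: declared, not defined inline
   virtual void Print() const {}
};

// One source file: defining the key function anchors the vtable in this TU
// instead of emitting a weak copy in every TU that includes the header.
TExample::~TExample() = default;
```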
-
- Apr 28, 2020
-
Bertrand Bellenot authored
One could enable Console Virtual Terminal Sequences (ANSI escape codes), which can control cursor movement, color/font mode, and other operations when written to the output stream, but that breaks the wrap-at-EOL output. This is a known issue, see https://github.com/microsoft/terminal/issues/349. With this feature, escape sequences like `\033[39m` would work in the Windows 10 command prompt as well.
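For context, a minimal sketch of turning the feature on via the Win32 console API:

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
   HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
   DWORD mode = 0;
   if (GetConsoleMode(hOut, &mode)) {
      // The switch discussed above; note the wrap-at-EOL side effect it brings.
      SetConsoleMode(hOut, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
   }
   std::printf("\033[32mgreen\033[39m back to the default color\n");
   return 0;
}
```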
-
- Apr 27, 2020
-
Lorenzo Moneta authored
-
Lorenzo Moneta authored
- TMVA_Higgs_Classification.C: a tutorial using the public Higgs UCI dataset for a classification problem with a TMVA deep neural network made of fully connected layers.
- TMVA_CNN_Classification.C: a tutorial showing the usage of a convolutional neural network in TMVA. The macro generates on the fly some toy images (size 16x16) of two different classes, and a convolutional neural network is then used for their classification. This example also builds and uses a CNN created on the fly with Keras through the ROOT PyKeras package, and it shows how to use a batch-normalization layer in TMVA (see the layout sketch below).
- TMVA_RNN_Classification.C: a tutorial showing the usage of a recurrent neural network in TMVA. Toy time-dependent data of two different classes are generated on the fly, and a recurrent neural network is then used for their classification. Both TMVA and PyKeras networks are built and used. The network uses by default one LSTM layer, but optionally it can be built with a simple RNN or a GRU layer, or with three different recurrent networks, one for each recurrent layer type.
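An illustrative layout string of the kind TMVA_CNN_Classification.C uses, with a batch-normalization layer (BNORM) after a convolutional layer; the hyperparameters are placeholders and the grammar in the comment is a reading of MethodDL's layer-string format:

```cpp
#include "TString.h"

// CONV|depth|fltHeight|fltWidth|strideRows|strideCols|padHeight|padWidth|activation
const TString kCNNLayout =
   "Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,"
   "MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR";
```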
-
- Apr 24, 2020
-
Vassil Vassilev authored
-
- Apr 22, 2020
-
Sergey Linev authored
Try to exclude ${CMAKE_BUILD_DIR}/include from the include paths as much as possible. This gives much better control over library dependencies: includes from other ROOT libraries are not "visible" during the build. Several generated files are placed first in ${CMAKE_BUILD_DIR}/ginclude and then copied to include. Dictionary generation still uses only ${CMAKE_BUILD_DIR}/include.
Co-authored-by: Axel Naumann <Axel.Naumann@cern.ch>
-
- Apr 21, 2020
-
Sergey Linev authored
-
- Apr 17, 2020
-
Lorenzo Moneta authored
Do not build the full cuDNN test for the simple RNN when cuDNN is not available.
-
- Apr 16, 2020
-
Lorenzo Moneta authored
-
Lorenzo Moneta authored
Use a single test program (RNN/testFullRNN.h) and remove the ones in LSTM and GRU.
- Fix a problem found in updating the weights using the gradient and learning rate in the cuDNN case
- Remove an empty file (TestRecurrentForwardPassCuda.cxx)
-
Lorenzo Moneta authored
- Implement the backpropagation on CPU for the GRU in case the "reset gate after" option is on. This means that the candidate gate output is computed as c(t) = tanh( W * x(t) + r(t) * ( U * h(t-1) ) + B ), instead of the vanilla GRU implementation c(t) = tanh( W * x(t) + U * ( r(t) * h(t-1) ) + B ); both variants are written out below.
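The two candidate-gate variants written out, with \odot as the element-wise product, W and U the candidate weights, r_t the reset gate, and b the bias:

```latex
% "reset gate after" variant (the cuDNN convention):
c_t = \tanh\bigl( W x_t + r_t \odot ( U h_{t-1} ) + b \bigr)
% vanilla GRU:
c_t = \tanh\bigl( W x_t + U ( r_t \odot h_{t-1} ) + b \bigr)
```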
-
Lorenzo Moneta authored
The cuDNN implementation of the GRU is not the vanilla GRU as described, for example, in https://en.wikipedia.org/wiki/Gated_recurrent_unit#Fully_gated_unit. When computing the candidate gate, the multiplication with the candidate state weights is done only with the previous state, and the multiplication with the reset state is done afterwards. See https://docs.nvidia.com/deeplearning/sdk/cudnn-api/index.html#cudnnRNNMode_t. We use this mode in cuDNN with a single input bias.
- Add a test comparing the CPU and GPU implementations for all recurrent layer types
-
- Apr 09, 2020
-
Lorenzo Moneta authored
-
Lorenzo Moneta authored
This commit fixes failures observed in incremental builds in debug mode.
-
- Apr 06, 2020
-
Lorenzo Moneta authored
-
Lorenzo Moneta authored
- Improve the LSTM and GRU layers to add the option to return as output the full time sequence (state size x time size) instead of only the last output (the default case). This is needed, for example, when chaining RNN layers.
- Have a single parsing function for recurrent layers in MethodDL to avoid code repetition.
- Fix and improve all backpropagation tests for the different configurations. A few bugs in the backpropagation test programs for all three recurrent layers have been fixed.
-
Lorenzo Moneta authored
Add the vdt fast_tanh implementation as a new activation function (FTANH), and do not use it anymore as the default implementation of tanh for the CPU architecture.
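A usage sketch of the vdt function behind the new activation:

```cpp
#include "vdt/vdtMath.h"
#include <cstdio>

int main()
{
   // vdt's fast, approximate tanh; exposed in TMVA as the FTANH activation.
   double y = vdt::fast_tanh(0.5);
   float  z = vdt::fast_tanhf(0.5f); // single-precision variant
   std::printf("%f %f\n", y, z);
   return 0;
}
```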
-
Lorenzo Moneta authored
-
Lorenzo Moneta authored
-
Lorenzo Moneta authored
-
Lorenzo Moneta authored
-