AI2019

VMP
Neural Network in Your Browser
Intermediate Python
Tutorials for learning Torch
awesome-deep-learning-papers
awesome-deep-learning2
Deep Learning, an MIT Press book
deep learning coursera
Artificial Intelligence: A Modern Approach
Docker container for NeuralTalk
How and why to use Docker
Introduction to Docker from scratch. Your first microservice

Comparison of deep learning software
Benchmarking CNTK on Keras: is it Better at Deep Learning than TensorFlow?
Get Started with TensorFlow
Setup CNTK on your machine
Using CNTK with Keras (Beta)
Installing CNTK for Python on Windows
Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit
CNTK Examples

Google AI Publication database
DEEP NEURAL NETWORKS AS GAUSSIAN PROCESSES (PDF)
Gaussian Process Behaviour in Wide Deep Neural Networks PDF
BAYESIAN DEEP CONVOLUTIONAL NETWORKS WITH MANY CHANNELS ARE GAUSSIAN PROCESSES

Deep Neural Networks as Gaussian Processes
Gaussian Process Behaviour in Wide Deep Neural Networks
A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music
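The correspondence established in the papers above can be stated compactly. A sketch of the kernel recursion from the Lee et al. paper, with φ the activation function and σ_w², σ_b² the weight and bias variances (notation assumed here):

    K^{0}(x, x') = \sigma_b^2 + \sigma_w^2 \, \frac{x^\top x'}{d_{\mathrm{in}}}, \qquad
    K^{l}(x, x') = \sigma_b^2 + \sigma_w^2 \, \mathbb{E}_{f \sim \mathcal{GP}(0, K^{l-1})}\left[ \phi(f(x)) \, \phi(f(x')) \right]

As the width of every hidden layer goes to infinity, the network's outputs converge to a Gaussian process with covariance K^L, so exact Bayesian prediction reduces to GP regression with this kernel.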

Magenta: Music and Art Generation with Machine Intelligence
Generating Music and Lyrics using Deep Learning via Long Short-Term Recurrent Networks (LSTMs). Implements a Char-RNN in Python using TensorFlow.
Using Long Short-Term Memory neural networks to generate music
Music generation from midi files based on Long Short Term Memory
Using Long Short Term Memory Keras model with additional libraries to generate Metallica style music
StructLSTM – Structure augmented Long-Short Term Memory Networks for Music Generation
In-depth Summary of Facebook AI’s Music Translation Model
HAPPIER: Hierarchical Polyphonic Music Generative RNN – OpenReview
Model for Learning Long-Term Structure in Music
Learning a Latent Space of Multitrack Measures – Machine Learning
MusicVAE: Creating a palette for musical scores with machine learning
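Several items above mention Char-RNNs for lyric and music generation. A minimal sketch of that idea, assuming Keras; the toy corpus and all hyperparameters are illustrative only:

    import numpy as np
    from keras.layers import LSTM, Dense
    from keras.models import Sequential

    # toy corpus; real projects train on lyrics or text-encoded MIDI
    text = "twinkle twinkle little star " * 200
    chars = sorted(set(text))
    char_to_ix = {c: i for i, c in enumerate(chars)}

    maxlen = 20  # context window, in characters
    X = np.zeros((len(text) - maxlen, maxlen, len(chars)), dtype=bool)
    y = np.zeros((len(text) - maxlen, len(chars)), dtype=bool)
    for i in range(len(text) - maxlen):
        for t, c in enumerate(text[i:i + maxlen]):
            X[i, t, char_to_ix[c]] = 1          # one-hot input window
        y[i, char_to_ix[text[i + maxlen]]] = 1  # next character as target

    # a single LSTM layer that predicts the next character
    model = Sequential([
        LSTM(128, input_shape=(maxlen, len(chars))),
        Dense(len(chars), activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    model.fit(X, y, batch_size=128, epochs=1)

Sampling then feeds the model its own predictions one character at a time, usually with a temperature parameter to control diversity.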
Awesome-Deep-Learning-Resources
awesome-deeplearning-resources

Grid LSTM
tensorflow-grid-lstm
lstm-neural-networks
Grid LSTM – UvA Deep Learning Course
Tensorflow Grid LSTM RNN
Examples of using GridLSTM (and GridRNN in general) in tensorflow
MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music.

Modeling Time-Frequency Patterns with LSTM vs. Convolutional Architectures for LVCSR Tasks
Torch7 implementation of Grid-LSTM as described here: http://arxiv.org/pdf/1507.01526v2.pdf
Grid Long Short-Term Memory
Neural Network Right Here in Your Browser
Recurrent Neural Network – A curated list of resources dedicated to RNN
Generation of poems with a recurrent neural network
Generating Poetry using Neural Networks
Chinese Poetry Generation with Recurrent Neural Networks
GhostWriter: Using an LSTM for Automatic Rap Lyric Generation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1919–1924, Lisbon, Portugal, 17–21 September 2015. © 2015 Association for Computational Linguistics.
Generating Sentences from a Continuous Space
PoetRNN A python framework for learning and producing verse poetry
How to teach your neural network to generate poems
A ClassicAI of the genre: ML searches for itself in poetry
github.com/facebookresearch
deep learning v: recurrent networks
A neural network composed poems in the style of Nirvana

TensorFlow on AWS: convenient capabilities for deep learning in the cloud. 85% of TensorFlow projects in cloud environments run on AWS.
Deep Learning on ROCm
GitHub Facebook Research
nevergrad A Python toolbox for performing gradient-free optimization

On word embeddings – Part 1
On word embeddings – Part 2: Approximating the Softmax
On word embeddings – Part 3: The secret ingredients of word2vec
A survey of cross-lingual word embedding models
Word embeddings in 2017: Trends and future directions
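A minimal word2vec sketch to accompany the posts above, assuming gensim 3.x (gensim 4 renamed `size` to `vector_size`); the corpus is a toy stand-in:

    from gensim.models import Word2Vec

    # tokenized toy corpus; real models train on millions of sentences
    sentences = [["king", "queen", "royal"],
                 ["man", "woman", "person"],
                 ["king", "man"],
                 ["queen", "woman"]]

    # sg=1 selects skip-gram; CBOW is the default
    model = Word2Vec(sentences, size=50, window=2, min_count=1, sg=1)
    print(model.wv.most_similar("king", topn=3))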
Keras: The Python Deep Learning library
Deep Learning for humans http://keras.io/
Keras examples directory
Стихи.ру – a Russian literary portal
github IlyaGusev
A library for analyzing and generating poems in Russian
A morphological analyzer based on neural networks and pymorphy2
An assignment for an NLP course
Poetic corpus of the Russian language http://poetry-corpus.ru/
Code inspired by Unsupervised Machine Translation Using Monolingual Corpora Only
Open Source Neural Machine Translation in PyTorch http://opennmt.net/
A library for Multilingual Unsupervised or Supervised word Embeddings
PyText A natural language modeling framework based on PyTorch https://fb.me/pytextdocs

RusVectōrēs: semantic models for the Russian language
Sberbank AI
Classic AI: a competition in poetic Artificial Intelligence
The MyStem program performs morphological analysis of Russian-language text. It can build hypothetical parses for words that are not in its dictionary.
A Python wrapper of the Yandex Mystem 3.1 morphological analyzer (http://api.yandex.ru/mystem)
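What the wrapper looks like in use, a sketch assuming the pymystem3 package is installed:

    from pymystem3 import Mystem

    m = Mystem()
    text = "Программа умеет строить гипотетические разборы"
    print(m.lemmatize(text))       # lemmas interleaved with whitespace tokens
    for token in m.analyze(text):  # per-token grammatical hypotheses
        print(token)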
gensim
NLTK Natural Language Toolkit
NLTK on Windows
rusvectores
Russian National Corpus
Flask
Morphological annotation using an extensive description of the language
A receiver operating characteristic curve, i.e., ROC curve
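A quick sketch of computing and plotting such a curve, assuming scikit-learn and matplotlib:

    import matplotlib.pyplot as plt
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import auc, roc_curve
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    scores = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

    fpr, tpr, _ = roc_curve(y_te, scores)  # rates swept over all thresholds
    plt.plot(fpr, tpr, label="AUC = %.3f" % auc(fpr, tpr))
    plt.plot([0, 1], [0, 1], linestyle="--")  # chance diagonal
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()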

Neural Style Transfer: Creating Art with Deep Learning using tf.keras and eager execution
TF Jam — Shooting Hoops with Machine Learning
Introducing TensorFlow.js: Machine Learning in Javascript
Standardizing on Keras: Guidance on High-level APIs in TensorFlow 2.0

PyText Documentation
Language modeling a billion words
Deep Learning with Torch: the 60-minute blitz
NNGraph A graph based container for creating deep learning models

Building TensorFlow on Android
Building TensorFlow on iOS
How to use TensorFlow Mobile in Android applications
TensorFlow Lite is for mobile and embedded devices
TensorFlow Lite versus TensorFlow Mobile
tensorflow/tensorflow/contrib/lite/
github tensorflow
Android TensorFlow Lite Machine Learning Example
After a TensorFlow model is trained, the TensorFlow Lite converter uses that model to generate a TensorFlow Lite FlatBuffer file (.tflite). The converter supports as input: SavedModels, frozen graphs (models generated by freeze_graph.py), and tf.keras HDF5 models. The TensorFlow Lite FlatBuffer file is deployed to a client device (generally a mobile or embedded device), and the TensorFlow Lite interpreter uses the compressed model for on-device inference.
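The conversion step just described, sketched for the TensorFlow 1.13-era Python API (paths are hypothetical; older releases expose the same class under tf.contrib.lite):

    import tensorflow as tf

    # SavedModel -> TensorFlow Lite FlatBuffer (.tflite)
    converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/saved_model")
    tflite_model = converter.convert()
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)

    # a tf.keras HDF5 model converts the same way
    converter = tf.lite.TFLiteConverter.from_keras_model_file("/tmp/model.h5")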

TF Lite Developer Guide
TensorFlow for Poets 2: TFLite iOS
TensorFlow for Poets 2: TFLite Android
TensorFlow for Poets 2: TFMobile
Tinker With a Neural Network Right Here in Your Browser

TensorFlow Lite Optimizing Converter command-line examples

Differences between L1 and L2 as Loss Function and Regularization
L1 and L2 Regularization
Regularization
L1 and L2 regularization in machine learning
L1 regularization of linear regression. Least-angle regression (the LARS algorithm)
L1 and L2 regularization for linear regression
L1 and L2 regularization for logistic regression
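In scikit-learn terms, the two penalties compared in the links above look like this (a sketch; `alpha` plays the role of the regularization strength λ):

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.RandomState(0)
    X = rng.randn(100, 10)
    y = 3.0 * X[:, 0] + 0.1 * rng.randn(100)  # only feature 0 matters

    # L2 (Ridge): loss + alpha * ||w||_2^2 -> shrinks all weights smoothly
    # L1 (Lasso): loss + alpha * ||w||_1   -> zeroes out irrelevant weights
    print(Ridge(alpha=1.0).fit(X, y).coef_.round(2))
    print(Lasso(alpha=0.1).fit(X, y).coef_.round(2))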


Natural language processing in Python
Practical deep learning with Theano and TensorFlow

TensorFlow For Poets
Get Started with TensorFlow
Train your own image classifier with Inception in TensorFlow
Google Developers

The Khronos Group
Khronos royalty-free open standards for 3D graphics, Virtual and Augmented Reality, Parallel Computing, Neural Networks, and Vision Processing
AMD Ryzen 3000 CPU specifications: the flagship Ryzen 9 3850X will offer 16 cores and a 5.1 GHz clock
NeuralTuringMachine
Oxford Deep NLP 2017 course
A Stable Neural-Turing-Machine (NTM) Implementation (Source Code and Pre-Print)
A series of models applying memory augmented neural networks to machine translation

Understanding / Generalization / Transfer

  • Distilling the knowledge in a neural network (2015), G. Hinton et al. [pdf]
  • Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al. [pdf]
  • How transferable are features in deep neural networks? (2014), J. Yosinski et al. [pdf]
  • CNN features off-the-Shelf: An astounding baseline for recognition (2014), A. Razavian et al. [pdf]
  • Learning and transferring mid-Level image representations using convolutional neural networks (2014), M. Oquab et al. [pdf]
  • Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus [pdf]
  • Decaf: A deep convolutional activation feature for generic visual recognition (2014), J. Donahue et al. [pdf]

Optimization / Training Techniques

  • Training very deep networks (2015), R. Srivastava et al. [pdf]
  • Batch normalization: Accelerating deep network training by reducing internal covariate shift (2015), S. Ioffe and C. Szegedy [pdf]
  • Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), K. He et al. [pdf]
  • Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al. [pdf]
  • Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba [pdf]
  • Improving neural networks by preventing co-adaptation of feature detectors (2012), G. Hinton et al. [pdf]
  • Random search for hyper-parameter optimization (2012) J. Bergstra and Y. Bengio [pdf]

Unsupervised / Generative Models

  • Pixel recurrent neural networks (2016), A. Oord et al. [pdf]
  • Improved techniques for training GANs (2016), T. Salimans et al. [pdf]
  • Unsupervised representation learning with deep convolutional generative adversarial networks (2015), A. Radford et al. [pdf]
  • DRAW: A recurrent neural network for image generation (2015), K. Gregor et al. [pdf]
  • Generative adversarial nets (2014), I. Goodfellow et al. [pdf]
  • Auto-encoding variational Bayes (2013), D. Kingma and M. Welling [pdf]
  • Building high-level features using large scale unsupervised learning (2013), Q. Le et al. [pdf]

Convolutional Neural Network Models

  • Rethinking the inception architecture for computer vision (2016), C. Szegedy et al. [pdf]
  • Inception-v4, inception-resnet and the impact of residual connections on learning (2016), C. Szegedy et al. [pdf]
  • Identity Mappings in Deep Residual Networks (2016), K. He et al. [pdf]
  • Deep residual learning for image recognition (2016), K. He et al. [pdf]
  • Spatial transformer network (2015), M. Jaderberg et al., [pdf]
  • Going deeper with convolutions (2015), C. Szegedy et al. [pdf]
  • Very deep convolutional networks for large-scale image recognition (2014), K. Simonyan and A. Zisserman [pdf]
  • Return of the devil in the details: delving deep into convolutional nets (2014), K. Chatfield et al. [pdf]
  • OverFeat: Integrated recognition, localization and detection using convolutional networks (2013), P. Sermanet et al. [pdf]
  • Maxout networks (2013), I. Goodfellow et al. [pdf]
  • Network in network (2013), M. Lin et al. [pdf]
  • ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al. [pdf]

Image: Segmentation / Object Detection

  • You only look once: Unified, real-time object detection (2016), J. Redmon et al. [pdf]
  • Fully convolutional networks for semantic segmentation (2015), J. Long et al. [pdf]
  • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al. [pdf]
  • Fast R-CNN (2015), R. Girshick [pdf]
  • Rich feature hierarchies for accurate object detection and semantic segmentation (2014), R. Girshick et al. [pdf]
  • Spatial pyramid pooling in deep convolutional networks for visual recognition (2014), K. He et al. [pdf]
  • Semantic image segmentation with deep convolutional nets and fully connected CRFs, L. Chen et al. [pdf]
  • Learning hierarchical features for scene labeling (2013), C. Farabet et al. [pdf]

Image / Video / Etc

  • Image Super-Resolution Using Deep Convolutional Networks (2016), C. Dong et al. [pdf]
  • A neural algorithm of artistic style (2015), L. Gatys et al. [pdf]
  • Deep visual-semantic alignments for generating image descriptions (2015), A. Karpathy and L. Fei-Fei [pdf]
  • Show, attend and tell: Neural image caption generation with visual attention (2015), K. Xu et al. [pdf]
  • Show and tell: A neural image caption generator (2015), O. Vinyals et al. [pdf]
  • Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al. [pdf]
  • VQA: Visual question answering (2015), S. Antol et al. [pdf]
  • DeepFace: Closing the gap to human-level performance in face verification (2014), Y. Taigman et al. [pdf]:
  • Large-scale video classification with convolutional neural networks (2014), A. Karpathy et al. [pdf]
  • Two-stream convolutional networks for action recognition in videos (2014), K. Simonyan et al. [pdf]
  • 3D convolutional neural networks for human action recognition (2013), S. Ji et al. [pdf]

Natural Language Processing / RNNs

  • Neural Architectures for Named Entity Recognition (2016), G. Lample et al. [pdf]
  • Exploring the limits of language modeling (2016), R. Jozefowicz et al. [pdf]
  • Teaching machines to read and comprehend (2015), K. Hermann et al. [pdf]
  • Effective approaches to attention-based neural machine translation (2015), M. Luong et al. [pdf]
  • Conditional random fields as recurrent neural networks (2015), S. Zheng and S. Jayasumana. [pdf]
  • Memory networks (2014), J. Weston et al. [pdf]
  • Neural turing machines (2014), A. Graves et al. [pdf]
  • Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. [pdf]
  • Sequence to sequence learning with neural networks (2014), I. Sutskever et al. [pdf]
  • Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al. [pdf]
  • A convolutional neural network for modeling sentences (2014), N. Kalchbrenner et al. [pdf]
  • Convolutional neural networks for sentence classification (2014), Y. Kim [pdf]
  • Glove: Global vectors for word representation (2014), J. Pennington et al. [pdf]
  • Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov [pdf]
  • Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. [pdf]
  • Efficient estimation of word representations in vector space (2013), T. Mikolov et al. [pdf]
  • Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al. [pdf]
  • Generating sequences with recurrent neural networks (2013), A. Graves. [pdf]

Speech / Other Domain

  • End-to-end attention-based large vocabulary speech recognition (2016), D. Bahdanau et al. [pdf]
  • Deep speech 2: End-to-end speech recognition in English and Mandarin (2015), D. Amodei et al. [pdf]
  • Speech recognition with deep recurrent neural networks (2013), A. Graves [pdf]
  • Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups (2012), G. Hinton et al. [pdf]
  • Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition (2012) G. Dahl et al. [pdf]
  • Acoustic modeling using deep belief networks (2012), A. Mohamed et al. [pdf]

Reinforcement Learning / Robotics

  • End-to-end training of deep visuomotor policies (2016), S. Levine et al. [pdf]
  • Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection (2016), S. Levine et al. [pdf]
  • Asynchronous methods for deep reinforcement learning (2016), V. Mnih et al. [pdf]
  • Deep Reinforcement Learning with Double Q-Learning (2016), H. Hasselt et al. [pdf]
  • Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al. [pdf]
  • Continuous control with deep reinforcement learning (2015), T. Lillicrap et al. [pdf]
  • Human-level control through deep reinforcement learning (2015), V. Mnih et al. [pdf]
  • Deep learning for detecting robotic grasps (2015), I. Lenz et al. [pdf]
  • Playing atari with deep reinforcement learning (2013), V. Mnih et al. [pdf]

More Papers from 2016

  • Layer Normalization (2016), J. Ba et al. [pdf]
  • Learning to learn by gradient descent by gradient descent (2016), M. Andrychowicz et al. [pdf]
  • Domain-adversarial training of neural networks (2016), Y. Ganin et al. [pdf]
  • WaveNet: A Generative Model for Raw Audio (2016), A. Oord et al. [pdf] [web]
  • Colorful image colorization (2016), R. Zhang et al. [pdf]
  • Generative visual manipulation on the natural image manifold (2016), J. Zhu et al. [pdf]
  • Texture networks: Feed-forward synthesis of textures and stylized images (2016), D Ulyanov et al. [pdf]
  • SSD: Single shot multibox detector (2016), W. Liu et al. [pdf]
  • SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1 MB model size (2016), F. Iandola et al. [pdf]
  • Eie: Efficient inference engine on compressed deep neural network (2016), S. Han et al. [pdf]
  • Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1 (2016), M. Courbariaux et al. [pdf]
  • Dynamic memory networks for visual and textual question answering (2016), C. Xiong et al. [pdf]
  • Stacked attention networks for image question answering (2016), Z. Yang et al. [pdf]
  • Hybrid computing using a neural network with dynamic external memory (2016), A. Graves et al. [pdf]
  • Google’s neural machine translation system: Bridging the gap between human and machine translation (2016), Y. Wu et al. [pdf]

New papers
Newly published papers (< 6 months) which are worth reading

  • MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (2017), Andrew G. Howard et al. [pdf]
  • Convolutional Sequence to Sequence Learning (2017), Jonas Gehring et al. [pdf]
  • A Knowledge-Grounded Neural Conversation Model (2017), Marjan Ghazvininejad et al. [pdf]
  • Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour (2017), Priya Goyal et al. [pdf]
  • TACOTRON: Towards end-to-end speech synthesis (2017), Y. Wang et al. [pdf]
  • Deep Photo Style Transfer (2017), F. Luan et al. [pdf]
  • Evolution Strategies as a Scalable Alternative to Reinforcement Learning (2017), T. Salimans et al. [pdf]
  • Deformable Convolutional Networks (2017), J. Dai et al. [pdf]
  • Mask R-CNN (2017), K. He et al. [pdf]
  • Learning to discover cross-domain relations with generative adversarial networks (2017), T. Kim et al. [pdf]
  • Deep voice: Real-time neural text-to-speech (2017), S. Arik et al., [pdf]
  • PixelNet: Representation of the pixels, by the pixels, and for the pixels (2017), A. Bansal et al. [pdf]
  • Batch renormalization: Towards reducing minibatch dependence in batch-normalized models (2017), S. Ioffe. [pdf]
  • Wasserstein GAN (2017), M. Arjovsky et al. [pdf]
  • Understanding deep learning requires rethinking generalization (2017), C. Zhang et al. [pdf]
  • Least squares generative adversarial networks (2016), X. Mao et al. [pdf]

Old Papers
Classic papers published before 2012

  • An analysis of single-layer networks in unsupervised feature learning (2011), A. Coates et al. [pdf]
  • Deep sparse rectifier neural networks (2011), X. Glorot et al. [pdf]
  • Natural language processing (almost) from scratch (2011), R. Collobert et al. [pdf]
  • Recurrent neural network based language model (2010), T. Mikolov et al. [pdf]
  • Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion (2010), P. Vincent et al. [pdf]
  • Learning mid-level features for recognition (2010), Y. Boureau [pdf]
  • A practical guide to training restricted boltzmann machines (2010), G. Hinton [pdf]
  • Understanding the difficulty of training deep feedforward neural networks (2010), X. Glorot and Y. Bengio [pdf]
  • Why does unsupervised pre-training help deep learning (2010), D. Erhan et al. [pdf]
  • Learning deep architectures for AI (2009), Y. Bengio. [pdf]
  • Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations (2009), H. Lee et al. [pdf]
  • Greedy layer-wise training of deep networks (2007), Y. Bengio et al. [pdf]
  • Reducing the dimensionality of data with neural networks (2006), G. Hinton and R. Salakhutdinov. [pdf]
  • A fast learning algorithm for deep belief nets (2006), G. Hinton et al. [pdf]
  • Gradient-based learning applied to document recognition (1998), Y. LeCun et al. [pdf]
  • Long short-term memory (1997), S. Hochreiter and J. Schmidhuber. [pdf]

HW / SW / Dataset

  • SQuAD: 100,000+ Questions for Machine Comprehension of Text (2016), Rajpurkar et al. [pdf]
  • OpenAI gym (2016), G. Brockman et al. [pdf]
  • TensorFlow: Large-scale machine learning on heterogeneous distributed systems (2016), M. Abadi et al. [pdf]
  • Theano: A Python framework for fast computation of mathematical expressions, R. Al-Rfou et al.
  • Torch7: A matlab-like environment for machine learning, R. Collobert et al. [pdf]
  • MatConvNet: Convolutional neural networks for matlab (2015), A. Vedaldi and K. Lenc [pdf]
  • Imagenet large scale visual recognition challenge (2015), O. Russakovsky et al. [pdf]
  • Caffe: Convolutional architecture for fast feature embedding (2014), Y. Jia et al. [pdf]

Book / Survey / Review

  • On the Origin of Deep Learning (2017), H. Wang and Bhiksha Raj. [pdf]
  • Deep Reinforcement Learning: An Overview (2017), Y. Li, [pdf]
  • Neural Machine Translation and Sequence-to-sequence Models (2017): A Tutorial, G. Neubig. [pdf]
  • Neural Network and Deep Learning (Book, Jan 2017), Michael Nielsen. [html]
  • Deep learning (Book, 2016), Goodfellow et al. [html]
  • LSTM: A search space odyssey (2016), K. Greff et al. [pdf]
  • Tutorial on Variational Autoencoders (2016), C. Doersch. [pdf]
  • Deep learning (2015), Y. LeCun, Y. Bengio and G. Hinton [pdf]
  • Deep learning in neural networks: An overview (2015), J. Schmidhuber [pdf]
  • Representation learning: A review and new perspectives (2013), Y. Bengio et al. [pdf]

Video Lectures / Tutorials / Blogs
(Lectures)

  • CS231n, Convolutional Neural Networks for Visual Recognition, Stanford University [web]
  • CS224d, Deep Learning for Natural Language Processing, Stanford University [web]
  • Oxford Deep NLP 2017, Deep Learning for Natural Language Processing, University of Oxford [web]

(Tutorials)

  • NIPS 2016 Tutorials, Long Beach [web]
  • ICML 2016 Tutorials, New York City [web]
  • ICLR 2016 Videos, San Juan [web]
  • Deep Learning Summer School 2016, Montreal [web]
  • Bay Area Deep Learning School 2016, Stanford [web]

(Blogs)

Appendix: More than Top 100
(2016)

  • A character-level decoder without explicit segmentation for neural machine translation (2016), J. Chung et al. [pdf]
  • Dermatologist-level classification of skin cancer with deep neural networks (2017), A. Esteva et al. [html]
  • Weakly supervised object localization with multi-fold multiple instance learning (2017), R. Gokberk et al. [pdf]
  • Brain tumor segmentation with deep neural networks (2017), M. Havaei et al. [pdf]
  • Professor Forcing: A New Algorithm for Training Recurrent Networks (2016), A. Lamb et al. [pdf]
  • Adversarially learned inference (2016), V. Dumoulin et al. [web][pdf]
  • Understanding convolutional neural networks (2016), J. Koushik [pdf]
  • Taking the human out of the loop: A review of bayesian optimization (2016), B. Shahriari et al. [pdf]
  • Adaptive computation time for recurrent neural networks (2016), A. Graves [pdf]
  • Densely connected convolutional networks (2016), G. Huang et al. [pdf]
  • Region-based convolutional networks for accurate object detection and segmentation (2016), R. Girshick et al.
  • Continuous deep q-learning with model-based acceleration (2016), S. Gu et al. [pdf]
  • A thorough examination of the cnn/daily mail reading comprehension task (2016), D. Chen et al. [pdf]
  • Achieving open vocabulary neural machine translation with hybrid word-character models, M. Luong and C. Manning. [pdf]
  • Very Deep Convolutional Networks for Natural Language Processing (2016), A. Conneau et al. [pdf]
  • Bag of tricks for efficient text classification (2016), A. Joulin et al. [pdf]
  • Efficient piecewise training of deep structured models for semantic segmentation (2016), G. Lin et al. [pdf]
  • Learning to compose neural networks for question answering (2016), J. Andreas et al. [pdf]
  • Perceptual losses for real-time style transfer and super-resolution (2016), J. Johnson et al. [pdf]
  • Reading text in the wild with convolutional neural networks (2016), M. Jaderberg et al. [pdf]
  • What makes for effective detection proposals? (2016), J. Hosang et al. [pdf]
  • Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks (2016), S. Bell et al. [pdf].
  • Instance-aware semantic segmentation via multi-task network cascades (2016), J. Dai et al. [pdf]
  • Conditional image generation with pixelcnn decoders (2016), A. van den Oord et al. [pdf]
  • Deep networks with stochastic depth (2016), G. Huang et al., [pdf]
  • Consistency and Fluctuations For Stochastic Gradient Langevin Dynamics (2016), Yee Whye Teh et al. [pdf]

(2015)

  • Ask your neurons: A neural-based approach to answering questions about images (2015), M. Malinowski et al. [pdf]
  • Exploring models and data for image question answering (2015), M. Ren et al. [pdf]
  • Are you talking to a machine? dataset and methods for multilingual image question (2015), H. Gao et al. [pdf]
  • Mind’s eye: A recurrent visual representation for image caption generation (2015), X. Chen and C. Zitnick. [pdf]
  • From captions to visual concepts and back (2015), H. Fang et al. [pdf].
  • Towards AI-complete question answering: A set of prerequisite toy tasks (2015), J. Weston et al. [pdf]
  • Ask me anything: Dynamic memory networks for natural language processing (2015), A. Kumar et al. [pdf]
  • Unsupervised learning of video representations using LSTMs (2015), N. Srivastava et al. [pdf]
  • Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding (2015), S. Han et al. [pdf]
  • Improved semantic representations from tree-structured long short-term memory networks (2015), K. Tai et al. [pdf]
  • Character-aware neural language models (2015), Y. Kim et al. [pdf]
  • Grammar as a foreign language (2015), O. Vinyals et al. [pdf]
  • Trust Region Policy Optimization (2015), J. Schulman et al. [pdf]
  • Beyond short snippets: Deep networks for video classification (2015) [pdf]
  • Learning Deconvolution Network for Semantic Segmentation (2015), H. Noh et al. [pdf]
  • Learning spatiotemporal features with 3d convolutional networks (2015), D. Tran et al. [pdf]
  • Understanding neural networks through deep visualization (2015), J. Yosinski et al. [pdf]
  • An Empirical Exploration of Recurrent Network Architectures (2015), R. Jozefowicz et al. [pdf]
  • Deep generative image models using a laplacian pyramid of adversarial networks (2015), E. Denton et al. [pdf]
  • Gated Feedback Recurrent Neural Networks (2015), J. Chung et al. [pdf]
  • Fast and accurate deep network learning by exponential linear units (ELUs) (2015), D. Clevert et al. [pdf]
  • Pointer networks (2015), O. Vinyals et al. [pdf]
  • Visualizing and Understanding Recurrent Networks (2015), A. Karpathy et al. [pdf]
  • Attention-based models for speech recognition (2015), J. Chorowski et al. [pdf]
  • End-to-end memory networks (2015), S. Sukhbaatar et al. [pdf]
  • Describing videos by exploiting temporal structure (2015), L. Yao et al. [pdf]
  • A neural conversational model (2015), O. Vinyals and Q. Le. [pdf]
  • Improving distributional similarity with lessons learned from word embeddings, O. Levy et al. [pdf] (https://www.transacl.org/ojs/index.php/tacl/article/download/570/124)
  • Transition-Based Dependency Parsing with Stack Long Short-Term Memory (2015), C. Dyer et al. [pdf]
  • Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs (2015), M. Ballesteros et al. [pdf]
  • Finding function in form: Compositional character models for open vocabulary word representation (2015), W. Ling et al. [pdf]

(~2014)

  • DeepPose: Human pose estimation via deep neural networks (2014), A. Toshev and C. Szegedy [pdf]
  • Learning a Deep Convolutional Network for Image Super-Resolution (2014), C. Dong et al. [pdf]
  • Recurrent models of visual attention (2014), V. Mnih et al. [pdf]
  • Empirical evaluation of gated recurrent neural networks on sequence modeling (2014), J. Chung et al. [pdf]
  • Addressing the rare word problem in neural machine translation (2014), M. Luong et al. [pdf]
  • On the properties of neural machine translation: Encoder-decoder approaches (2014), K. Cho et al.
  • Recurrent neural network regularization (2014), W. Zaremba et al. [pdf]
  • Intriguing properties of neural networks (2014), C. Szegedy et al. [pdf]
  • Towards end-to-end speech recognition with recurrent neural networks (2014), A. Graves and N. Jaitly. [pdf]
  • Scalable object detection using deep neural networks (2014), D. Erhan et al. [pdf]
  • On the importance of initialization and momentum in deep learning (2013), I. Sutskever et al. [pdf]
  • Regularization of neural networks using dropconnect (2013), L. Wan et al. [pdf]
  • Learning Hierarchical Features for Scene Labeling (2013), C. Farabet et al. [pdf]
  • Linguistic Regularities in Continuous Space Word Representations (2013), T. Mikolov et al. [pdf]
  • Large scale distributed deep networks (2012), J. Dean et al. [pdf]
  • A Fast and Accurate Dependency Parser using Neural Networks. Chen and Manning. [pdf]

Runtime 3D Asset Delivery
Sketchfab
Facebook adds support for 3D files in the glTF 2.0 format and new ways to distribute 3D content
glTF 2.0 and OpenGEX

glTF-Sample-Models
glTF Tutorial This tutorial gives an introduction to glTF, the GL transmission format. It summarizes the most important features and application cases of glTF, and describes the structure of the files that are related to glTF. It explains how glTF assets may be read, processed, and used to display 3D graphics efficiently.
glTF-CSharp-Loader

glTF – Runtime 3D Asset Delivery
glTF™ (GL Transmission Format) is a royalty-free specification for the efficient transmission and loading of 3D scenes and models by applications. glTF minimizes both the size of 3D assets, and the runtime processing needed to unpack and use those assets. glTF defines an extensible, common publishing format for 3D content tools and services that streamlines authoring workflows and enables interoperable use of content across the industry.
Runtime GLTF Loader for Unity3D
glTF-Blender-IO
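To make the format concrete, here is a skeletal glTF 2.0 asset emitted from Python (a sketch; a renderable file additionally needs meshes, accessors, and buffers):

    import json

    gltf = {
        "asset": {"version": "2.0"},  # the only mandatory property
        "scene": 0,
        "scenes": [{"nodes": [0]}],   # one scene pointing at one node
        "nodes": [{"name": "root"}],  # an empty root node
    }
    with open("minimal.gltf", "w") as f:
        json.dump(gltf, f, indent=2)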

COLLADA to glTF converter


SYCL C++ Single-source Heterogeneous Programming for OpenCL
Khronos SYCL Registry
SyclParallelSTL Open Source Parallel STL implementation
SYCL
SYCL WiKi
TensorFlow™ AMD Setup Guide
TensorFlow 1.x On Ubuntu 16.04 LTS
AMDGPU-PRO Driver for Linux


Codeplay Announces World’s First Fully-Conformant SYCL 1.2.1 Solution (August 23, 2018)


SPIR The Industry Open Standard Intermediate Language for Parallel Compute and Graphics
SPIRV-Cross is a practical tool and library for performing reflection on SPIR-V and disassembling SPIR-V back to high level languages
A short OpenGL / SPIRV example
Khronos Vulkan
Vulkan NVIDIA

Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
An implementation of BLAS using the SYCL open standard for acceleration on OpenCL devices
Tensorflow
ComputeCpp is Conformant with SYCL 1.2.1!
triSYCL
Xilinx FPGAs: The Chip Behind Alibaba
Optimizing the Convolution Operation to Accelerate Deep Neural Networks on FPGA

Neural Network Exchange Format
Neural Network Exchange Format (NNEF)
The NNEF Tools repository contains tools to generate and consume NNEF documents

The LLVM Compiler Infrastructure
Getting Started with the LLVM System
LLVM Download Page
Clang: a C language family frontend for LLVM

LLVM 8.0.0 Release Notes
Clang 7 documentation

Clang (pronounced "clang") is a front end for the C, C++, Objective-C, Objective-C++, and OpenCL C programming languages, used together with the LLVM framework. Clang translates source code into LLVM bitcode, and the framework then performs optimization and code generation. The project's goal is to provide a replacement for the GNU Compiler Collection (GCC). Development is open source, conducted within the LLVM project, with contributors from several corporations, including Google and Apple. The source code is available under a BSD-like license.
Clang. Part 1: an introduction
How to tame a dragon: a short example in clang-c
clang and IDEs
Clang API: the beginning
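A taste of the clang-c interface discussed above, via the official Python bindings (a sketch; assumes the `clang` package and a libclang shared library are installed):

    from clang import cindex

    # parse a tiny translation unit supplied entirely in memory
    index = cindex.Index.create()
    tu = index.parse("demo.c", unsaved_files=[
        ("demo.c", "int add(int a, int b) { return a + b; }"),
    ])

    # walk the AST Clang built before LLVM ever sees the code
    for node in tu.cursor.walk_preorder():
        if node.kind == cindex.CursorKind.FUNCTION_DECL:
            print(node.spelling, "->", node.result_type.spelling)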

Numba makes Python code fast. Numba is an open source JIT compiler that translates a subset of Python and NumPy code into fast machine code.
Numba for AMD ROC GPUs
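The JIT idea in one function, a sketch assuming the numba package:

    import numpy as np
    from numba import njit

    @njit  # compiled to machine code on first call
    def pairwise_dist(X):
        n = X.shape[0]
        out = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                out[i, j] = np.sqrt(np.sum((X[i] - X[j]) ** 2))
        return out

    print(pairwise_dist(np.random.rand(100, 3)).shape)  # the loops run at native speed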

Studying consciousness in cognitive psychology – Ivan Ivanchei


AMD Ryzen 5 2600 test

AMD Ryzen 7 1700X test results
CPU rankings
Tensorflow Inception v3 benchmark
Low performance?

Hardware to Play ROCm
ROCm Software Platform
Deep Learning on ROCm ROCm Tensorflow Release
HIP : Convert CUDA to Portable C++ Code
RadeonOpenCompute/ROCm ROCm – Open Source Platform for HPC and Ultrascale GPU Computing https://rocm.github.io/
ROCmSoftwarePlatform pytorch
ROCmSoftwarePlatform/MIOpen
Welcome to MIOpen, Advanced Micro Devices, Inc.'s open source deep learning library.
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD’s deep learning plans
ROCm-Developer-Tools/HIP HIP : Convert CUDA to Portable C++ Code
hipCaffe Quickstart Guide Install ROCm
Half-precision floating point library
convnet-benchmarks
computeruniverse.ru VEGA
Deep Learning on ROCm: announcing our new foundation for deep learning acceleration, MIOpen 1.0, which introduces support for Convolutional Neural Network (CNN) acceleration, built to run on top of the ROCm software stack!
MXNet is a deep learning framework that has been ported to HIP. It works on both HIP/ROCm and HIP/CUDA platforms, and makes use of the rocBLAS, rocRAND, hcFFT, and MIOpen APIs.
ROCmSoftwarePlatform/hipCaffe
ROCm Software Platform
"Radeon Open Compute (ROCm) is a new era for GPU computing platforms, designed to harness the power of open source software to deliver new solutions for high-performance and hyperscale computing. ROCm software gives developers complete flexibility in where and how they use GPU computing."
MIOpen
A Comparison of Deep Learning Frameworks
UserBenchmark
UserBenchmark AMD-Ryzen-TR-2990WX-vs-Intel-Core-i9-7960X
UserBenchmark AMD-Ryzen-7-2700X-vs-Intel-Core-i7-8700K
UserBenchmark AMD-Ryzen-7-2700X-vs-AMD-Ryzen-7-1800X
UserBenchmark AMD-Ryzen-7-2700X-vs-Intel-Core-i7-4790K
UserBenchmark Intel-Core-i7-4790K-vs-Intel-Core-i5-4690
UserBenchmark Intel-Core-i5-4670K-vs-Intel-Core-i7-4790K
UserBenchmark AMD-Ryzen-7-2700X-vs-Intel-Core-i7-3770K
UserBenchmark Intel-Core-i5-3570-vs-AMD-Ryzen-7-2700X
UserBenchmark Intel-Core-i5-3570-vs-Intel-Core-i7-3770K
UserBenchmark Intel-Core-i5-3570-vs-Intel-Core-i5-4690
UserBenchmark Intel-Core-i5-3570-vs-Intel-Core-i5-4670K
UserBenchmark Intel-Core-i5-3570-vs-Intel-Core-i7-4790K

A comparison of the Google TPUv2 and the Nvidia V100 on ResNet-50
AI accelerator
eSilicon deep learning ASIC in production qualification
A benchmark of Google's new tensor processor for deep learning
Google's specialized machine-learning ASIC is tens of times faster than a GPU
MIT has developed a photonic chip for deep learning
Machine Learning Series
Visual Computing Group
source{d} tech talks – Machine Learning 2017
10 Alarming Predictions for Deep Learning in 2018
Experiments with malloc and neural networks
LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks
HiPiler: Visual Exploration Of Large Genome Interaction Matrices With Interactive Small Multiples
Deep Learning Hardware Limbo
OpenAI
Inside OpenAI
Math Deep learning
Facebook and Microsoft introduce new open ecosystem for interchangeable AI frameworks

Inside AI: next-level computing powered by Intel AI (Intel® Nervana™ Neural Network Processor)

Intel® Nervana™ Neural Network Processor: Architecture Update Dec 06, 2017
AI News January 2018
Andrej Karpathy

MIT 6.S094: Deep Reinforcement Learning for Motion Planning

RI Seminar: Sergey Levine : Deep Robotic Learning

Tim Lillicrap – Data efficient deep reinforcement learning for continuous control
Tensors and Dynamic neural networks in Python with strong GPU acceleration http://pytorch.org
Tutorial for beginners https://github.com/GunhoChoi/Kind-PyTorch-Tutorial
A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
PyTorch documentation
Transfering a model from PyTorch to Caffe2 and Mobile using ONNX
ONNX is a new open ecosystem for interchangeable AI models.
Open Neural Network Exchange https://onnx.ai/
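The PyTorch-to-ONNX hand-off mentioned above, sketched with a hypothetical toy model:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    dummy_input = torch.randn(1, 10)  # export traces the graph with a sample input

    torch.onnx.export(model, dummy_input, "model.onnx")
    # model.onnx can now be loaded by Caffe2 or any other ONNX backend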
Intermediate Python Docs
K-Means Clustering in Python
In Depth: k-Means Clustering
K-means Clustering in Python
Clustering With K-Means in Python
Unsupervised Machine Learning: Flat Clustering. A K-Means clustering example with Python and scikit-learn
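The scikit-learn route that most of the posts above walk through, as a sketch:

    import numpy as np
    from sklearn.cluster import KMeans

    # three well-separated blobs in 2-D
    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5])])

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(km.cluster_centers_.round(1))  # one centroid per blob
    print(km.labels_[:10])               # cluster index assigned to each sample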
ST at CES 2018 – Deep Learning on STM32


Configuring Marlin 1.1

The end of the Nvidia era? Graphcore has developed chips based on computational graphs
How to choose a GPU for deep learning
Why are TPUs so well suited for deep learning?

A computer configuration for machine learning: budget and optimal builds
Ten machine learning algorithms you need to know
Happy New Machine-Learning Year!
The best graphics cards and hardware for Ethereum mining