drmeme / deeplearning (80 bookmarks)

Hardware for Deep Learning. Part 1: Introduction
Semi-regularly updated review of the issues related to deep learning and hardware.

In particular, see Part 3 about GPUs.
hardware  servers  deeplearning  dlcpus  gpus 
6 weeks ago by drmeme
Why building your own Deep Learning Computer is 10x cheaper than AWS
First of a good series on building your own deep learning machine.
aws  deeplearning  dl  server  host  rig  diy  hardware 
6 weeks ago by drmeme
The Elephant in the Room
How to fool deep learning.

We showcase a family of common failures of state-of-the-art object detectors. These are obtained by replacing image sub-regions by another sub-image that contains a trained object. We call this "object transplanting". Modifying an image in this manner is shown to have a non-local impact on object detection. Slight changes in object position can affect its identity according to an object detector as well as that of other objects in the image. We provide some analysis and suggest possible reasons for the reported phenomena.
deeplearning  computervision  objectrecognition  failures  recognitionfailures 
11 weeks ago by drmeme
Ten Techniques Learned From fast.ai
1. Use the Fast.ai library
2. Don’t use one learning rate, use many
3. How to find the right learning rate
4. Cosine annealing
5. Stochastic Gradient Descent with restarts (a minimal sketch of items 4 and 5 follows this list)
6. Anthropomorphise your activation functions
7. Transfer learning is hugely effective in NLP
8. Deep learning can challenge ML in tackling structured data
9. A game-winning bundle: building up sizes, dropout and TTA
10. Creativity is key
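Items 4 and 5 (cosine annealing and SGD with warm restarts) combine into a single learning-rate schedule. A minimal sketch, assuming illustrative hyperparameters rather than fast.ai's defaults:

```python
import math

def sgdr_lr(step, cycle_len, lr_max=1e-2, lr_min=1e-5):
    """Cosine-annealed learning rate with warm restarts (SGDR-style schedule).

    The rate decays from lr_max to lr_min over each cycle of cycle_len steps,
    then jumps back to lr_max (a "restart"). The hyperparameter values here
    are illustrative, not fast.ai defaults.
    """
    t = (step % cycle_len) / cycle_len   # position within the current cycle, in [0, 1)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

# Print the schedule across two 100-step cycles to see the restart at step 100.
for step in (0, 50, 99, 100, 150):
    print(step, round(sgdr_lr(step, cycle_len=100), 6))
```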
dl  ml  deeplearning  fast.ai  transferlearning  learningrate 
august 2018 by drmeme
Notes from Coursera Deep Learning courses by Andrew Ng
A slideshare of awesome graphical notes from Andrew Ng's Coursera Deep Learning class.
slideshare  coursera  andrewng  deeplearning  notes 
april 2018 by drmeme
The Matrix Calculus You Need For Deep Learning
Intro and tutorial on just enough matrix calculus for deep learning.
deeplearning  matrixcalculus  tutorial 
february 2018 by drmeme
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications
This paper considers security risks buried in the data processing pipeline in common deep learning applications. Deep learning models usually assume a fixed scale for their training and input data. To allow deep learning applications to handle a wide range of input data, popular frameworks, such as Caffe, TensorFlow, and Torch, all provide data scaling functions to resize input to the dimensions used by deep learning models. Image scaling algorithms are intended to preserve the visual features of an image after scaling. However, common image scaling algorithms are not designed to handle human crafted images. Attackers can make the scaling outputs look dramatically different from the corresponding input images.
This paper presents a downscaling attack that targets the data scaling process in deep learning applications. By carefully crafting input data that mismatches with the dimension used by deep learning models, attackers can create deceiving effects. A deep learning application effectively consumes data that are not the same as those presented to users. The visual inconsistency enables practical evasion and data poisoning attacks to deep learning applications. This paper presents proof-of-concept attack samples to popular deep-learning-based image classification applications. To address the downscaling attacks, the paper also suggests multiple potential mitigation strategies.
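To make the mechanism concrete, here is a toy NumPy sketch of the core idea (not the paper's exact algorithm): a nearest-neighbour resize samples the input on a sparse, predictable grid, so an attacker who knows the target size can plant payload pixels exactly at those positions. The sampling grid below is an assumption; real resize implementations may round coordinates differently.

```python
import numpy as np

def plant_payload(cover, payload):
    """Toy illustration of a downscaling attack (assumed nearest-neighbour sampling grid).

    cover:   H x W x 3 uint8 array shown to the user at full resolution.
    payload: h x w x 3 uint8 array the attacker wants the model to see after resizing.
    Returns a copy of cover whose nearest-neighbour downscale to (h, w) shows the payload.
    """
    crafted = cover.copy()
    H, W = cover.shape[:2]
    h, w = payload.shape[:2]
    rows = np.arange(h) * H // h   # rows assumed to be sampled by the resize
    cols = np.arange(w) * W // w   # columns assumed to be sampled by the resize
    crafted[np.ix_(rows, cols)] = payload
    return crafted
```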
downscaling  computervision  imagerecognition  deeplearning  attacks 
december 2017 by drmeme
Learning Deep Learning with Keras
A tutorial on Keras with great links to other resources.
keras  deeplearning  tutorial 
december 2017 by drmeme
Understanding Hinton’s Capsule Networks. Part I: Intuition.
A good explanation of Hinton's Capsule Networks.

The internal data representation of a convolutional neural network does not take into account the important spatial hierarchies between simple and complex objects. CNNs use max pooling to address this; however, max pooling loses information. As Hinton puts it, "The pooling operation used in convolutional neural networks is a big mistake and the fact that it works so well is a disaster." One way to take the spatial relationships into account is to use an inverse-graphics approach to create a model that does not depend on the view angle. A 4D (hierarchical) pose matrix does this.

See also: the succeeding articles providing more detail.
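For reference, a minimal NumPy sketch of the "squash" non-linearity from Sabour, Frosst and Hinton's dynamic-routing capsules paper; it is what lets a capsule's output length act as a presence probability while its direction encodes pose (the routing algorithm itself is omitted):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squash non-linearity: short vectors shrink toward zero,
    long vectors approach unit length, and direction is preserved."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s
```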
machinelearning  deeplearning  hinton  capsules  capsuletheory  capnet  cnn  cnns  convnet  posematrix 
november 2017 by drmeme
pretrained.ml - Deep learning models with demos
Sortable and searchable compilation of pre-trained deep learning models. With demos and code.

Pre-trained models are deep learning model weights that you can download and use without training. Note that computation is not done in the browser.
ml  deeplearning  models 
november 2017 by drmeme
Feature Visualization
This article focusses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
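The core technique is activation maximization: start from noise and ascend the gradient of a chosen neuron's (or channel's) activation with respect to the input image. A minimal PyTorch sketch, assuming layer_activation is a user-supplied callable returning the scalar activation to maximize (the regularization tricks discussed in the article are omitted):

```python
import torch

def visualize(model, layer_activation, steps=200, lr=0.05, size=224):
    """Gradient-ascent feature visualization: optimize an input image so that a
    chosen unit fires strongly. Wiring up layer_activation is model-specific."""
    img = torch.randn(1, 3, size, size, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -layer_activation(model, img)  # negate: ascend the activation
        loss.backward()
        opt.step()
    return img.detach()
```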
machinelearning  deeplearning  ai  visualization  features 
november 2017 by drmeme
Build your own top-spec remote-access Machine Learning rig
Very detailed and helpful (links to relevant sources and experience) post on building a Deep Learning machine.
dl  deeplearning  machine  rig  diy  build 
november 2017 by drmeme
The Two Phases of Gradient Descent in Deep Learning
Good article that reviews recent papers on the theory behind SGD in deep learning. The links to other papers in this article are also very helpful.
deeplearning  ai  theory  sgd  compression  generalization  informationtheory 
september 2017 by drmeme
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
Great review article of a paper explaining a new theory of how deep learning works. It describes SGD as having two distinct phases: a drift phase and a diffusion phase. SGD begins in the drift phase, exploring the multidimensional space of solutions; once it begins to converge, it enters the diffusion phase, where the gradients become extremely chaotic and the convergence rate slows to a crawl. Also, read the original article at https://arxiv.org/abs/1703.00810 and a video of a talk at https://www.youtube.com/watch?v=bLqJHjXihK8
deeplearning  ai  theory  sgd  compression  generalization  informationtheory 
september 2017 by drmeme
A Dozen Times Artificial Intelligence Startled The World
1. A look into a machine’s imagination.
2. Making a machine imitate humans.
3. Turning horses into zebras and winters into summers.
4. Paintings drawn from sketches.
5. Images created from only text description.
6. Making computers learn through "curiosity".
7. AI designing games.
8. Predicting what happens next in a video.
9. Generating realistic yet fake faces.
10. Changing facial expressions & features in photos.
gan  artificialintelligence  ai  deeplearning  generativeadversarialnetworks  generativemodels 
september 2017 by drmeme
deeplearn.js
deeplearn.js is an open-source library that brings performant machine learning building blocks to the web, allowing you to train neural networks in a browser or run pre-trained models in inference mode.
browser  deeplearning  javascript 
august 2017 by drmeme
A Primer on Neural Network Models for Natural Language Processing
Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models started to be applied also to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks and recursive networks, as well as the computation graph abstraction for automatic gradient computation.
nlp  deeplearning  pdf  primer  tutorial 
july 2017 by drmeme
The limitations of deep learning
Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.
machinelearning  deeplearning  ml  ai  limits 
july 2017 by drmeme
The Strange Loop in Deep Learning – Intuition Machine – Medium
The major difficulty in training deep learning systems has been the lack of labeled data; labeled data is the fuel that drives a model's accuracy. However, newer kinds of systems that exploit loops are starting to solve this lack-of-supervision problem. It is like having a perpetual motion machine in which the automation dreams up new variations of labeled data, paradoxically fueling itself with more data. These systems play simulation games with themselves and, with enough game play, become experts at the task.
deeplearning  strangeloop  gan  ai 
july 2017 by drmeme
Understanding deep learning requires re-thinking generalization
Deep learning models seem to be able to reach zero training error easily, even when the labels are randomly shuffled or the training data itself is random noise!

Explicit forms of regularization, such as weight decay, dropout, and data augmentation, do not adequately explain the generalization error of neural networks: Explicit regularization may improve generalization performance, but is neither necessary nor by itself sufficient for controlling generalization error.

This situation poses a conceptual challenge to statistical learning theory as traditional measures of model complexity struggle to explain the generalization ability of large artificial neural networks. We argue that we have yet to discover a precise formal measure under which these enormous models are simple. Another insight resulting from our experiments is that optimization continues to be empirically easy even if the resulting model does not generalize. This shows that the reasons for why optimization is empirically easy must be different from the true cause of generalization.

See also, the original paper at: https://openreview.net/forum?id=Sy8gdB9xx&noteId=Sy8gdB9xx
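A toy version of the paper's random-label experiment is easy to reproduce; the sketch below (scikit-learn, with illustrative sizes) shows a small over-parameterized network memorizing labels that carry no signal at all:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)
X = rng.randn(500, 20)                     # random inputs
y = rng.randint(0, 10, size=500)           # labels assigned completely at random

net = MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=2000, random_state=0)
net.fit(X, y)
print("training accuracy:", net.score(X, y))   # typically close to 1.0: pure memorization
```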
deeplearning  generalization  overfitting  regularization  adriancolyer 
may 2017 by drmeme
Best Practices for Applying Deep Learning to Novel Applications
This report is targeted to groups who are subject matter experts in their application but deep learning novices. It contains practical advice for those interested in testing the use of deep neural networks on applications that are novel for deep learning. We suggest making your project more manageable by dividing it into phases. For each phase this report contains numerous recommendations and insights to assist novice practitioners.

Phase 1: Getting prepared.
Phase 2: Preparing your data.
Phase 3: Find an analogy between your application and the closest deep learning applications.
Phase 4: Create a simple baseline model.
Phase 5: Create visualization and debugging tool.
Phase 6: Fine tune your model.
Phase 7: End-to-end training, ensembles and other complexities.
ai  deeplearning  applications 
may 2017 by drmeme
CS20SI: Tensorflow for Deep Learning Research
Stanford class on Tensor Flow and Deep Learning.
Links and class notes.
stanford  course  tensorflow  deeplearning 
march 2017 by drmeme
Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US
The United States spends more than $1B each year on initiatives such as the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed half a decade. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may provide a cheaper and faster alternative. Here, we present a method that determines socioeconomic trends from 50 million images of street scenes, gathered in 200 American cities by Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22M automobiles in total (8% of all automobiles in the US), was used to accurately estimate income, race, education, and voting patterns, with single-precinct resolution. (The average US precinct contains approximately 1000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a 15-minute drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next Presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographic trends may effectively complement labor-intensive approaches, with the potential to detect trends with fine spatial resolution, in close to real time.
deeplearning  census  machinevision  cars 
march 2017 by drmeme
DeepMind just published a mind blowing paper: PathNet.
Reusing what was learned on one task to speed up learning of a different task.

See also, the original paper at

https://arxiv.org/pdf/1701.08734.pdf
ai  deeplearning  transferlearning  pathnet 
february 2017 by drmeme
The Alien Style of Deep Learning Generative Design
What is surprising is that these designs do not exist for the sake of style. Rather, these designs are actually the optimal solutions to multiple competing design requirements. Why do they look organic or biological? Is there some underlying fundamental principle that exists in biological systems that leads to this? Why aren’t the solutions sparse, but rather complex?

An even deeper question is: if these are the optimal designs, then why don't inanimate objects look like this? Complex inanimate objects tend to have a fractal style: the self-similar repeating patterns that we see in crystals and coastlines, despite looking complex, certainly have a style that is different from organic or biological styles. Deep Learning clearly has capabilities similar to those of biological systems. I suspect that this difference originates from the difference in the computational machinery that generates these designs. It is indeed fascinating that the style of the generated objects is a reflection of the capabilities of their creator.
deeplearning  design  generativedesign  agile  nature  complexity  biology 
february 2017 by drmeme
Oxford Deep NLP 2017 course
This is an applied course focussing on recent advances in analysing and generating speech and text using recurrent neural networks. We introduce the mathematical definitions of the relevant machine learning models and derive their associated optimisation algorithms. The course covers a range of applications of neural networks in NLP including analysing latent dimensions in text, transcribing speech to text, translating between languages, and answering questions. These topics are organised into three high level themes forming a progression from understanding the use of neural networks for sequential language modelling, to understanding their use as conditional language models for transduction tasks, and finally to approaches employing these techniques in combination with other mechanisms for advanced applications. Throughout the course the practical implementation of such models on CPU and GPU hardware is also discussed.
nlp  deeplearning  naturallanguageprocessing  ml  machinelearning  rnns  lstm  oxford  deepmind 
february 2017 by drmeme
Understanding Neural Networks Through Deep Visualization
Deep neural networks have recently been producing amazing results! But how do they do what they do? Historically, they have been thought of as "black boxes", meaning that their inner workings were mysterious and inscrutable. Recently, we and others have started shining light into these black boxes to better understand exactly what each neuron has learned and thus what computation it is performing.
deeplearning  visualization  understanding  neuralnetworks  neurons 
february 2017 by drmeme
Understanding deep learning requires rethinking generalization
The classical view of machine learning rests on the idea of parsimony. In almost any formulation, learning boils down to extracting low-complexity patterns from data. Brute-force memorization is typically not thought of as an effective form of learning. At the same time, it’s possible that sheer memorization can in part be an effective problem-solving strategy for natural tasks. Our results challenge the classical view of learning by showing that many successful neural networks easily have the effective capacity for sheer memorization. This leads us to believe that these models may very well make use of massive memorization when tackling the problems they are trained to solve. It is likely that learning in the traditional sense still occurs in part, but it appears to be deeply intertwined with massive memorization. Classical approaches are therefore poorly suited for reasoning about why these models generalize well. We believe that understanding neural networks requires rethinking generalization. We hope that our paper provides a first stepping stone by problematizing the classical view and pointing towards unresolved conundrums.
deeplearning  generalization  trainingerror  testerror  memorization 
february 2017 by drmeme
NanoNets : How to use Deep Learning when you have Limited Data
Train on one set, use the trained model on a different (but hopefully related) domain.
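The standard recipe described is to reuse a pretrained backbone as a frozen feature extractor and train only a small new head on the limited dataset. A hedged Keras sketch; the backbone choice, input size, and class count are placeholders, not the article's setup:

```python
from tensorflow import keras

# Reuse an ImageNet-trained backbone as a frozen feature extractor.
base = keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                          # freeze the pretrained weights

model = keras.Sequential([
    base,
    keras.layers.Dropout(0.2),
    keras.layers.Dense(5, activation="softmax"),  # 5 = number of target classes (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # fit on the small target dataset
```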
nanonets  deeplearning  transferlearning 
february 2017 by drmeme
Build a super fast deep learning machine for under $1,000
What it says. Good step-by-step process. Mostly parts from Amazon.
Deals with software too.
deeplearning  machine  pc 
february 2017 by drmeme
Tutorial on Deep Learning
Videos for a UCB Deep Learning tutorial.
ucb  berkeley  deeplearning  tutorial  video 
january 2017 by drmeme
Learn TensorFlow and deep learning, without a Ph.D.
A fast (3 hours) tutorial on Deep Learning with Tensor Flow. Links to videos and other material.
tensorflow  tf  deeplearning  tutorial 
january 2017 by drmeme
Deep Learning in Clojure With Cortex
A not-very-detailed article about using Cortex to run a deep learning example. With github sources.
deeplearning  ml  clojure  cortex  article  example  howto 
january 2017 by drmeme
Open Source Deep Learning Curriculum
A list of links to courses for deep learning. Arranged as a curriculum.
deeplearning  study  courses  list  curriculum 
january 2017 by drmeme
A Guide to Deep Learning
Links to resources for deep learning. Organized as a tutorial.
deeplearning  tutorial  course  links 
december 2016 by drmeme
Deep Learning Papers
List of deep learning papers. Classified by domain.
papers  deeplearning  list 
november 2016 by drmeme
Deep Learning Papers Reading Roadmap
Curated list of Deep Learning papers. Especially for learning. In order.
deeplearning  papers  list  resources 
october 2016 by drmeme
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information.
To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data. The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as teachers for a student model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings.
Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.
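The aggregation step at the heart of the approach is a noisy plurality vote over the teachers' predictions; the student only ever sees its output. A minimal NumPy sketch, with an illustrative noise scale rather than the paper's calibrated privacy parameters:

```python
import numpy as np

def noisy_aggregate(teacher_preds, num_classes, gamma=0.05, rng=None):
    """PATE-style noisy-max aggregation (noise scale is illustrative).

    teacher_preds: 1-D integer array with one predicted label per teacher, where
    each teacher was trained on a disjoint partition of the sensitive data. The
    student only receives the noisy argmax of the vote counts.
    """
    rng = rng or np.random.default_rng()
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    votes += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)  # Laplace noise
    return int(np.argmax(votes))

# Usage: label = noisy_aggregate(np.array([3, 3, 7, 3, 1]), num_classes=10)
```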
machinelearning  deeplearning  privacy  training 
october 2016 by drmeme
SREZ - Github
Image super-resolution through deep learning.
Amazing results.
deeplearning  ml  imageenhancement  resolution 
september 2016 by drmeme
End-to-End Deep Learning for Self-Driving Cars
We have empirically demonstrated that CNNs are able to learn the entire task of lane and road following without manual decomposition into road or lane marking detection, semantic abstraction, path planning, and control. A small amount of training data from less than a hundred hours of driving was sufficient to train the car to operate in diverse conditions, on highways, local and residential roads in sunny, cloudy, and rainy conditions. The CNN is able to learn meaningful road features from a very sparse training signal (steering alone).

The system learns for example to detect the outline of a road without the need of explicit labels during training.
nvidia  cnn  deeplearning  cars  driving  selfdrivingcars  torch7 
august 2016 by drmeme
Deep Learning
Online deep learning book from MIT Press.
deeplearning  book  online 
april 2016 by drmeme
Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines — Basic income — Medium
There appears to be no limit to the kinds of jobs AI can take over. Perhaps a universal basic income is part of the answer.
deeplearning  ai  artificialintelligence  jobs  society  minimumwage 
march 2016 by drmeme
GitHub - zer0n/deepframeworks: Evaluation of Deep Learning Frameworks
Evaluates some popular deep learning toolkits (Caffe, CNTK, TensorFlow, Theano, and Torch) on: 1. Modeling capability 2. Interfaces 3. Model deployment 4. Performance 5. Architecture 6. Ecosystem 7. Cross-platform support
caffe  tensorflow  cntk  theano  torch  comparison  deeplearning  framework 
february 2016 by drmeme
developer.nvidia.com
cuDNN, Nvidia's GPU-accelerated library of primitives for deep neural networks.
cudnn  deeplearning  nvidia 
february 2016 by drmeme
Torch7
Torch is a scientific computing framework with wide support for machine learning algorithms. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation. A summary of core features:
- a powerful N-dimensional array
- lots of routines for indexing, slicing, transposing, ...
- amazing interface to C, via LuaJIT
- linear algebra routines
- neural network, and energy-based models
- numeric optimization routines
- Fast and efficient GPU support
- Embeddable, with ports to iOS, Android and FPGA backends
machinelearning  opensource  lua  library  deeplearning  neuralnetwork 
february 2016 by drmeme
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout.
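A minimal NumPy sketch of the training-time transform the abstract describes: per-feature normalization over the mini-batch followed by a learned scale and shift (the running statistics used at inference time are omitted).

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for a (batch, features) activation matrix.

    Normalizes each feature to zero mean / unit variance over the mini-batch,
    then applies the learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```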
batchnormalization  google  pdf  training  deeplearning 
february 2016 by drmeme
Caffe
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is FAST.
deeplearning  machinelearning  gpu  neuralnetworks  framework  vision 
february 2016 by drmeme
DeepDetect :: A Simple & Powerful Deep Learning Server
Classify images, text and numerical data from your application or the command line through a series of simple calls to the deep learning server. Use one or more deep learning servers for development and production; test, move, and reuse models. It has never been easier to bring the full machine learning cycle into production! A simple yet powerful and generic API for machine learning that is easy to set up, test, and plug into your existing application.
deeplearning  server 
february 2016 by drmeme
Deep Learning in a Nutshell: Core Concepts | Parallel Forall
Good tutorial on deep learning. This is part 1. See also part 2.
deeplearning  tutorial 
january 2016 by drmeme
Practical deep text learning | Dato Notebook | Dato
This notebook presents practical methods for learning from natural text. Using simple combinations of deep learning, classification, and regression, I demonstrate how to predict a blogger's gender and age with high accuracy based on his or her blog posts. More specifically, I create text features using the Word2Vec deep learning model implemented in the Gensim Python package, and then perform classification and regression using the machine learning toolkits in GraphLab Create.
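A hedged sketch of that recipe, substituting scikit-learn for GraphLab Create; the corpus and labels below are placeholders, and the embedding-size argument is vector_size in Gensim 4 (size in older releases):

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Placeholder corpus: one tokenized blog post per list, with a label per post.
docs = [["i", "love", "hiking", "and", "photography"],
        ["the", "quarterly", "earnings", "call", "was", "boring"]]
labels = [0, 1]

w2v = Word2Vec(docs, vector_size=50, window=5, min_count=1, epochs=50)

def doc_vector(tokens):
    """Average the word vectors of a post to get a fixed-size text feature."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.wv.vector_size)

X = np.stack([doc_vector(d) for d in docs])
clf = LogisticRegression().fit(X, labels)   # classify gender/age/etc. from the features
```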
deeplearning  textlearning  text  python 
november 2015 by drmeme
TensorFlow
TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
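A tiny example of the dataflow-graph style this description refers to (the original TF 1.x API, reachable in current releases through tf.compat.v1):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()                       # build a graph, run it later
a = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))    # graph edge: an input tensor
w = tf.Variable(tf.ones((3, 1)))                             # graph node holding state
y = tf.matmul(a, w)                                          # graph node: an operation

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(y, feed_dict={a: [[1.0, 2.0, 3.0]]}))     # -> [[6.]]
```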
tensorflow  google  ml  machinelearning  deeplearning 
november 2015 by drmeme
26 Things I Learned in the Deep Learning Summer School - Marek Rei
Some interesting items on deep learning mentioned at Montreal summer school.
deeplearning 
october 2015 by drmeme
Why Deep Learning Works II: the Renormalization Group | Machine Learning
Not tutorial level, but interesting discussion of Deep Learning and why it works. Connections with Theoretical Physics. Also see the first post.
deeplearning  understanding  renormalization 
july 2015 by drmeme
Why does Deep Learning work? | Machine Learning
Not tutorial level, but interesting discussion of Deep Learning and why it works. Also see the next post.
deeplearning  understanding 
july 2015 by drmeme
Neural networks and deep learning
Online book on neural networks and deep learning. Great.
neuralnetworks  machinelearning  deeplearning  ebook  free 
august 2014 by drmeme
A Primer on Deep Learning | DataRobot
Good overview of Deep Learning. Some history. Good explanations. Some pointers to other references.
deeplearning  machinelearning  explanation  neuralnetworks  primer  history 
may 2014 by drmeme

