dnn   828


Apollo Scape
RGB videos/images with per-pixel segmentation and 3D data
ml  dnn  data 
4 weeks ago by nrrd
[1802.10078] A Fast Deep Learning Model for Textual Relevance in Biomedical Information Retrieval
Publications in the life sciences are characterized by a large technical vocabulary, with many lexical and semantic variations for expressing the same concept. Towards addressing the problem of relevance in biomedical literature search, we introduce a deep learning model for the relevance of a document's text to a keyword style query. Limited by a relatively small amount of training data, the model uses pre-trained word embeddings. With these, the model first computes a variable-length Delta matrix between the query and document, representing a difference between the two texts, which is then passed through a deep convolution stage followed by a deep feed-forward network to compute a relevance score. This results in a fast model suitable for use in an online search engine. The model is robust and outperforms comparable state-of-the-art deep learning approaches.
IR  neural  papers  DNN 
6 weeks ago by foodbaby
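A minimal numpy sketch of the Delta-matrix idea from the abstract above. The exact form of the "difference" matrix, the layer sizes, and the pooling step are my assumptions, not details from the paper; real pre-trained embeddings would replace the random toy ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for pre-trained word embeddings (5 words, dim 4).
EMB = {w: rng.normal(size=4) for w in ["deep", "learning", "model", "fast", "query"]}

def delta_matrix(query, doc):
    """Variable-length Delta matrix: one embedding difference per
    (query word, document word) pair. One plausible reading of the
    paper's 'difference between the two texts'."""
    q = np.stack([EMB[w] for w in query])   # (|q|, d)
    d = np.stack([EMB[w] for w in doc])     # (|d|, d)
    return q[:, None, :] - d[None, :, :]    # (|q|, |d|, d)

def relevance_score(query, doc, w_conv, w_out):
    """Tiny 1x1-conv + linear head standing in for the paper's deep
    convolution and feed-forward stages."""
    delta = delta_matrix(query, doc)
    conv = np.maximum(np.tensordot(delta, w_conv, axes=([2], [0])), 0)  # ReLU
    pooled = conv.max(axis=(0, 1))          # max-pool over all word pairs
    return float(pooled @ w_out)            # scalar relevance score
```

Because the pooling collapses the variable-length Delta matrix to a fixed-size vector, the same weights score any query/document length, which is what makes such a model fast enough for an online search engine.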
RT: Intel launches the "AI: In Production" programme for Movidius devs.

MachineLearning  AI  DNN  from twitter
7 weeks ago by 9600
[1711.05408] Recurrent Neural Networks as Weighted Language Recognizers
We investigate the computational complexity of various problems for simple recurrent neural networks (RNNs) as formal models for recognizing weighted languages. We focus on the single-layer, ReLU-activation, rational-weight RNNs with softmax, which are commonly used in natural language processing applications. We show that most problems for such RNNs are undecidable, including consistency, equivalence, minimization, and finding the highest-weighted string. However, for consistent RNNs the last problem becomes decidable, although the solution can be exponentially long. If additionally the string is limited to polynomial length, the problem becomes NP-complete and APX-hard. In summary, this shows that approximations and heuristic algorithms are necessary in practical applications of such RNNs. We also consider RNNs as unweighted language recognizers and situate RNNs between Turing Machines and Random-Access Machines regarding their real-time recognition powers.
RNN  DNN  theory 
10 weeks ago by foodbaby
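A toy illustration of the model class the paper studies: a single-layer ReLU RNN whose softmax output assigns each string a weight, namely the product of per-symbol probabilities ending with an end-of-string symbol. The alphabet, dimensions, and parameter layout here are my own choices for the sketch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def string_weight(s, params, alphabet="ab", eos="$"):
    """Weight a single-layer ReLU RNN with softmax output assigns to
    string s: the probability of emitting s followed by EOS."""
    syms = list(alphabet) + [eos]
    W, U, E, V = params          # recurrence, input, embedding, output weights
    h = np.zeros(W.shape[0])
    w = 1.0
    for ch in s + eos:
        p = softmax(V @ h)                      # next-symbol distribution
        w *= p[syms.index(ch)]
        h = np.maximum(W @ h + U @ E[:, syms.index(ch)], 0)  # ReLU update
    return w
```

Since every step's softmax is a proper distribution, the weights of distinct strings are probabilities of disjoint events and sum to at most 1; the paper's point is that questions like "which string gets the highest weight" are nevertheless undecidable in general.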
[1801.07860] Scalable and accurate deep learning for electronic health records
Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality. Constructing predictive statistical models typically requires extraction of curated predictor variables from normalized EHR data, a labor-intensive process that discards the vast majority of information in each patient's record. We propose a representation of patients' entire, raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization. We validated our approach using de-identified EHR data from two U.S. academic medical centers with 216,221 adult patients hospitalized for at least 24 hours. In the sequential format we propose, this volume of EHR data unrolled into a total of 46,864,534,945 data points, including clinical notes. Deep learning models achieved high accuracy for tasks such as predicting in-hospital mortality (AUROC across sites 0.93-0.94), 30-day unplanned readmission (AUROC 0.75-0.76), prolonged length of stay (AUROC 0.85-0.86), and all of a patient's final discharge diagnoses (frequency-weighted AUROC 0.90). These models outperformed state-of-the-art traditional predictive models in all cases. We also present a case-study of a neural-network attribution system, which illustrates how clinicians can gain some transparency into the predictions. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios, complete with explanations that directly highlight evidence in the patient's chart.
DNN  health 
11 weeks ago by foodbaby
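The key representational move in the abstract above is unrolling a patient's entire raw record into one time-ordered event sequence rather than extracting curated variables. A minimal sketch of that unrolling; the field names are illustrative stand-ins, not the actual FHIR schema.

```python
from datetime import datetime

def to_sequence(fhir_resources):
    """Unroll heterogeneous FHIR-style resources (encounters, labs,
    notes, ...) into one time-ordered event sequence suitable for a
    sequence model. Field names here are illustrative only."""
    events = []
    for res in fhir_resources:
        events.append((datetime.fromisoformat(res["time"]),
                       res["resourceType"], res.get("value")))
    return sorted(events, key=lambda e: e[0])
```

Keeping every event, rather than hand-picking predictors per site, is what lets the same pipeline run at multiple centers without site-specific data harmonization.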
10 Ways DNN Can Improve Your Website SEO
Nothing to do with deep neural networks: the DNN here is the CMS (formerly DotNetNuke), which is awful. Still, the post tells you how to get the SEO basics that are obvious and easy in any other CMS working in its clunky system.
12 weeks ago by gjhead
[1707.07270] MatchZoo: A Toolkit for Deep Text Matching
In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods. In this paper, we introduce the MatchZoo toolkit that aims to facilitate the designing, comparing and sharing of deep text matching models. Specifically, the toolkit provides a unified data preparation module for different text matching problems, a flexible layer-based model construction process, and a variety of training objectives and evaluation metrics. In addition, the toolkit has implemented two schools of representative deep text matching models, namely representation-focused models and interaction-focused models. Finally, users can easily modify existing models, create and share their own models for text matching in MatchZoo.
DNN  text-classification  IR  toolkit  papers 
12 weeks ago by foodbaby
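The abstract's split between "representation-focused" and "interaction-focused" models is the core design axis. A plain-numpy sketch of the two schools, not MatchZoo's actual API (pooling by mean and cosine comparison are my simplifications):

```python
import numpy as np

def representation_score(q_emb, d_emb):
    """Representation-focused: encode each text independently (here,
    mean pooling over word embeddings), then compare the two vectors."""
    q, d = q_emb.mean(axis=0), d_emb.mean(axis=0)
    return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))

def interaction_matrix(q_emb, d_emb):
    """Interaction-focused: build a word-by-word similarity matrix
    first; a full model would feed this to convolutional layers."""
    return q_emb @ d_emb.T
```

The trade-off: representation-focused models allow pre-computing document vectors offline, while interaction-focused models keep fine-grained word-level matching signals at the cost of per-pair computation.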
rdspring1/LSH_DeepLearning: Scalable and Sustainable Deep Learning via Randomized Hashing
Current deep learning architectures are growing larger in order to learn from complex datasets. These architectures require giant matrix multiplication operations to train millions of parameters. Conversely, there is another growing trend to bring deep learning to low-power, embedded devices. The matrix operations, associated with the training and testing of deep networks, are very expensive from a computational and energy standpoint. We present a novel hashing-based technique to drastically reduce the amount of computation needed to train and test neural networks. Our approach combines two recent ideas, Adaptive Dropout and Randomized Hashing for Maximum Inner Product Search (MIPS), to select the nodes with the highest activation efficiently. Our new algorithm for deep learning reduces the overall computational cost of the forward and backward propagation steps by operating on significantly fewer nodes. As a consequence, our algorithm uses only 5% of the total multiplications, while keeping within 1% of the accuracy of the original model on average. A unique property of the proposed hashing-based back-propagation is that the updates are always sparse. Due to the sparse gradient updates, our algorithm is ideally suited for asynchronous, parallel training, leading to near-linear speedup, as the number of cores increases. We demonstrate the scalability and sustainability (energy efficiency) of our proposed algorithm via rigorous experimental evaluations on several datasets.
DNN  perf 
12 weeks ago by foodbaby
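A sketch of the hashing idea from the abstract above, using SimHash (random hyperplanes) as the MIPS-style hash: hidden units are bucketed by their weight vectors, and at forward time only units colliding with the input's hash are computed, so activations (and hence gradient updates) are sparse. A single hash table and the exact bucket policy are my simplifications.

```python
import numpy as np

class SimHashLayer:
    """Hashing-based node selection: compute only the hidden units whose
    weight vectors fall in the same SimHash bucket as the input, an
    approximation to picking the highest-activation (max inner product)
    nodes."""
    def __init__(self, W, planes):
        self.W, self.planes = W, planes          # W: (n_units, d)
        self.buckets = {}
        for i, w in enumerate(W):
            self.buckets.setdefault(self.hash(w), []).append(i)

    def hash(self, x):
        # Sign pattern against random hyperplanes; similar vectors collide.
        return tuple((self.planes @ x > 0).astype(int))

    def forward(self, x):
        active = self.buckets.get(self.hash(x), [])
        out = np.zeros(len(self.W))              # sparse activation vector
        for i in active:
            out[i] = max(self.W[i] @ x, 0)       # ReLU on selected units only
        return out, active
```

Because only the `active` units ever receive gradients, updates stay sparse, which is what makes the scheme friendly to asynchronous parallel training.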


related tags

3d  actiondetection  activity  activitydetection  adversarial  ai  al  analysis  apple  artificialintelligence  asr  attack  attention  audio  bayesian  benchmark  blogs  book  brainwave  capsenet  cloud  clustering  cnn  code  coding  community  compiler  computer  course  cs  cuda  cv  data  dataset  datasets  deep  deep_learning  deepclustering  deeplearning  demo  detection  dl  docker  dpu  dssm  enterprise  entrepreneurship  esw  eventdetection  evolution  examples  expressivity  facebook  flow  fpga  fun  github  gpu  graphics  grimmeathook  hack  hardware  health  history  ifttt  information-bottleneck  infosec  interpretation  introspection  ios  ir  jive  js  kaldi  keras  ktfsc  learning  learningrate  library  linear  linkfodder  literature  lithium  lstm  machine-learning  machine  machine_learning  machinelearning  materials  microsoft  ml  model  networks  neural-net  neural  neural_network  neural_networks  neuralnetworks  nips2017  nl  nlp  nn  numerics  optimisation  overfitting  overview  papers  pc  perf  probabilisitc  program  programming  project  python  pytorch  relevance  research  resources  review  rnn  scikit  search  security  semantic  siri  slides  social  softcore  ssn  statistics  survey  swift  tdnn  teaching  tensor  tensorflow  testing  text-classification  tflearn  theory  toolkit  tutorial  unread  visualisation  visualization  voice  work  workstation  writing 
