amy + google   409

Forbes: 12 Amazing Deep Learning Breakthroughs of 2017
1. DeepMind’s AlphaZero Clobbered The Top AI Champions In Go, Shogi, And Chess
2. OpenAI’s Universe Gained Traction With High-Profile Partners
3. Sonnet & Tensorflow Eager Joined Their Fellow Open-Source Frameworks
4. Facebook & Microsoft Joined Forces To Enable AI Framework Interoperability
5. Unity Enabled Developers To Easily Build Intelligent Agents In Games
6. Machine Learning As A Service (MLaaS) Platforms Sprouted Up Everywhere
7. The GAN Zoo Continued To Grow
8. Who Needs Recurrence Or Convolution When You Have Attention? (Transformer)
9. AutoML Simplified The Lives Of Data Scientists & Machine Learning Engineers
10. Hinton Declared Backprop Dead, Finally Dropped His Capsule Networks
11. Quantum & Optical Computing Entered The AI Hardware Wars
12. Ethics & Fairness Of ML Systems Took Center Stage
machine_learning  TensorFlow  google  gcp 
5 weeks ago by amy
models/research/slim/nets/nasnet at master · tensorflow/models
This directory contains the code for the NASNet-A model from the paper Learning Transferable Architectures for Scalable Image Recognition by Zoph et al. Three different configurations of NASNet-A are implemented: one is the NASNet-A built for CIFAR-10, and the other two are variants of NASNet-A trained on ImageNet, which are listed below.
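As a concrete usage sketch (the builder and arg-scope names are taken from the nasnet module in this directory; treat the exact signatures as assumptions), the mobile ImageNet variant can be instantiated roughly like this:

```python
import tensorflow as tf
from nets.nasnet import nasnet  # tensorflow/models, research/slim on PYTHONPATH

slim = tf.contrib.slim
images = tf.placeholder(tf.float32, [None, 224, 224, 3])

# Build the mobile ImageNet variant of NASNet-A
# (1001 classes = 1000 ImageNet classes + background).
with slim.arg_scope(nasnet.nasnet_mobile_arg_scope()):
    logits, end_points = nasnet.build_nasnet_mobile(
        images, num_classes=1001, is_training=False)
```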
TensorFlow  machine_learning  google 
9 weeks ago by amy
tensorflow/tensorflow/contrib/gan at master · tensorflow/tensorflow
TFGAN is a lightweight library for training and evaluating Generative Adversarial Networks (GANs). This technique allows you to train a network (called the 'generator') to sample from a distribution, without having to explicitly model the distribution and without writing an explicit loss. For example, the generator could learn to draw samples from the distribution of natural images. For more details on this technique, see 'Generative Adversarial Networks' by Goodfellow et al. See tensorflow/models for examples, and this tutorial for an introduction.
machine_learning  TensorFlow  google  GANs 
9 weeks ago by amy
Research Blog: TFGAN: A Lightweight Library for Generative Adversarial Networks
Training a neural network usually involves defining a loss function, which tells the network how close or far it is from its objective. For example, image classification networks are often given a loss function that penalizes them for giving wrong classifications; a network that mislabels a dog picture as a cat will get a high loss. However, not all problems have easily defined loss functions, especially if they involve human perception, such as image compression or text-to-speech systems. Generative Adversarial Networks (GANs), a machine learning technique that has led to improvements in a wide range of applications including generating images from text, superresolution, and helping robots learn to grasp, offer a solution. However, GANs introduce new theoretical and software engineering challenges, and it can be difficult to keep up with the rapid pace of GAN research.

A video of a generator improving over time. It begins by producing random noise, and eventually learns to generate MNIST digits.
In order to make GANs easier to experiment with, we’ve open sourced TFGAN, a lightweight library designed to make it easy to train and evaluate GANs. It provides the infrastructure to easily train a GAN, provides well-tested loss and evaluation metrics, and gives easy-to-use examples that highlight the expressiveness and flexibility of TFGAN. We’ve also released a tutorial that includes a high-level API to quickly get a model trained on your data.
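A minimal training sketch, assuming the tf.contrib.gan API names of this release (gan_model / gan_loss / gan_train_ops); the tiny fully connected networks and the random stand-in for real data are placeholders, not part of the library:

```python
import tensorflow as tf

tfgan = tf.contrib.gan

def generator_fn(noise):
    # Toy fully connected generator; a real model would be a DCGAN-style convnet.
    net = tf.layers.dense(noise, 128, activation=tf.nn.relu)
    return tf.layers.dense(net, 784, activation=tf.tanh)

def discriminator_fn(data, unused_conditioning):
    net = tf.layers.dense(data, 128, activation=tf.nn.relu)
    return tf.layers.dense(net, 1)  # unnormalized real/fake score

noise = tf.random_normal([32, 64])
real_images = tf.random_uniform([32, 784], -1., 1.)  # stand-in for a real input pipeline

gan_model = tfgan.gan_model(generator_fn, discriminator_fn,
                            real_data=real_images, generator_inputs=noise)
gan_loss = tfgan.gan_loss(gan_model)  # library-provided, well-tested GAN losses
train_ops = tfgan.gan_train_ops(
    gan_model, gan_loss,
    generator_optimizer=tf.train.AdamOptimizer(1e-4, 0.5),
    discriminator_optimizer=tf.train.AdamOptimizer(1e-4, 0.5))
```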
machine_learning  google  GANs  TensorFlow 
9 weeks ago by amy
[1711.10337] Are GANs Created Equal? A Large-Scale Study
Generative adversarial networks (GANs) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithm(s) perform better than others. We conduct a neutral, multi-faceted large-scale empirical study on state-of-the-art models and evaluation measures. We find that most models can reach similar scores with enough hyperparameter optimization and random restarts. This suggests that improvements can arise from a higher computational budget and tuning more than from fundamental algorithmic changes. To overcome some limitations of the current metrics, we also propose several data sets on which precision and recall can be computed. Our experimental results suggest that future GAN research should be based on more systematic and objective evaluation procedures. Finally, we did not find evidence that any of the tested algorithms consistently outperforms the original one.
machine_learning  GANs  google 
9 weeks ago by amy
Understanding deep learning requires rethinking generalization | OpenReview
Abstract: Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training.

Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth-two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points, as it usually does in practice.
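The core randomization test is easy to reproduce in miniature; a hedged tf.keras sketch (the small convnet and epoch count are illustrative, and with enough capacity training accuracy climbs far above chance even though the labels carry no signal):

```python
import numpy as np
import tensorflow as tf

# Fit randomly permuted CIFAR-10 labels. Test accuracy necessarily stays at
# chance, yet training accuracy keeps climbing: the net memorizes the data.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
y_random = np.random.permutation(y_train.flatten())  # destroy the label structure

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_random, epochs=50)
```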
machine_learning  google 
9 weeks ago by amy
[1712.01769] State-of-the-art Speech Recognition With Sequence-to-Sequence Models
Attention-based encoder-decoder architectures such as Listen, Attend, and Spell (LAS) subsume the acoustic, pronunciation and language model components of a traditional automatic speech recognition (ASR) system into a single neural network. In our previous work, we have shown that such architectures are comparable to state-of-the-art ASR systems on dictation tasks, but it was not clear if such architectures would be practical for more challenging tasks such as voice search. In this work, we explore a variety of structural and optimization improvements to our LAS model which significantly improve performance. On the structural side, we show that word piece models can be used instead of graphemes. We introduce a multi-head attention architecture, which offers improvements over the commonly-used single-head attention. On the optimization side, we explore techniques such as synchronous training, scheduled sampling, label smoothing, and minimum word error rate optimization, which are all shown to improve accuracy. We present results with a unidirectional LSTM encoder for streaming recognition. On a 12,500-hour voice search task, we find that the proposed changes improve the WER of the LAS system from 9.2% to 5.6%, while the best conventional system achieves 6.7% WER. We also test both models on a dictation dataset, where our model provides 4.1% WER while the conventional system provides 5.0% WER.
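Of the optimization tricks listed, label smoothing is the easiest to show concretely; a small NumPy sketch of the standard formulation (the epsilon value is illustrative):

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Spread epsilon of the probability mass uniformly over all classes."""
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes

targets = np.eye(4)[[2]]           # one-hot target for class 2 of 4
print(smooth_labels(targets))      # [[0.025 0.025 0.925 0.025]]
```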
machine_learning  google  TensorFlow  seq2seq 
9 weeks ago by amy
Research Blog: Improving End-to-End Models For Speech Recognition
Traditional automatic speech recognition (ASR) systems, used for a variety of voice search applications at Google, consist of an acoustic model (AM), a pronunciation model (PM) and a language model (LM), all of which are independently trained, and often manually designed, on different datasets [1]. AMs take acoustic features and predict a set of subword units, typically context-dependent or context-independent phonemes. Next, a hand-designed lexicon (the PM) maps a sequence of phonemes produced by the acoustic model to words. Finally, the LM assigns probabilities to word sequences. Training independent components creates added complexities and is suboptimal compared to training all components jointly. Over the last several years, developing end-to-end systems, which attempt to learn these separate components jointly as a single system, has grown increasingly popular. While these end-to-end models have shown promising results in the literature [2, 3], it is not yet clear if such approaches can improve on current state-of-the-art conventional systems.

Today we are excited to share “State-of-the-art Speech Recognition With Sequence-to-Sequence Models [4],” which describes a new end-to-end model that surpasses the performance of a conventional production system [1]. We show that our end-to-end system achieves a word error rate (WER) of 5.6%, which corresponds to a 16% relative improvement over a strong conventional system which achieves a 6.7% WER. Additionally, the end-to-end model used to output the initial word hypothesis, before any hypothesis rescoring, is 18 times smaller than the conventional model, as it contains no separate LM and PM.
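A quick sanity check of the headline numbers:

```python
conventional_wer, e2e_wer = 6.7, 5.6
print((conventional_wer - e2e_wer) / conventional_wer)  # 0.164..., i.e. the ~16% relative gain
```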
machine_learning  google  TensorFlow  research 
9 weeks ago by amy
[1710.05941] Searching for Activation Functions
The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x)=x⋅sigmoid(βx), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
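Swish itself is a one-liner; a NumPy sketch of the formula from the abstract:

```python
import numpy as np

def swish(x, beta=1.0):
    """f(x) = x * sigmoid(beta * x); beta can be fixed or learned per layer."""
    return x / (1.0 + np.exp(-beta * x))

x = np.linspace(-5, 5, 11)
print(swish(x))  # smooth and non-monotonic near zero, ReLU-like for large x
```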
machine_learning  google  TensorFlow 
9 weeks ago by amy
[1611.01578] Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
machine_learning  google  TensorFlow 
10 weeks ago by amy
[1703.01041] Large-Scale Evolution of Image Classifiers
Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6% (95.6% for ensemble) and 77.0%, respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.
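The tournament-style loop the paper scales up can be sketched in a few lines of Python. The fitness function below is a hypothetical stand-in; in the paper, fitness is the validation accuracy of a fully trained model, and mutations edit network architectures rather than a vector of floats:

```python
import random

def fitness(arch):
    return -sum((g - 0.5) ** 2 for g in arch)  # hypothetical stand-in objective

def mutate(arch):
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = random.random()  # analogous to the paper's mutation operators
    return child

population = [[random.random() for _ in range(8)] for _ in range(20)]
for step in range(1000):
    a, b = random.sample(population, 2)        # tournament of two individuals
    worse, better = sorted((a, b), key=fitness)
    population.remove(worse)                   # the loser is removed...
    population.append(mutate(better))          # ...and the winner reproduces

print(max(fitness(arch) for arch in population))
```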
machine_learning  google  TensorFlow 
10 weeks ago by amy
[1706.03762] Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
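The building block the whole architecture rests on is scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V; a NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The Transformer's core op: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

Q = np.random.randn(3, 8)   # 3 query positions, d_k = 8
K = np.random.randn(5, 8)   # 5 key/value positions
V = np.random.randn(5, 16)
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)
```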
machine_learning  google  TensorFlow  attention  nlp 
11 weeks ago by amy
[1602.02215] Swivel: Improving Embeddings by Noticing What's Missing
We present Submatrix-wise Vector Embedding Learner (Swivel), a method for generating low-dimensional feature embeddings from a feature co-occurrence matrix. Swivel performs approximate factorization of the point-wise mutual information matrix via stochastic gradient descent. It uses a piecewise loss with special handling for unobserved co-occurrences, and thus makes use of all the information in the matrix. While this requires computation proportional to the size of the entire matrix, we make use of vectorized multiplication to process thousands of rows and columns at once to compute millions of predicted values. Furthermore, we partition the matrix into shards in order to parallelize the computation across many nodes. This approach results in more accurate embeddings than can be achieved with methods that consider only observed co-occurrences, and can scale to much larger corpora than can be handled with sampling methods.
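A toy version of the underlying idea, minus Swivel's sharding and its piecewise loss for unobserved co-occurrences (the counts are made up and the smoothing is a simplification):

```python
import numpy as np

counts = np.array([[10., 2., 0.],
                   [2., 8., 1.],
                   [0., 1., 6.]])          # made-up co-occurrence counts
C = counts + 1.0                           # add-one smoothing keeps the log finite
total = C.sum()
row = C.sum(axis=1, keepdims=True)
col = C.sum(axis=0, keepdims=True)
pmi = np.log(C * total / (row * col))      # pointwise mutual information matrix

d, lr = 2, 0.1
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((3, d))      # row (word) embeddings
V = 0.1 * rng.standard_normal((3, d))      # column (context) embeddings
for _ in range(2000):                      # gradient descent on 0.5*||U V^T - PMI||^2
    err = U @ V.T - pmi
    U, V = U - lr * err @ V, V - lr * err.T @ U
print(np.abs(U @ V.T - pmi).max())         # small residual: rank-2 fit of a 3x3 PMI
```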
machine_learning  google 
november 2017 by amy
[1707.07012] Learning Transferable Architectures for Scalable Image Recognition
Developing image classification models often requires significant architecture engineering. In this paper, we attempt to automate this engineering process by learning the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to learn an architectural building block on a small dataset that can be transferred to a large dataset. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this learned cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters. Although the cell is not learned directly on ImageNet, an architecture constructed from the best learned cell achieves, among the published work, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS -- a reduction of 28% from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of networks constructed from the cells exceed those of the state-of-the-art human-designed models. For instance, a smaller network constructed from the best cell also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features used with the Faster-RCNN framework surpass state-of-the-art by 4.0%, achieving 43.1% mAP on the COCO dataset.
machine_learning  google 
november 2017 by amy
Research Blog: AutoML for large scale image classification and object detection
A few months ago, we introduced our AutoML project, an approach that automates the design of machine learning models. While we found that AutoML can design small neural networks that perform on par with neural networks designed by human experts, these results were constrained to small academic datasets like CIFAR-10 and Penn Treebank. We became curious how this method would perform on larger, more challenging datasets, such as ImageNet image classification and COCO object detection. Many state-of-the-art machine learning architectures have been invented by humans to tackle these datasets in academic competitions.

In Learning Transferable Architectures for Scalable Image Recognition, we apply AutoML to the ImageNet image classification and COCO object detection datasets -- two of the most respected large-scale academic datasets in computer vision. These two datasets pose a great challenge for us because they are orders of magnitude larger than the CIFAR-10 and Penn Treebank datasets. For instance, naively applying AutoML directly to ImageNet would require many months of training.
google  machine_learning 
november 2017 by amy
[1711.00436] Hierarchical Representations for Efficient Architecture Search
We explore efficient neural architecture search methods and present a simple yet powerful evolutionary algorithm that can discover new architectures achieving state-of-the-art results. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches and represents the new state of the art for evolutionary strategies on this task. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the architecture search time from 36 hours down to 1 hour.
machine_learning  google 
november 2017 by amy
Research Blog: Announcing AVA: A Finely Labeled Video Dataset for Human Action Understanding
Teaching machines to understand human actions in videos is a fundamental research problem in Computer Vision, essential to applications such as personal video search and discovery, sports analysis, and gesture interfaces. Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognizing human actions still remains a big challenge. This is due to the fact that actions are, by nature, less well-defined than objects in videos, making it difficult to construct a finely labeled action video dataset. And while many benchmarking datasets, e.g., UCF101, ActivityNet and DeepMind’s Kinetics, adopt the labeling scheme of image classification and assign one label to each video or video clip in the dataset, no dataset exists for complex scenes containing multiple people who could be performing different actions.

In order to facilitate further research into human action recognition, we have released AVA, coined from "atomic visual actions", a new dataset that provides multiple action labels for each person in extended video sequences. AVA consists of URLs for publicly available videos from YouTube, annotated with a set of 80 atomic actions (e.g. "walk", "kick (an object)", "shake hands") that are spatiotemporally localized, resulting in 57.6k video segments, 96k labeled humans performing actions, and a total of 210k action labels. You can browse the website to explore the dataset and download annotations, and read our arXiv paper that describes the design and development of the dataset.
machine_learning  google 
october 2017 by amy
[1709.07417] Neural Optimizer Search with Reinforcement Learning
We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain specific language that describes a mathematical update equation based on a list of primitive functions, such as the gradient, running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that are better than many commonly used optimizers, such as Adam, RMSProp, or SGD with and without Momentum on a ConvNet model. We introduce two new optimizers, named PowerSign and AddSign, which we show transfer well and improve training on a variety of different tasks and architectures, including ImageNet classification and Google's neural machine translation system.
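The two discovered optimizers have compact update rules; a NumPy sketch of AddSign as the paper describes it (PowerSign swaps the (1 + sign·sign) scale for exp(sign·sign); the learning rate and decay here are illustrative):

```python
import numpy as np

def addsign_update(w, g, m, lr=0.01, beta=0.9):
    """One AddSign step: scale the gradient up when its sign agrees with the
    running average m of past gradients, down when it disagrees."""
    m = beta * m + (1 - beta) * g               # running average of gradients
    scale = 1.0 + np.sign(g) * np.sign(m)       # 2 on agreement, 0 on disagreement
    return w - lr * scale * g, m

w, m = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(100):
    g = 2 * w                                   # gradient of ||w||^2
    w, m = addsign_update(w, g, m)
print(w)                                        # both coordinates shrink toward 0
```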
TensorFlow  machine_learning  RL  google 
september 2017 by amy
NYTimes/marvin: A go-kit HTTP server for the App Engine Standard Environment
Marvin is a go-kit server for Google App Engine

Marvin + GAE -> let's get it oooonnnn!

:insert adorable Marvin Gaye inspired gopher here:

Marvin provides common tools and structure for services being built on Google App Engine by leaning heavily on the go-kit/kit/transport/http package. The service interface here is very similar to the service interface in NYT's gizmo/server/kit package so teams can build very similar looking software but use vastly different styles of infrastructure.

Marvin has been built to work with Go 1.8, currently in open beta on App Engine Standard. Use it by setting api_version: go1.8 in your app.yaml.
gae  google  gcp  gokit  golang 
august 2017 by amy
A new machine learning app for reporting on hate in America
Hate crimes in America have historically been difficult to track since there is very little official data collected. What data does exist is incomplete and not very useful for reporters keen to learn more. This led ProPublica — with the support of the Google News Lab — to form Documenting Hate earlier this year, a collaborative reporting project that aims to create a national database for hate crimes by collecting and categorizing news stories related to hate crime attacks and abuses from across the country.

Now, with ProPublica, we are launching a new machine learning tool to help journalists covering hate news leverage this data in their reporting.
google  machine_learning 
august 2017 by amy
Research Blog: An Update to Open Images - Now with Bounding-Boxes
Last year we introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning over 6000 object categories, designed to be a useful dataset for machine learning research. The initial release featured image-level labels automatically produced by a computer vision model similar to Google Cloud Vision API, for all 9M images in the training set, and a validation set of 167K images with 1.2M human-verified image-level labels.

Today, we introduce an update to Open Images, which adds a total of ~2M bounding-boxes to the existing dataset, along with several million additional image-level labels. Details include:
- 1.2M bounding-boxes around objects for 600 categories on the training set. These have been produced semi-automatically by an enhanced version of the technique outlined in [1], and are all human-verified.
- Complete bounding-box annotation for all object instances of the 600 categories on the validation set, all manually drawn (830K boxes). The bounding-box annotations in the training and validation sets will enable research on object detection on this dataset. The 600 categories offer a broader range than those in the ILSVRC and COCO detection challenges, and include new objects such as fedora hat and snowman.
- 4.3M human-verified image-level labels on the training set (over all categories). This will enable large-scale experiments on object classification, based on a clean training set with reliable labels.
machine_learning  google 
july 2017 by amy
Research Blog: Federated Learning: Collaborative Machine Learning without Centralized Training Data
Standard machine learning approaches require centralizing the training data on one machine or in a datacenter. And Google has built one of the most secure and robust cloud infrastructures for processing this data to make our services better. Now for models trained from user interaction with mobile devices, we're introducing an additional approach: Federated Learning.

Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. This goes beyond the use of local models that make predictions on mobile devices (like the Mobile Vision API and On-Device Smart Reply) by bringing model training to the device as well.
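A minimal sketch of the federated averaging idea the post describes, on a toy least-squares problem (all names and numbers are illustrative): each simulated device runs a few steps of SGD on its private data, and only the resulting weights, never the data, are averaged centrally.

```python
import numpy as np

def local_update(weights, x, y, lr=0.1, steps=10):
    w = weights.copy()
    for _ in range(steps):                 # plain least-squares SGD, on-device
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                         # five phones with private data
    x = rng.normal(size=(20, 2))
    clients.append((x, x @ true_w + 0.01 * rng.normal(size=20)))

global_w = np.zeros(2)
for round_ in range(20):                   # each round: train locally, then average
    local_ws = [local_update(global_w, x, y) for x, y in clients]
    global_w = np.mean(local_ws, axis=0)   # the federated averaging step
print(global_w)                            # approaches [2, -1] without pooling data
```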
google  machine_learning 
july 2017 by amy
Gradient Ventures
Our AI-focused venture fund invests in and connects startups with Google’s resources, innovation, and technical leadership in artificial intelligence.
We bring Google smarts and scale to promising, early-stage startups, providing innovators with funding, resources, and dedicated access to world-class people and practices in this area. Our fund focuses on helping founders navigate the challenges in developing AI-based products, from leveraging training datasets to helping companies take advantage of the latest techniques, so that great ideas can come to life.
google  startups  AI  machine_learning 
july 2017 by amy
"Distroless" Docker Images

"Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells any other programs you would expect to find in a standard Linux distribution.
docker  google  container 
june 2017 by amy
[1706.05137] One Model To Learn Them All
Deep learning yields great results across many fields, from speech recognition and image classification to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all.
TensorFlow  machine_learning  google 
june 2017 by amy
[1602.07261] Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge.
machine_learning  google 
june 2017 by amy
Spinnaker 1.0 Enables Continuous Delivery of Cloud Code Updates
Google has released a new version of Spinnaker, an open-source software release management platform for deploying application code to the cloud.

Video streaming giant Netflix originally developed the technology to enable continuous delivery of software updates to its hosted applications and services on Amazon's cloud platform.

In 2014 Netflix began working with Google and a couple of other companies to make Spinnaker a multi-cloud platform. A year later it released the technology to the open source community. Besides Netflix and Google, other companies currently using Spinnaker to speed up cloud application delivery and deployment include Target, Waze and Oracle.

Spinnaker 1.0, which Google announced this week, builds on the work that has been put into the technology thus far. It is a continuous delivery platform that is designed to make it easier for organizations to deliver applications and software code to Google's cloud platform and to cloud services from other providers as well.
oss  spinnaker  netflix  google 
june 2017 by amy
Google-funded ‘super sensor’ project brings IoT powers to dumb appliances | TechCrunch
Google-funded ‘super sensor’ project brings IoT powers to dumb appliances
iot  research  google 
may 2017 by amy
Invisible Corporations, Part Two – Latent Content
This is a tall tale of two companies: On the surface, Monolithoogle bears a mild resemblance to Google. Distributazon has some things in common with Amazon. These similarities are anecdotal and superficial. Accuracy is not the goal. At best, I am attempting to develop hyperbolic caricatures of the companies as archetypes. The ideas here evolved from a conversation I had with a good friend.

This is the second installment of Invisible Corporations.
business  culture  google  amazon 
may 2017 by amy
1,000 Genomes — Google Genomics v1 documentation
1,000 Genomes
This dataset comprises roughly 2,500 genomes from 25 populations around the world.
google  genomics  machine_learning 
may 2017 by amy
Using Web Preview  |  Cloud Shell  |  Google Cloud Platform
This page describes how to use the web preview feature in Google Cloud Shell. This feature allows you to run web applications on the Cloud Shell virtual machine instance and preview them from the Google Cloud Platform Console.
gcp  google 
may 2017 by amy
Research Blog: Open sourcing the Embedding Projector: a tool for visualizing high dimensional data
To enable a more intuitive exploration process, we are open-sourcing the Embedding Projector, a web application for interactive visualization and analysis of high-dimensional data, recently shown as an A.I. Experiment and included as part of TensorFlow. We are also releasing a standalone version, where users can visualize their high-dimensional data without the need to install and run TensorFlow.
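For the TensorBoard-embedded version, the workflow is roughly: checkpoint an embedding variable and point a ProjectorConfig at it. A sketch assuming the TF 1.x tensorflow.contrib.tensorboard.plugins.projector API (paths, sizes, and the metadata file are illustrative):

```python
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector  # TF 1.x contrib API

embeddings = tf.Variable(tf.random_normal([10000, 128]), name='embeddings')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver([embeddings]).save(sess, 'logs/embeddings.ckpt')

config = projector.ProjectorConfig()
emb = config.embeddings.add()
emb.tensor_name = embeddings.name
emb.metadata_path = 'metadata.tsv'     # optional per-point labels
projector.visualize_embeddings(tf.summary.FileWriter('logs'), config)
# Then: tensorboard --logdir=logs and open the Projector tab.
```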
TensorFlow  machine_learning  google 
may 2017 by amy