
Cloud poetry: training and hyperparameter tuning custom text models on Cloud ML Engine | Google Cloud Blog
Machine learning models for interpreting and processing natural language have made tremendous advances in recent years thanks to deep learning methods. Recurrent models continue to be a common choice for textual data, but newer models based on fully convolutional architectures like ByteNet, and even more recently models based on attention like the Transformer, have yielded impressive results. All this complexity—added to the fast pace of research—has made it hard to keep current and apply the latest methods to your own problems.

This is why the open-sourcing of Tensor2Tensor (T2T), a library of best-in-class machine learning models by the Google Brain team, was so exciting—you now have at your disposal a standard interface that ties together all the pieces needed in a deep learning system: datasets, model architectures, optimizers, and hyperparameters in a coherent and standardized way that enables you to try many models on the same dataset, or apply the same model to many datasets.

Now that we’ve established that the software tools exist, how should you go about setting up a training environment to launch many experiments? In this blog post, we provide a tutorial on how to use T2T and Google Cloud ML Engine, Google Cloud Platform’s fully managed ML service, to train a text-to-text model on your own data. With T2T and ML Engine, you won’t have to manage any infrastructure for training or hyperparameter tuning. You will be able to train a sophisticated, custom natural language model from just a Jupyter notebook.

Throughout this blog post, we will examine code blocks from this Jupyter notebook—we strongly encourage you to fire up Cloud Datalab (you don’t need an instance with a GPU because we’ll submit jobs to Cloud ML Engine) and try out the notebook on Google Cloud Platform.
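For a taste of what the notebook's T2T code looks like, here is a minimal sketch of registering a custom text-to-text problem; the class name, vocabulary size, and data file below are illustrative, not the notebook's exact code:

from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

@registry.register_problem
class PoetryLineProblem(text_problems.Text2TextProblem):
    """Illustrative problem: predict the next line of a poem."""

    @property
    def approx_vocab_size(self):
        return 2**13  # ~8k subword vocabulary

    def generate_samples(self, data_dir, tmp_dir, dataset_split):
        # Yield {"inputs": ..., "targets": ...} pairs from your own corpus.
        with open("poetry.txt") as f:  # hypothetical data file
            lines = [line.strip() for line in f if line.strip()]
        for cur, nxt in zip(lines, lines[1:]):
            yield {"inputs": cur, "targets": nxt}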
TensorFlow  machine_learning 
5 weeks ago by amy
Google AI Blog: Accelerating Deep Learning Research with the Tensor2Tensor Library
Deep Learning (DL) has enabled the rapid advancement of many useful technologies, such as machine translation, speech recognition and object detection. In the research community, one can find code open-sourced by the authors to help in replicating their results and further advancing deep learning. However, most of these DL systems use unique setups that require significant engineering effort and may only work for a specific problem or architecture, making it hard to run new experiments and compare the results.

Today, we are happy to release Tensor2Tensor (T2T), an open-source system for training deep learning models in TensorFlow. T2T facilitates the creation of state-of-the-art models for a wide variety of ML applications, such as translation, parsing, image captioning and more, enabling the exploration of various ideas much faster than previously possible. This release also includes a library of datasets and models, including the best models from a few recent papers (Attention Is All You Need, Depthwise Separable Convolutions for Neural Machine Translation and One Model to Learn Them All) to help kick-start your own DL research.
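Once installed, the library's registry makes it easy to browse what ships with it; a quick sketch:

# List the datasets/problems bundled with Tensor2Tensor.
from tensor2tensor import problems

print(problems.available())  # sorted list of registered problem names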
machine_learning  TensorFlow 
5 weeks ago by amy
GoogleCloudPlatform/appengine-tensorboard
Google App Engine can provide an easy way to get a persistent TensorBoard server with authentication for a small cost.

In addition, if you find yourself pulling a large amount of data from GCS when starting up TensorBoard servers, you may actually pay less for a persistent GAE server, since you don't pay for data egress between GCS and GAE.
gae  gcp  TensorFlow  machine_learning 
7 weeks ago by amy
python - Sliding window of a batch in Tensorflow using Dataset API - Stack Overflow
This can be achieved using the sliding window batch operation for tf.data.Dataset:

Example:

import tensorflow as tf
from tensorflow.contrib.data.python.ops import sliding

imgs = tf.constant(['img0', 'img1', 'img2', 'img3', 'img4', 'img5', 'img6', 'img7'])
labels = tf.constant([0, 0, 0, 1, 1, 1, 0, 0])

# create TensorFlow Dataset object
data = tf.data.Dataset.from_tensor_slices((imgs, labels))

# sliding window batch
window = 4
stride = 1
data = data.apply(sliding.sliding_window_batch(window, stride))

# create TensorFlow Iterator object
iterator = tf.data.Iterator.from_structure(data.output_types, data.output_shapes)
next_element = iterator.get_next()

# create initialization ops
init_op = iterator.make_initializer(data)

with tf.Session() as sess:
    # initialize the iterator on the data
    sess.run(init_op)
    while True:
        try:
            elem = sess.run(next_element)
            print(elem)
        except tf.errors.OutOfRangeError:
            print("End of dataset.")
            break
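As an aside: tf.contrib APIs moved around between releases, and in later TensorFlow versions (roughly 1.13 onward) the same effect is available in core via tf.data.Dataset.window; a sketch, reusing the tensors above:

# Sliding window with the core tf.data API (no contrib import needed).
data = tf.data.Dataset.from_tensor_slices((imgs, labels))
data = data.window(size=window, shift=stride, drop_remainder=True)
# window() yields nested datasets; batch each window back into dense tensors.
data = data.flat_map(
    lambda i, l: tf.data.Dataset.zip((i.batch(window), l.batch(window))))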
TensorFlow  machine_learning 
10 weeks ago by amy
Google AI Blog: Introducing a New Framework for Flexible and Reproducible Reinforcement Learning Research

In a new blog post, Google Brain team researchers @pcastr & @marcgbellemare share a new @TensorFlow-based reinforcement learning framework that aims to provide flexibility, stability, and reproducibility for new and experienced RL researchers alike. http://goo.gl/nrsb9n
TensorFlow  machine_learning  google 
11 weeks ago by amy
[1802.05365] Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
(Submitted on 15 Feb 2018 (v1), last revised 22 Mar 2018 (this version, v2))
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
machine_learning  TensorFlow 
11 weeks ago by amy
allenai/bilm-tf: Tensorflow implementation of contextualized word representations from bi-directional language models
Tensorflow implementation of contextualized word representations from bi-directional language models
machine_learning  TensorFlow 
11 weeks ago by amy
tensor2tensor/tensor2tensor/mesh_tensorflow at master · tensorflow/tensor2tensor
Mesh TensorFlow (mtf) is a language for distributed deep learning, capable of specifying a broad class of distributed tensor computations. The purpose of mesh-tensorflow is to formalize and implement distribution strategies for your computation graph over your hardware/processors. For example: "Split the batch over rows of processors and split the units in the hidden layer across columns of processors." Mesh-TensorFlow is implemented as a layer over TensorFlow.
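A rough sketch of the flavor, adapted from the README (the dimension and mesh-axis names here are illustrative):

import mesh_tensorflow as mtf

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "my_mesh")

# Tensor dimensions are named...
batch_dim = mtf.Dimension("batch", 1024)
hidden_dim = mtf.Dimension("hidden", 4096)

# ...and a layout maps named tensor dimensions onto mesh axes, e.g. split
# the batch over rows of processors and hidden units over columns.
layout_rules = mtf.convert_to_layout_rules("batch:rows,hidden:cols")
mesh_shape = mtf.convert_to_shape("rows:2,cols:4")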
TensorFlow  machine_learning 
august 2018 by amy
tensorflow/probability: Probabilistic reasoning and statistical analysis in TensorFlow
TensorFlow Probability is a library for probabilistic reasoning and statistical analysis in TensorFlow. As part of the TensorFlow ecosystem, TensorFlow Probability provides integration of probabilistic methods with deep networks, gradient-based inference via automatic differentiation, and scalability to large datasets and models via hardware acceleration (e.g., GPUs) and distributed computation.
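A small sketch of the flavor (assuming the tensorflow_probability package is installed alongside TF 1.x):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Build a distribution, sample from it, and score the samples.
dist = tfd.Normal(loc=0., scale=1.)
samples = dist.sample(1000)
mean_log_prob = tf.reduce_mean(dist.log_prob(samples))

with tf.Session() as sess:
    print(sess.run(mean_log_prob))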
TensorFlow  statistics  machine_learning  probability 
august 2018 by amy
tf.contrib.eager.defun  |  TensorFlow
Compiles a Python function into a callable TensorFlow graph.
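For example (a minimal sketch, assuming TF 1.x with eager execution enabled):

import tensorflow as tf

tf.enable_eager_execution()

# defun traces the Python function once and runs it as a graph.
@tf.contrib.eager.defun
def dense_layer(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random_normal([4, 8])
w = tf.random_normal([8, 2])
b = tf.zeros([2])
print(dense_layer(x, w, b))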
TensorFlow  machine_learning 
july 2018 by amy
AutoGraph converts Python into TensorFlow graphs – TensorFlow – Medium
We’d like to tell you about a new TensorFlow feature called “AutoGraph”. AutoGraph converts Python code, including control flow, print() and other Python-native features, into pure TensorFlow graph code.
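For instance, in the contrib-era API (a sketch in the spirit of the post's examples, assuming tf.contrib.autograph):

import tensorflow as tf
from tensorflow.contrib import autograph

# Plain Python with data-dependent control flow...
def collatz_steps(n):
    steps = 0
    while n > 1:
        if n % 2 == 0:
            n = n // 2
        else:
            n = 3 * n + 1
        steps += 1
    return steps

# ...converted into a graph-building function.
tf_collatz_steps = autograph.to_graph(collatz_steps)

with tf.Graph().as_default():
    steps = tf_collatz_steps(tf.constant(27))
    with tf.Session() as sess:
        print(sess.run(steps))  # 111 steps for n = 27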
TensorFlow  machine_learning  google 
july 2018 by amy
training-data-analyst/serving_embed.ipynb at master · GoogleCloudPlatform/training-data-analyst
Serving embeddings
This notebook illustrates how to:

- Create a custom embedding as part of a regression/classification model
- Represent categorical variables in different ways
- Do math with feature columns
- Serve out the embedding, as well as the original model's predictions
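As a taste of the feature-column mechanics the notebook builds on, here is a hypothetical sketch (the column names and vocabularies are illustrative, not the notebook's):

import tensorflow as tf

# A categorical column represented as a learned dense embedding.
city = tf.feature_column.categorical_column_with_vocabulary_list(
    'city', vocabulary_list=['NY', 'SF', 'LA'])
city_embed = tf.feature_column.embedding_column(city, dimension=2)

# "Math with feature columns": cross two categoricals, then embed the cross.
day = tf.feature_column.categorical_column_with_identity('day', num_buckets=7)
city_x_day = tf.feature_column.crossed_column([city, day], hash_bucket_size=100)
city_x_day_embed = tf.feature_column.embedding_column(city_x_day, dimension=4)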
TensorFlow  machine_learning  gcp 
july 2018 by amy
tensorflow - train_and_evaluate() batch size with TPU on GCMLE - Stack Overflow
The batch size handling is slightly different between normal Estimator and TPUEstimator.

For a normal Estimator, the batch size is not explicitly visible to the Estimator; instead, it is handled inside the input_fn, as in your example.

For TPU, batch size is handled differently: the "xxx_batch_size" family of constructor arguments (e.g., train_batch_size) in TPUEstimator gives the global batch size for your model. Depending on tf.contrib.tpu.TPUConfig.per_host_input_for_training, TPUEstimator invokes your input_fn in different ways.

Here, params['batch_size'] is the shard batch size, calculated from the train_batch_size passed to the constructor.

A concrete example: say train_batch_size is 64 on a Cloud TPU.

If per_host_input_for_training is False, input_fn is invoked 8 times, once per core (this is called per-core mode). In this case, params['batch_size'] inside input_fn is 64/8 = 8, and the total global batch size your model sees is 64, i.e., the train_batch_size passed via the TPUEstimator constructor.

If per_host_input_for_training is set to True, params['batch_size'] inside input_fn is 64 (not 64/8) and input_fn is called only once per host, so the global batch size is still 64.

The same input_fn works in both cases.

For TPU Pods, the story is the same: params['batch_size'] is the shard batch size with respect to each host.

To summarize:

The global batch size should be passed via the TPUEstimator constructor.

The input_fn should take the shard batch size from params['batch_size'] and use it when building your dataset.
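A minimal sketch of that pattern (the model_fn and the random dataset below are placeholders, not real training code):

import tensorflow as tf

def input_fn(params):
    # params['batch_size'] is the per-shard batch size that TPUEstimator
    # derives from the global train_batch_size.
    batch_size = params['batch_size']
    features = tf.random_normal([1024, 8])  # placeholder data
    labels = tf.random_uniform([1024], maxval=2, dtype=tf.int32)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.repeat().batch(batch_size, drop_remainder=True)

run_config = tf.contrib.tpu.RunConfig(
    tpu_config=tf.contrib.tpu.TPUConfig(per_host_input_for_training=True))

estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=my_model_fn,    # hypothetical model_fn, not shown here
    config=run_config,
    use_tpu=True,
    train_batch_size=64)     # global batch size, as described above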
TensorFlow  machine_learning  TPUs 
june 2018 by amy
Predicting San Francisco Bikeshare availability with TensorFlow and LSTMs
Predicting San Francisco Bikeshare availability with TensorFlow and LSTMs
machine_learning  TensorFlow  LSTMs 
june 2018 by amy
Google AI Blog: Advances in Semantic Textual Similarity
The recent rapid progress of neural network-based natural language understanding research, especially on learning semantic text representations, can enable truly novel products such as Smart Compose and Talk to Books. It can also help improve performance on a variety of natural language tasks which have limited amounts of training data, such as building strong text classifiers from as few as 100 labeled examples.

Below, we discuss two papers reporting recent progress on semantic representation research at Google, as well as two new models available for download on TensorFlow Hub that we hope developers will use to build new and exciting applications.
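One of those downloadable models is the Universal Sentence Encoder; using it from TensorFlow Hub looks roughly like this (a sketch, assuming a TF 1.x setup with the tensorflow_hub package):

import tensorflow as tf
import tensorflow_hub as hub

# Load the Universal Sentence Encoder module from TF Hub.
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
embeddings = embed(["The quick brown fox.", "A fast auburn fox."])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings))  # one 512-dimensional vector per sentence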
TensorFlow  machine_learning  google 
may 2018 by amy
model-analysis/examples/chicago_taxi at master · tensorflow/model-analysis
The Chicago Taxi example demonstrates, step by step, an end-to-end workflow for transforming data, training a model, and analyzing and serving it, using:

- TensorFlow Transform for feature preprocessing
- TensorFlow Estimators for training
- TensorFlow Model Analysis and Jupyter for evaluation
- TensorFlow Serving for serving
The example shows two modes of deployment.

The first is a “local mode” with all necessary dependencies and components deployed locally.
The second is a “cloud mode”, where all components will be deployed on Google Cloud.
In the future we will be showing additional deployment modes, so dear reader, feel free to check back in periodically!
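For a flavor of the first of those steps, a tf.Transform preprocessing function looks roughly like this (a hedged sketch: the feature names, and compute_and_apply_vocabulary from newer tf.Transform releases, are assumptions rather than the example's exact code):

import tensorflow_transform as tft

def preprocessing_fn(inputs):
    # Analyzers such as scale_to_z_score make a full pass over the dataset
    # to compute statistics (mean, variance), then apply them row by row.
    outputs = {}
    outputs['fare_scaled'] = tft.scale_to_z_score(inputs['fare'])
    outputs['company_id'] = tft.compute_and_apply_vocabulary(inputs['company'])
    return outputs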
machine_learning  TensorFlow  google 
april 2018 by amy
google/deepvariant: DeepVariant is an analysis pipeline that uses a deep neural network to call genetic variants from next-generation DNA sequencing data.
DeepVariant is an analysis pipeline that uses a deep neural network to call genetic variants from next-generation DNA sequencing data.
google  genomics  dna  machine_learning  TensorFlow 
march 2018 by amy
SeldonIO/seldon-core: Machine Learning Deployment for Kubernetes
Seldon Core is an open source platform for deploying machine learning models on Kubernetes.
machine_learning  kubernetes  k8s  TensorFlow 
march 2018 by amy
Cyclic computational graphs with Tensorflow or Theano - Stack Overflow
TensorFlow does support cyclic computation graphs. The tf.while_loop() function allows you to specify a while loop with arbitrary subgraphs for the condition and the body of the loop, and the runtime will execute the loop in parallel. The tf.scan() function is a higher-level API that is similar to Theano's theano.scan() function. Both allow you to loop over tensors of dynamic size.
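A minimal TF 1.x sketch of both:

import tensorflow as tf

# tf.scan: running sum over a tensor, analogous to theano.scan.
x = tf.constant([1, 2, 3, 4, 5])
running_sum = tf.scan(lambda acc, v: acc + v, x)

# tf.while_loop: keep squaring i until it reaches at least 100.
result = tf.while_loop(cond=lambda i: tf.less(i, 100),
                       body=lambda i: tf.square(i),
                       loop_vars=[tf.constant(2)])

with tf.Session() as sess:
    print(sess.run(running_sum))  # [ 1  3  6 10 15]
    print(sess.run(result))       # 256 (2 -> 4 -> 16 -> 256)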
TensorFlow 
march 2018 by amy
Twitter
RT: I'm doing a webinar on a development workflow for models written in -- to go from data…
tensorflow  machinelearning  from twitter
march 2018 by amy
Propel ML
Propel provides a GPU-backed numpy-like infrastructure for scientific computing in JavaScript. JavaScript is a fast, dynamic language which, we think, could act as an ideal workflow for scientific programmers of all sorts.
machine_learning  TensorFlow  javascript 
february 2018 by amy
Machine Learning with TensorFlow on Google Cloud Platform: code samples
new courses!: “Machine Learning with TensorFlow on Google Cloud Platform: code samples”

via @lak_gcp
TensorFlow  machine_learning  gcp 
february 2018 by amy
design | Architecture and UX design of KAML-D
KAML-D can be deployed on any cloud (or on-premises) platform that allows you to run Kubernetes. Most of the components are open source. As a SaaS, it integrates with the cloud provider's (user) identity management system; on-prem, with something like LDAP.

Existing open source components KAML-D uses:

- Kubernetes for workload management and to ensure portability
- TensorFlow for machine learning execution
- JupyterHub for data scientists (dev/test of algorithms)
- Storage layer to hold the datasets: Minio or Ceph, as well as cloud-provider-specific offerings such as EBS, with built-in dotmesh support for snapshots

New components KAML-D introduces:

- KAML-D Workbench: a graphical UI for data scientists, data engineers, developers, and SREs to manage datasets as well as to test and deploy ML algorithms. Builds on the metadata layer to find and visualize datasets. Builds on the storage layer to store and load datasets.
- KAML-D Metadata Hub: a data and metadata layer using PrestoDB and Elasticsearch for indexing and querying datasets.
- KAML-D Observation Hub: a comprehensive observability suite for SREs and admins (as well as developers on the app level) to understand the health of the KAML-D platform and troubleshoot issues on the platform and application level:
  - Prometheus and Grafana for end-to-end metrics and monitoring/alerting
  - EFK stack for (aggregated) logging
  - Jaeger for (distributed) tracing

The user management and access control part is outside the scope of KAML-D, but standard integration points such as LDAP are offered.
machine_learning  kubernetes  TensorFlow 
february 2018 by amy
nikhilk/node-tensorflow: Node.js + TensorFlow
TensorFlow is Google's machine learning runtime. It is implemented as a C++ runtime, along with a Python framework to support building a variety of models, especially neural networks for deep learning.

From Nikhil

It is interesting to be able to use TensorFlow in a node.js application using just JavaScript (or TypeScript if that's your preference). However, the Python functionality is vast (several ops, estimator implementations, etc.) and continually expanding. Instead, it is more practical to build graphs and train models in Python, and then consume those for runtime use-cases (like prediction or inference) in a pure node.js and Python-free deployment. This is what this node module enables.

This module takes care of the building blocks and mechanics for working with the TensorFlow C API, and instead provides an API around Tensors, Graphs, Sessions and Models.

This is still in the works, and recently revamped to support TensorFlow 1.4+.
machinelearning  TensorFlow 
february 2018 by amy
Forbes: 12 Amazing Deep Learning Breakthroughs of 2017
1. DeepMind’s AlphaZero Clobbered The Top AI Champions In Go, Shogi, And Chess
2. OpenAI’s Universe Gained Traction With High-Profile Partners
3. Sonnet & TensorFlow Eager Joined Their Fellow Open-Source Frameworks
4. Facebook & Microsoft Joined Forces To Enable AI Framework Interoperability
5. Unity Enabled Developers To Easily Build Intelligent Agents In Games
6. Machine Learning As A Service (MLAAS) Platforms Sprout Up Everywhere
7. The GAN Zoo Continued To Grow
8. Who Needs Recurrence Or Convolution When You Have Attention? (Transformer)
9. AutoML Simplified The Lives Of Data Scientists & Machine Learning Engineers
10. Hinton Declared Backprop Dead, Finally Dropped His Capsule Networks
11. Quantum & Optical Computing Entered The AI Hardware Wars
12. Ethics & Fairness Of ML Systems Took Center Stage
machine_learning  TensorFlow  google  gcp 
february 2018 by amy