
Commit 42088cb

new notebooks, cleanup & updates (#73)

* added notebooks, reverted to `tf.train.AdamOptimizer`
* readme work & various cleanup, better prediction output, updated docker image

1 parent 8e4a9b5 commit 42088cb

File tree

20 files changed: +1189 −217 lines changed

INSTALL.md

Lines changed: 6 additions & 4 deletions

````diff
@@ -1,4 +1,6 @@
 
+**Note**: you do not need to follow these instructions for the [O'Reilly AI Conf workshop](https://conferences.oreilly.com/artificial-intelligence/ai-ca/public/schedule/detail/60305). For that workshop, you just need [TensorFlow installed](https://www.tensorflow.org/install/) on your laptop.
+
 # Installation instructions for the TensorFlow/Cloud ML workshop
 
 - [Project and Cloud ML setup](#project-and-cloud-ml-setup)
@@ -52,7 +54,7 @@ If you like, you can start up a Google Compute Engine (GCE) VM with docker insta
 Once Docker is installed and running, download the workshop image:
 
 ```sh
-$ docker pull gcr.io/google-samples/tf-workshop:v6
+$ docker pull gcr.io/google-samples/tf-workshop:v7
 ```
 
 [Here's the Dockerfile](https://github.com/amygdala/tensorflow-workshop/tree/master/workshop_image) used to build this image.
@@ -67,7 +69,7 @@ Once you've downloaded the container image, you can run it like this:
 
 ```sh
 docker run -v `pwd`/workshop-data:/root/tensorflow-workshop-master/workshop-data -it \
-  -p 6006:6006 -p 8888:8888 -p 5000:5000 gcr.io/google-samples/tf-workshop:v6
+  -p 6006:6006 -p 8888:8888 -p 5000:5000 gcr.io/google-samples/tf-workshop:v7
 ```
 
 Edit the path to the directory you're mounting as appropriate. The first component of the `-v` arg is the local directory, and the second component is where you want to mount it in your running container.
@@ -80,7 +82,7 @@ For the second two, you will get a URL to paste in your browser, to obtain an au
 ```shell
 gcloud config set project <your-project-name>
 gcloud auth login
-gcloud beta auth application-default login
+gcloud auth application-default login
 ```
 
 ### Restarting the container later
@@ -105,7 +107,7 @@ $ docker exec -it <container_id> bash
 
 ### Running the Docker container on a VM
 
-It is easy to set up a Google Compute Engine (GCE) VM on which to run the Docker container. We sketch the steps below, or see [TLDR_CLOUD_INSTALL.md](TLDR_CLOUD_INSTALL.md) for more detail.
+It is easy to set up a Google Compute Engine (GCE) VM on which to run the Docker container. We sketch the steps below.
 
 First, make sure that your project has the GCE API enabled. An easy way to do this is to go to the [Cloud Console](https://console.cloud.google.com/), and visit the Compute Engine panel. It should display a button to enable the API.
````
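The `-v` mapping used by the `docker run` command in the INSTALL.md diff above puts the host directory on the left of the colon and the in-container mount point on the right. A minimal sketch of that pattern, not from the commit itself; `HOST_DIR` is an assumed local path:

```shell
# Host side: an assumed directory under the current working directory.
HOST_DIR="$(pwd)/workshop-data"
# Container side: where the workshop image expects the data (from the diff above).
CONTAINER_DIR="/root/tensorflow-workshop-master/workshop-data"
# The composed -v argument, as it would be passed to `docker run`.
VOLUME_ARG="-v ${HOST_DIR}:${CONTAINER_DIR}"
echo "${VOLUME_ARG}"
```

The `-p` flags are read the same way (host port on the left, container port on the right): 8888 is Jupyter's default port and 6006 is TensorBoard's.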

README.md

Lines changed: 1 addition & 0 deletions

````diff
@@ -14,6 +14,7 @@ This document points to more information for each workshop lab.
 
 - [Building a small starter TensorFlow graph](workshop_sections/getting_started/starter_tf_graph/README.md)
 - [XOR: A minimal training example](workshop_sections/getting_started/xor/README.md)
+- A [LinearRegressor example](workshop_sections/linear_regressor_datasets) that uses Datasets.
 
 ## The MNIST (& 'fashion MNIST') series
 
````

TLDR_CLOUD_INSTALL.md

Lines changed: 0 additions & 182 deletions
This file was deleted.

workshop_image/Dockerfile

Lines changed: 3 additions & 3 deletions

````diff
@@ -22,13 +22,13 @@ RUN apt-get install -y python-scipy
 RUN pip install sklearn nltk pillow setuptools
 RUN pip install flask google-api-python-client
 RUN pip install pandas python-snappy scipy scikit-learn requests uritemplate
-RUN pip install --upgrade --force-reinstall https://storage.googleapis.com/cloud-ml/sdk/cloudml.latest.tar.gz
+# RUN pip install --upgrade --force-reinstall https://storage.googleapis.com/cloud-ml/sdk/cloudml.latest.tar.gz
 
 # RUN python -c "import nltk; nltk.download('punkt')"
 
-RUN curl https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-159.0.0-linux-x86_64.tar.gz | tar xvz
+RUN curl https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-171.0.0-linux-x86_64.tar.gz | tar xvz
 RUN ./google-cloud-sdk/install.sh -q
-RUN ./google-cloud-sdk/bin/gcloud components install beta
+# RUN ./google-cloud-sdk/bin/gcloud components install beta
 
 ADD download_git_repo.py download_git_repo.py
 ENV PATH="${PATH}:/root/google-cloud-sdk/bin"
````
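The Dockerfile change above bumps the pinned Cloud SDK from 159.0.0 to 171.0.0 by editing the version embedded in the download URL. A sketch of that pattern, where `SDK_VERSION` is the only part that changes between the old and new lines:

```shell
# The SDK version pinned by the updated Dockerfile line above.
SDK_VERSION="171.0.0"
# The download URL follows a fixed pattern; only the version varies.
SDK_URL="https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${SDK_VERSION}-linux-x86_64.tar.gz"
echo "${SDK_URL}"
```

The commit also comments out the `gcloud components install beta` step, which is consistent with the docs dropping the `gcloud beta ...` command forms elsewhere in this diff.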

workshop_sections/README.md

Lines changed: 2 additions & 0 deletions

````diff
@@ -11,6 +11,8 @@ This directory contains the workshop labs.
 
 - [Building a small starter TensorFlow graph](getting_started/starter_tf_graph/README.md)
 - [XOR: A minimal training example](getting_started/xor/README.md)
+- A [LinearRegressor example](linear_regressor_datasets) that uses Datasets.
+
 
 ## The MNIST (& 'fashion MNIST') series
 
````
workshop_sections/getting_started/xor/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -14,7 +14,7 @@ To run locally, first choose either:
 python -m xor.xor_summaries --output-dir ${OUTPUT_DIR}
 ```
 
-or if you want to try using the `gcloud beta ml local train` command the following will work:
+or if you want to try using the `gcloud ml-engine local train` command, the following will work:
 
 ```
 gcloud ml-engine local train \
@@ -24,7 +24,7 @@ gcloud ml-engine local train \
 --output-dir ${OUTPUT_DIR}
 ```
 
-The `gcloud beta ml local train` command runs your python code locally in an environment that emulates that of the Google Cloud Machine Learning API. This is primarily useful for distributed execution, where the local tool can serve as validation, that your code will run properly.
+The `gcloud ml-engine local train` command runs your python code locally in an environment that emulates that of the Google Cloud Machine Learning API. This is primarily useful for distributed execution, where the local tool can serve as validation that your code will run properly.
 
 ### Running in the Cloud
 
````
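As the updated xor README text notes, `gcloud ml-engine local train` runs your Python package locally, much like invoking the module directly. A rough sketch of that correspondence, with assumed values for the module name and output directory (taken from the first option in the diff, not from the gcloud invocation):

```shell
# Assumed values for illustration.
MODULE_NAME="xor.xor_summaries"
OUTPUT_DIR="/tmp/xor-output"
# `gcloud ml-engine local train` ultimately runs your module locally,
# roughly equivalent to a direct invocation like this:
LOCAL_CMD="python -m ${MODULE_NAME} --output-dir ${OUTPUT_DIR}"
echo "${LOCAL_CMD}"
```

The value of the gcloud wrapper over a direct `python -m` run is that it emulates the Cloud ML environment, which matters mainly when validating distributed execution.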

workshop_sections/mnist_series/README.md

Lines changed: 1 addition & 2 deletions

````diff
@@ -8,5 +8,4 @@ Each builds conceptually on the previous ones. Start at the README numbered '01
 
 - [02_README_mnist_estimator](./02_README_mnist_estimator.md): Use the high-level TensorFlow APIs in `tf.estimator` to easily build a `LinearClassifier` and a `DNNClassifier` with hidden layers. Introducing TensorBoard.
 
-**Details TBD**: Building a CNN Custom Estimator: both TensorFlow and [Keras](https://keras.io/) versions. Run locally or do (distributed) training on CMLE.
-[mnist_cnn_custom_estimator](mnist_cnn_custom_estimator).
+- [Building Custom CNN Estimators](mnist_cnn_custom_estimator): where 'canned' Estimators aren't available, you can build a custom one, to get all the advantages of using an Estimator, including support for distributed training. Examples show how to do this with both Keras and TF layers.
````
