Submitting TensorFlow jobs
Kubernetes Custom Resource Definitions (CRDs) allow you to define custom object types with their own names and schemas. This is what we are going to use to submit TensorFlow jobs to our cluster.
Luckily, the Kubeflow Core installation step already created the CRD so we can immediately submit models as ksonnet components by using the generate/apply pair of commands.
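If you want to double-check that the CRD is in place before submitting anything, you can list the cluster's custom resource definitions (the exact CRD name varies between Kubeflow versions, so this is just a generic query):
kubectl get customresourcedefinitions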
The job we are going to deploy is tf-cnn, a convolutional neural network (CNN) example shipped with Kubeflow (GKE users should substitute gke for cdk throughout):
ks generate tf-cnn kubeflow-test --name=cdk-tf-cnn --namespace=kf-tutorial
ks apply cdk -c kubeflow-test
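If you want to tweak the job before applying it, ksonnet lets you inspect and override a component's parameters. The parameter names exposed by the tf-cnn prototype depend on the Kubeflow release, so list them first; the name parameter below is illustrative:
ks param list kubeflow-test
ks param set kubeflow-test name cdk-tf-cnn-v2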
We can check that a resource of type "tfjob" was indeed submitted into the "kf-tutorial" namespace:
kubectl get tfjobs --namespace=kf-tutorial
This should return the following (the job name will be gke-tf-cnn on GKE):
NAME         AGE
cdk-tf-cnn   1m
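kubectl describe works on custom resources as well, and is handy for seeing the job's events and replica statuses:
kubectl describe tfjob cdk-tf-cnn --namespace=kf-tutorial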
You can also find the components of the TensorFlow job in the "Jobs" section of your Kubernetes Dashboard. The following image shows the Parameter Server and Worker components on GKE. CDK has a Master component in addition to these two:
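If you prefer the command line, you can list the pods that make up the job directly; the random suffixes in the pod names are generated by Kubernetes, so yours will differ from the ones shown below:
kubectl get pods --namespace=kf-tutorial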
Once all pods have been deployed, we can verify the CNN job is running properly by inspecting the logs of the worker pod. The following command shows the output from our CDK deployment (substitute your own worker pod name):
kubectl logs --namespace=kf-tutorial -f cdk-tf-cnn-worker-rptp-0-wjdph
The tail of the log should show the benchmark job starting:
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| TensorFlow: 1.5
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Model: resnet50
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Mode: training
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| SingleSess: False
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Batch size: 32 global
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| 32 per device
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Devices: ['/job:worker/task:0/cpu:0']
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Data format: NHWC
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Optimizer: sgd
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Variables: parameter_server
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Sync: True
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| ==========
INFO|2017-12-19T01:12:17|/opt/launcher.py|27| Generating model
INFO|2017-12-19T01:12:21|/opt/launcher.py|27| 2017-12-19 01:12:21.230800: I tensorflow/core/distributed_runtime/master_session.cc:1008] Start master session 8ba56f373a0872fb with config: intra_op_parallelism_threads: 1 gpu_options { force_gpu_compatible: true } allow_soft_placement: true
INFO|2017-12-19T01:12:22|/opt/launcher.py|27| Running warm up
There it is! Congratulations, you have successfully launched Kubeflow on top of either CDK on AWS or GKE (or both!).
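If you want to keep monitoring the run from here, you can watch the custom resource for status changes:
kubectl get tfjobs --namespace=kf-tutorial --watch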
You can check the job's parameters using the ks show command:
ks show cdk -c kubeflow-test
The above will return the following on CDK, and be similar on GKE:
---
apiVersion: tensorflow.org/v1alpha1
kind: TfJob
metadata:
  name: cdk-tf-cnn
  namespace: kf-tutorial
spec:
  replicaSpecs:
  - replicas: 1
    template:
      spec:
        containers:
        - args:
          - python
          - tf_cnn_benchmarks.py
          - --batch_size=32
          - --model=resnet50
          - --variable_update=parameter_server
          - --flush_stdout=true
          - --num_gpus=1
          - --local_parameter_device=cpu
          - --device=cpu
          - --data_format=NHWC
          image: gcr.io/kubeflow/tf-benchmarks-cpu:v20171202-bdab599-dirty-284af3
          name: tensorflow
          workingDir: /opt/tf-benchmarks/scripts/tf_cnn_benchmarks
        restartPolicy: OnFailure
    tfReplicaType: MASTER
  - replicas: 1
    template:
      spec:
        containers:
        - args:
          - python
          - tf_cnn_benchmarks.py
          - --batch_size=32
          - --model=resnet50
          - --variable_update=parameter_server
          - --flush_stdout=true
          - --num_gpus=1
          - --local_parameter_device=cpu
          - --device=cpu
          - --data_format=NHWC
          image: gcr.io/kubeflow/tf-benchmarks-cpu:v20171202-bdab599-dirty-284af3
          name: tensorflow
          workingDir: /opt/tf-benchmarks/scripts/tf_cnn_benchmarks
        restartPolicy: OnFailure
    tfReplicaType: WORKER
  - replicas: 1
    template:
      spec:
        containers:
        - args:
          - python
          - tf_cnn_benchmarks.py
          - --batch_size=32
          - --model=resnet50
          - --variable_update=parameter_server
          - --flush_stdout=true
          - --num_gpus=1
          - --local_parameter_device=cpu
          - --device=cpu
          - --data_format=NHWC
          image: gcr.io/kubeflow/tf-benchmarks-cpu:v20171202-bdab599-dirty-284af3
          name: tensorflow
          workingDir: /opt/tf-benchmarks/scripts/tf_cnn_benchmarks
        restartPolicy: OnFailure
    tfReplicaType: PS
  tfImage: gcr.io/kubeflow/tf-benchmarks-cpu:v20171202-bdab599-dirty-284af3
As you can see, no GPUs are used by default: the --device=cpu argument forces CPU execution, and the job runs the CPU build of the Docker image (tf-benchmarks-cpu). In a follow-up tutorial, we will build on this guide to add GPU-accelerated TensorFlow workers to your cluster and expose them via the CRD interface.
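As a preview, and purely as a sketch, switching a replica to GPUs would mean changing the device flags, using a GPU build of the image, and requesting a GPU from Kubernetes. The image tag and the GPU resource name below are assumptions that depend on your Kubeflow release and your cluster's GPU support:
containers:
- args:
  - python
  - tf_cnn_benchmarks.py
  - --num_gpus=1
  - --device=gpu
  - --data_format=NCHW
  image: gcr.io/kubeflow/tf-benchmarks-gpu:<tag>  # hypothetical GPU build of the benchmark image
  name: tensorflow
  resources:
    limits:
      nvidia.com/gpu: 1  # resource name depends on your cluster's GPU device plugin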
To clean up the Kubeflow deployment on the cluster, delete the component with the ks delete command. On CDK, enter the following:
ks delete cdk -c kubeflow-test
The equivalent on GKE is to delete the tutorial namespace, which removes everything in it:
kubectl delete ns kf-tutorial
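Either way, you can confirm the cleanup worked by re-running the earlier query, which should no longer list the job:
kubectl get tfjobs --namespace=kf-tutorial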
Congratulations! You're ready to rock 'n' roll using Kubeflow on CDK and GKE!