Correct! We're currently thinking a lot about orchestration of the various components, but for now our goal is to use the native loose coupling between services that Kubernetes provides. So if you wanted Spark for data processing, for example, you could start a Spark service and deployment, then feed its output into the TF CRD.
The following are included:
- A JupyterHub to create & manage interactive Jupyter notebooks.
- A TensorFlow Training Controller that can be configured to use CPUs or GPUs, and scaled to the size of the cluster with a single setting.
- A TF Serving container.
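For concreteness, here's a rough sketch of what a manifest for the training CRD might look like. The image name is hypothetical, and the field names (`tfReplicaSpecs`, `replicas`) follow the later `kubeflow.org/v1` TFJob API, so they may differ from whatever version you're running; the point is just that `replicas` is the single setting that scales the job:

```yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train          # hypothetical job name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 4            # the single setting that scales the job to the cluster
      template:
        spec:
          containers:
            - name: tensorflow
              image: my-training-image   # hypothetical training image
              resources:
                limits:
                  nvidia.com/gpu: 1      # request GPUs here, or omit for CPU-only
```

Apply it with `kubectl apply -f` like any other Kubernetes resource, and the controller spins up the worker pods for you.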