Cluster and app management services in Docker Cloud are shutting down on May 21 (docker.com)
161 points by zebra9978 on March 24, 2018 | hide | past | favorite | 154 comments


We are extremely worried about the future of Docker Swarm as well. We love Swarm - but we are seeing most work out of the Docker team is to give a migration path to kubernetes. A huge number of docker swarm networking bugs are not being worked on.

We will be happy if Docker talks about Swarm becoming a management UX for K8s - but we need visibility. These are production orchestration systems. The migration path is not easy.

And seeing what Docker Co is doing with Cloud, it is not very comforting to trust that they will do the right thing with Swarm.


Why did you pick Swarm for production?

We followed Swarm from the beginning, but after a few releases, at v0.4, it was clear we should never use Swarm, and that it was mostly the Docker PR machine that made it sound nice, not the actual features.

Maybe it got better later on, but the first several Swarm announcements seemed really off-putting to me.

We ended up on Mesos/Marathon, not that that has a bright future either, but it was at least capable of restarting containers from the beginning.

Just migrate to Kubernetes. It has won.


When we started looking for a container orchestration system, we naturally used Google Trends (https://trends.google.com/trends/explore?date=today%205-y&q=...). Kubernetes reached their 1.0 release shortly before we were ready to start using the system, and a lot of the features added since then eliminated some of our other problems (e.g. configmaps/secrets combined with the existing service discovery pretty much eliminated our plans to use Consul).

I've been taking a second look at Mesos recently and found that I didn't really grok it the first time I looked at it. In any case, I think your assessment is correct ("Just migrate to Kubernetes" - https://trends.google.com/trends/explore?date=today%205-y&q=...).


Hmm, that k8s “has won” makes me a bit sad (not that I’ve ever gotten a gasp of fresh air outside the stranglehold of AWS), as I was impressed with Mesos.

Can you share some 1st person opinions on Mesos, and where K8s is a step forward, backward or aside?

TIA


Having moved from Mesos to Kubernetes, Kubernetes just felt more mature. Working solutions for stateful sets, service discovery with DNS, flexible scheduling with affinity and tolerance, saner resource limits, a good CLI tool.

It's not a completely fair comparison since we also were able to offload persistent storage to Google Cloud, which is one of the harder problems IMO.

I think Mesos has improved since then, but it always felt like they were a bit behind.

In general Kubernetes feels like it is designed by people with relevant experience. Especially compared to our earlier experiments with Docker Compose files. People are praising their simplicity, but they left us solving a lot of hard problems that Kubernetes solves for us better than we could have done.


Why is it sad? It's great that we can finally standardize and use a single powerful system that is very capable but also improving quickly.


I would encourage you to try Swarm. It's brilliant in its simplicity. I think Docker product marketing and customer success basically suck. But Swarm as a product has been really, really nice. And yes, I continuously evaluate Kubernetes and Swarm side by side.

You can get a Swarm cluster running in less than 10 minutes on your local laptop after "apt-get install docker-ce". To run k8s, you will need to first muck about with ingress and overlays and everything else.
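For reference, a minimal sketch of that flow (the IP address and service name here are illustrative, not from the thread):

```shell
# On the first node: initialise the swarm (this node becomes a manager)
docker swarm init --advertise-addr 192.168.1.10

# It prints a `docker swarm join --token ...` command; run that on each worker node.

# Back on the manager: deploy a replicated service
docker service create --name web --replicas 3 -p 80:80 nginx:alpine

# Verify all replicas came up
docker service ls
```

That really is most of it; there is no separate etcd, kubelet, or CNI plugin to stand up.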

I know it's because of "flexibility" - it's like Sinatra vs Rails. They are both great in their spaces.


    $ brew cask install minikube
    $ minikube start


minikube is a specific version/distro/packaging of kubernetes meant for testing on local laptop.

Docker Swarm runs exactly the same way with exactly the same components and with the same ease on laptop as well as the cloud.

TL;DR - you can't run minikube in production.


I see your point. For k8s production, you'd have to use a solution provided by the cloud operator: Google GKE / Azure AKS / Amazon EKS / etc. Which leaves the on-prem and/or bare-metal hosted cluster uncovered. Not that I'm convinced it's worth running bare-metal anything; you're likely to be less efficient than large cloud operators because of economies of scale.

Nit:

    $ minikube get-k8s-versions
    The following Kubernetes versions are available when using the localkube bootstrapper:
    - v1.9.4
    - v1.9.0
    - v1.8.0
    - v1.7.5
    - v1.7.4
    - v1.7.3
    - v1.7.2
    - v1.7.0
    - v1.7.0-rc.1
    - v1.7.0-alpha.2
    - v1.6.4
    - v1.6.3
    - v1.6.0
    - v1.6.0-rc.1
    - v1.6.0-beta.4
    - v1.6.0-beta.3
    - v1.6.0-beta.2
    - v1.6.0-alpha.1
    - v1.6.0-alpha.0
    - v1.5.3
    - v1.5.2
    - v1.5.1
    - v1.4.5
    - v1.4.3
    - v1.4.2
    - v1.4.1
    - v1.4.0
    - v1.3.7
    - v1.3.6
    - v1.3.5
    - v1.3.4
    - v1.3.3
    - v1.3.0



I used Kubespray to set up a Kubernetes cluster on our own hardware. I have Ansible and Docker knowledge and ran into a few issues, but it didn't take much time to set up a custom cluster. It's still a bit rough, as I had issues accessing the UI, but I think it'll become even easier in the coming months.


Creating a k8s cluster on GKE is just as easy (a single gcloud command, or use the GUI if that is your preference).
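Something along these lines (cluster name and zone are made up for illustration):

```shell
# One command to create a managed cluster on GKE
gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3

# Fetch credentials so kubectl talks to the new cluster
gcloud container clusters get-credentials demo-cluster --zone us-central1-a

# Sanity check
kubectl get nodes
```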

With hosted Kubernetes as a service (GKE, AKS and soon EKS), there is little reason to roll your own cluster.


But you see with Docker Swarm, you don't need "Docker Swarm as a service". That's why Docker Swarm wins in simplicity.


It's just Kubernetes in a VM, like Docker for Mac is just Docker in a VM.

You can run Kubernetes on your Linux machine as-is.


It's what's in the VM that counts.


What about HA / clustering, security mTLS?


Why would you want any of those on your laptop?


Because that is how I will deploy on production, at least the security pieces. Any difference and I am not sure I can be assured of preventing the "works on my machine" kind of issues.


Is HashiCorp’s Nomad in a similar position at this point? I really enjoy its (relative) simplicity.


Nomad is very much alive, and HashiCorp is committed to delivering a scheduler which focuses on operational simplicity so teams can concentrate on building applications. It also gives the capability of running workloads other than Docker, such as isolated fork/exec for binaries, non-containerised Java, etc. We have a great release with 0.8 and many features planned for the rest of the year.

Integrating Nomad with Vault and Consul is super easy and allows you to provide secrets, configuration and service discovery to the application with the right layer of abstraction, the application should not be aware of the scheduler it is running on. Cloud auto join allows super easy cluster config. Job files are declarative.

Yes, Nomad does not have all the features of Kubernetes, but we take a different approach, believing in workflows and the Unix philosophy of a single tool for a single job. A fairer comparison would be to compare the HashiCorp suite of OSS tools (Nomad, Vault, Consul, Terraform) to K8s; this gives you capabilities to manage your workloads, both legacy and modern.


I don't know about Nomad, though Hashicorp's Vault is going to do just fine I think. It fills in a gap in secrets management that K8S doesn't do out of the box.

Looking at doing Vault in HA leads people to look at Consul, which leads to Nomad. (Consul uses a consensus protocol for service discovery and I think that will be interesting for the next generation).

Last year, K8S had already captured the center of gravity, and it took a while for the rest of the dev community to catch up.

I think this year is a lot of shuffling as the survivors settle into orbit around K8S. There is a lot of interesting innovation up the stack once orchestration is de facto standardized.

K8S still hasn't solved stateful workloads, though it is introducing a lot of primitives to support them: controller hooks, third-party resources, on top of which Operators can function.

I think we will see a lot more innovation as people create Operators. That can include anything from Operators for specific distributed stateful workloads, to things like intrusion detection, ML-driven autoscaling, and so forth.


I get the impression that Nomad was never particularly alive to begin with, which is a shame since it seems better designed. But it doesn't have that "ZOMG Google has blessed us with the secrets of the borg" that DevOps crave.


Nomad was my choice for queue-centric workloads, but it doesn't seem to fit webservers / long-lived services as well as Kubernetes. I'm not sure, but I would think you could run Nomad and Kubernetes on the same servers, sharing the Docker runtime.


If you run two schedulers they don't have a correct view of available capacity. You can use Mesos as a meta-scheduler but that introduces more complexity.


I, too, was sad to see similar things with Nomad. HashiCorp really does write generally excellent software.


The actual features are very nice. I don't know what else makes you say 'migrate' apart from the fact that there's growing support for k8s. The advantages of Swarm (very easy setup in private clouds, docker-compose format descriptors, etc.) don't go away just because k8s is popular.

Stop spreading FUD please, it is not good for anyone.


There were two Swarms: Swarm classic, which was okay, and the newer Swarm mode, introduced in Docker 1.12. I'm not sure which one you refer to when you say v0.4.

But Swarm mode was good: extremely simple, and it worked well for the right workloads. Like most solutions it is not a silver bullet for every orchestration need, but it worked very well for microservices and new-ish architectures.

K8s is great too, but it seems like overkill for a handful of services. Also, setup of K8s used to be hard, especially HA, and the learning curve is quite steep. One of the features that makes Swarm mode extremely intuitive, IPVS, is being incorporated into K8s, so I guess there has been some cross-pollination on both sides. But I do not think Swarm mode is going to die anytime soon.


I had to make a decision for an orchestration tool a few weeks ago and I went with K8s. One of the main reasons was that even Docker advertises it on its website and with Docker for Mac. I expect Swarm support to be canceled in a not so distant future and I cannot rely on a tool with an unclear future.

Which is a pity because I really liked Swarm for its simplicity.

Side note: I am also concerned about Docker in general. CE/EE split, services shutting down, bugs seemingly not being fixed - I cannot point out a precise aspect, but I am concerned.


I’m concerned as well. We use Docker and Docker Compose heavily for our development and both on Docker for Windows and Docker for Mac developers have to restart their daemon several times a day. The binaries aren’t open so it’s tough to see and fix the issue; but because we aren’t Docker Enterprise Engine customers, there is no path to support. It would be helpful if there were a way to pay for Docker and receive support without having to go the enterprise route. I can see paying $200 a month for the team for support.


Yikes. Surely Docker is providing you more than $200 worth of value a month. If so, why would you only pay $200?


I might; but that’d have to mean seeing some traction from that money first. They haven’t proven that they’re able to run the sort of business they’re trying to run.


I'm not sure who's going to beat Docker. Docker is central to most orchestration tools so as long as they make money someplace with their central services, they should be fine.


Kubernetes could move to rkt or even the now standard systemd stuff, end users would hardly know the difference. The container format isn't a very strong lock-in effect and most people are probably better served without the image type format anyway (as the Linux block layer wasn't really constructed with that use case in mind, and fixing the plumbing will take longer time than developing the orchestration tools which is what'll win the users).

Docker the company has few options to monetize Docker the software once it becomes commoditized. They seemingly chose the Enterprise way, which consists of pretty orchestration tools and integrations with Active Directory. (A perfectly valid option, which worked out well for VMware.) That's a dead end now that Kubernetes has won container orchestration. It will be interesting to see where they go next.


The Kubernetes community is pouring a lot of resources into CRI-O. I imagine you are going to see the Kubernetes clusters that are built 'the hard way' start switching over and removing Docker. It will still be used for building and pushing containers for the time being.


Who’s beating them? Amazon?


Kubernetes has won the container scheduler wars. At GitLab we're all in on making a PaaS based on k8s, our CI/CD, and the container registry that is part of GitLab.


It feels awfully 19th century though that despite k8s having "won", by far the biggest container schedulers by containers scheduled are, no doubt:

(I think this is the correct order, not 100% sure of course)

1) google borg (maybe omega) [1]

2) amazon ec2

3) whatever microsoft is using

(large gap)

4) all the rest of the world combined, a small portion of which is k8s

[1] https://www.quora.com/Does-Google-use-the-Open-Source-Kubern...

(one might even say [1] seems to imply it'll never happen, or at least take a very long time. Also if you read the papers it becomes very clear that "Google Borg" includes a lot of things these days at many levels, from custom ASICs, device firmware (as in standard device, google borg firmware), BIOS firmware, entirely custom sub-kernel code, custom kernels, custom userspace (i.e. a Google-specific libc that's not optional), ... all of these will turn out to have dependencies on each other that have to be redone for k8s, so it could take a while to migrate over)

(although I have not read any papers on it (I'd love some though), I'd bet amazon is in a similar boat, and of course Microsoft is Microsoft)


EC2 is not a container scheduler - it's an IaaS for VMs. The Amazon container PaaS (ECS/EKS) is a layer on top of EC2. And that is being superseded by Fargate which will make the underlying EC2 invisible. If you need a Fargate-like capability now, Azure AKS does it.

See https://azure.microsoft.com/en-us/services/container-service... and https://aws.amazon.com/fargate/


Fargate is expensive as hell for long running services. You should only be using it for something that creates value 100% of the time that it is running.


So what is the EC2 container scheduler before Fargate called ? Any papers on it ?


ECS and EKS.


> At GitLab we're all in on making a PaaS based on k8s

This is very interesting. Could you talk more on this ? There is definitely space for an "opinionated k8s distro with batteries included". I have wished for Swarm to become this....


It is not a Kubernetes distribution. You can use any distribution or CaaS you want. The beginning of it is in GitLab Auto DevOps https://about.gitlab.com/2017/10/04/devops-strategy/


Interesting. Is there a blog post where i can read more about this?



Docker is an amazing tool, but I think the technical design and overall strategy for Swarm weren't very well executed. Moving to k8s is a smart thing for them, because it's objectively better for real production use.

In our tests about a year ago, swarm started showing serious networking and cluster synchronization problems with cluster sizes over 30 nodes (physical servers), on a fast, reliable LAN.

I've heard similar stories from another big Docker customer: Docker support promised them that improving the performance of Swarm and fixing its scaling issues were the focus of "the next version", but those fixes never came. This company is now moving to k8s.


Could it be that the teams are simply focused on adding K8s support and getting the Docker EE out of Beta?

Public statement on their blog after the K8s announcement in EU:

"But it’s equally important for us to note that Swarm orchestration is not going away. Swarm forms an integral cluster management component of the Docker EE platform; in addition, Swarm will operate side-by-side with Kubernetes in a Docker EE cluster, allowing customers to select, based on their needs, the most suitable orchestration tool at application deployment time."

https://blog.docker.com/2017/11/swarm-orchestration-in-docke...

There's still plenty of PR's and activity in the SwarmKit and Libnetwork repos:

https://github.com/docker/swarmkit/pulse/monthly https://github.com/docker/libnetwork/pulse/monthly


I hope this isn't so either. Docker Swarm is so simple and works well for many use cases.


This is already Swarm v2; there was an older Swarm, which worked nicely enough. It was the equivalent of multi-host docker run: it could filter based on constraints and do bin-packing, and it even had support for multi-host networking with etcd/consul/zookeeper.

Then they cancelled it: no more patches, no mention of it anywhere unless you know where to look.

Then they created Swarm mode and added the concept of "services", which sucked compared to regular run because it lacked so many options the run command had; it took more than 6 months to implement most of them.


People, yours truly included, seem a bit concerned with the two months' notice.

To be fair, Docker Cloud was never great, and hopefully it doesn't have many big customers...

But the precedent of shutting down a paid service with two months' notice is not nice. What would happen if this were Docker Hub? Panic!


> What would happen if this was docker hub? Panic!

Isn't docker hub relatively easy to replace, in comparison?

Otherwise, is this a wakeup call perhaps?


docker hub is surprisingly difficult to replace because of how docker registries work.

Traditional package managers have two distinct concepts: a repository and packages within those repositories.

For example, if ubuntu took down their apt repo server, I could run my own with all the same packages and change a single sources.list entry and all my servers, ansible roles installing packages, etc, would operate the same.

This is possible because the package name+version is an identifier everything else uses and the only thing that cares about the repository is apt itself; all other tooling doesn't need to know about the repository the package is sourced from.

Docker conflates those two things. Each client doesn't just send a package name, it sends a url + package name + version (e.g. foo.registryurl.com/image:version). Because every single client has the detail of "foo.registryurl.com" baked in, it's difficult to change that. I can't change a single "repository-mapping" file that the docker daemon reads to quickly update it.

Instead, I have to update every single client.
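To make the coupling concrete, here is a rough sketch of how an image reference bakes the registry host into the name itself. This is simplified (docker's real logic also handles digests and the implicit `library/` namespace for official images), and the default registry hostname is written out just for illustration:

```python
DEFAULT_REGISTRY = "registry-1.docker.io"  # docker's baked-in default


def parse_image_ref(ref):
    """Split an image reference into (registry, repository, tag).

    Simplified sketch: the first path component counts as a registry
    host only if it looks like one (contains '.' or ':', or is
    'localhost'), mimicking docker's rule. No component at all means
    the hard-coded default, Docker Hub.
    """
    # The tag is whatever follows the last ':' after the final '/'
    if ":" in ref.rsplit("/", 1)[-1]:
        ref, tag = ref.rsplit(":", 1)
    else:
        tag = "latest"
    first, _, rest = ref.partition("/")
    if rest and ("." in first or ":" in first or first == "localhost"):
        return first, rest, tag
    return DEFAULT_REGISTRY, ref, tag


print(parse_image_ref("foo.registryurl.com/image:1.2"))
# -> ('foo.registryurl.com', 'image', '1.2')
print(parse_image_ref("ubuntu"))
# -> ('registry-1.docker.io', 'ubuntu', 'latest')
```

Because the registry host is part of the name every client and Dockerfile uses, there is no single place, like apt's sources.list, where you can swap it out.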

The idea of decoupling those is not new. In 2014 it was proposed [1], and various implementations that would help make it easier to migrate off the default registry have been proposed and rejected [2]

This doesn't even get into the lack of tooling for chasing down the transitive dependencies building my images has on various registries with each FROM.

[1]: https://github.com/moby/moby/issues/8329

[2]: https://github.com/moby/moby/pull/5821#issuecomment-49492924


> Instead, I have to update every single client.

To combat this, every single image we use from hub.docker.com is "proxied" into our registry with a one-line Dockerfile:

   FROM image:version
Building the "proxy" image and publishing it in our registry is entirely automated (using CI+Registry of a self-hosted GitLab). Then we make everything point to our version in our registry. Should hub.docker.com go belly-up, we have 1. a cache of versions in use (current and past), and 2. full control of the images (possibly making our own FROM scratch) without having to change a single line in downstream consumers. Initially we did this to be safe from hub.docker.com's possibly intermittent availability, which would delay image pulls on deployments.


Do you do it per project or as a separate project that houses all the proxy images? How do you version the proxy images? What namespace do you push them into? Is it easy enough to deal with that it doesn’t waste a lot of time?

I’ve been trying to insulate myself from docker too and the FROM proxy strategy seems to break the least stuff. Have you hit any pain points?


This is a single 'gitlab.example.com/docker/library' project.

We use orphan branches, one per image, although other strategies are possible (like using the commit diff and directory name).

Proxy images are versioned using branch names (e.g. postgres vs postgres-9.6), images are pushed to gitlab.example.com/docker/library/postgres, and using version detection we generate docker image tags (e.g. a 'postgres' branch will create postgres:latest, and extracting the version from postgres --version also pushes postgres:10 and postgres:10.1 images).
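The branch-plus-version to tags expansion they describe could look roughly like this; it's a guess at their setup, not their actual script:

```python
def expand_tags(branch, version=None):
    """Derive docker tags from a branch name plus an optional
    version string extracted from e.g. `postgres --version`.

    A bare branch like 'postgres' maps to :latest; a branch like
    'postgres-9.6' maps to :9.6. A detected version '10.1'
    additionally yields :10 and :10.1.
    """
    tags = []
    if "-" in branch:                    # e.g. 'postgres-9.6'
        tags.append(branch.split("-", 1)[1])
    else:
        tags.append("latest")
    if version:
        parts = version.split(".")
        # every prefix of the version: '10.1' -> ['10', '10.1']
        tags += [".".join(parts[:i]) for i in range(1, len(parts) + 1)]
    # dedupe while preserving order
    return list(dict.fromkeys(tags))


print(expand_tags("postgres", "10.1"))   # -> ['latest', '10', '10.1']
```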

See this .gitlab-ci.yml[0]. Yes, there is one per branch. This can be generalised further (especially with Gitlab's new import system for .gitlab-ci.yml) but works well enough in practice, it's very low maintenance, and updates are a mere commit+push away.

In fact we use this not just for proxying images but for all "generalised", "utility", or "dependency" images that are not the result of a given full-blown app project in its own repo (those have their own CI/CD process in their respective repo)

[0]: https://gitlab.com/snippets/1705998


But here we are talking about Docker Hub closing, which is way easier to solve. This is the registry that is used as default (for all non-local images without a URL identifier, like "alpine:latest").

All you need is an option to set the default registry. Probably it's already there, didn't google.


Google it. It’s a fun topic.


Does it actually send the URL? I think it uses the URL to make the network request, but doesn’t actually send the URL to the registry. If it did, you could use an alternate registry as a pull through cache and have it go upstream for everything. AFAIK that’s not possible.


Docker's registry protocol is surprisingly complicated. It is stateful, it is not trivially cacheable, and it's a right pain in the ass to deal with


The registry itself can be self-hosted easily; however, everything on top (authentication, web UI, automation, ...) is surprisingly difficult. Plus there are technical challenges, as pointed out by other commenters.


GitLab includes a full docker registry, with all that.

Just run a custom GitLab instance, and you get all that for free.

For my open source projects on git.kuschku.de I also have a gitlab container registry on k8r.eu, and it’s been amazing to work with.


AWS has a registry as well. Both solutions are an easy way to host private and public repos. But Docker Hub offers access to a huge ecosystem that everyone is already plugged into; it would be devastating if it shut down. You’d potentially have to re-create so many base images from scratch.


Nexus also has a docker registry with all that.


Nexus isn’t a terrible option, but the application feels pretty dated from a deployment and maintenance perspective. We’re using it for ruby gems and internal docker registry. I haven’t looked into viable alternatives for the gems side of things—that works well enough—but we’re replacing the docker registry with gitlab.


Interesting, I didn’t know that. We host a registry with HTTP simple auth and it works for us because we don’t need all that.


There's an on premise version of docker hub (Docker Trusted Registry I think?).


References to Docker Hub are hardcoded into a LOT of code/config.


I suspect the majority are unqualified references, i.e. 'markbnj/unbound' that rely on docker using the hub as a default repository.


I could see GitHub adding a tab to repos for a container registry. That would destroy Docker Hub, IMO.


GitLab has an integrated container registry since 2016. https://about.gitlab.com/2016/05/23/gitlab-container-registr...


Their devops for autobuilding, testing, and deploying are really awesome as well. We run an internal GitLab host that handles all deployments for us through their CI/CD interface.


That would be super cool, and I've noticed the branch protection features changing on GitHub, so perhaps they're thinking about this space too.



> Twitter users

People. People are up in arms.

Anyway, yeah, that's insane. Even Google, who constantly shuts stuff down, usually does so with way more heads up. For comparison, Google Reader, a completely free service, shut down with 3.5 months advance notice. Google Wave got almost 6 months notice.


> Google Reader

It still hurts.


Since I missed out on missing it, was there any particular feature that's not in other RSS readers?

I have only used RSS via Gwene over NNTP; if that went away, I'd miss it bunches.


It’s not about features. Nowadays, I’d go with a self-hosted Open Source application from the start. In fact, that’s what I am looking for at this very moment.


It was the day RSS died for me


I really don't get why this statement is so common. More or less every other hosted reader immediately offered a migration path, so getting out of Reader and up and running somewhere else was really easy.


I'm glad it happened, now I'm happier with Feedly than I was back then with Google Reader.


https://bazqux.com/ - Highly recommended. I'm a lifetime subscriber.


Let it live again with InoReader....


I do think that for commercial services, like those in the Google Cloud Platform, they give a year+.


All Google Cloud GA features will have at least 1 year of deprecation period.

7.2 Deprecation Policy. Google will announce if it intends to discontinue or make backwards incompatible changes to the Services specified at the URL in the next sentence. Google will use commercially reasonable efforts to continue to operate those Services versions and features identified at https://cloud.google.com/terms/deprecation without these changes for at least one year after that announcement, unless (as Google determines in its reasonable good faith judgment):

(i) required by law or third party relationship (including if there is a change in applicable law or relationship), or

(ii) doing so could create a security risk or substantial economic or material technical burden.

The above policy is the "Deprecation Policy."

Disclaimer I work for Google in Cloud.


Google does make a mess with consumer apps but it’s entirely different when it comes to Google Cloud or any of their enterprise products.


It certainly makes me rethink if docker services should be relied on in production.

Migrating off docker cloud will be a pain, but the service was already a pain to use, so maybe it's about time anyways.

But imagine being given 2 months to migrate off docker hub for image storage. Panic would ensue :)


I understand that having to abandon ship sucks, but wasn't the whole point of containers that they can be migrated easily? Heck, the whole concept got its name from that idea. So why the fuss?

edit: typos


Sure, but I'd wager that most of the pain is in getting orchestration tools to work on other platforms. And also vetting other platforms, etc.


Because moving to another orchestration platform requires overhauling your CI/build system.


It's also due to the fact (which I also tweeted) that Kubernetes is not even supported on the stable release channel of Docker for Mac!


but why?

The whole point of containers is that they are ephemeral and can be booted up quickly anywhere because you statically link the whole fucking OS?


The APIs of platforms are often totally different. If you went to docker cloud for its simplicity and now have to move to AWS/GCP/Azure/etc. and don't have a dedicated DevOps that knows one of those platforms already, you have no choice other than taking a developer working on features and putting them on learning the new API in a few weeks including testing. ~8 weeks is not enough for that if you are a cash-strapped startup.


Such are the perils of using immature tools in your development chain and production systems.


Please keep docker swarm going guys, it's a great product.

Docker cloud is no loss (with apologies to those who are using it in production) and will hopefully free up your people to work on other important stuff.

Otherwise if we can continue to use the compose configuration api and the docker deploy/service api with k8s under the hood then I guess that's a reasonable compromise.
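That compromise is roughly what recent Docker releases offer: the same stack deploy command can target either orchestrator. A sketch, with stack and file names illustrative and flag availability depending on your Docker version:

```shell
# Deploy a compose file to Swarm (the classic path)
docker stack deploy -c docker-compose.yml mystack

# Deploy the same compose file to Kubernetes instead
# (orchestrator selection shipped with the Docker for Mac k8s integration)
docker stack deploy --orchestrator kubernetes -c docker-compose.yml mystack
```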


Had a startup in this area, with Swarm under the hood, and realised last year (when Rackspace closed Carina) that:

1. Swarm is losing momentum due to the amount of new features, stability and native cloud integration that k8s brought;

2. Containers are being adopted at a huge speed, and big cloud providers like AWS and GCE have lots of users and trust, so the effort of offering CaaS is not that big, with great chances of success;

3. The hosting market is tough due to competition. DigitalOcean was in this market way before they were called DO;

4. When you want to make money from OSS like Docker does, companies like Red Hat and IBM already have years of advantage due to their established sales channels (in 1-2 years from now we will see Tectonic in all RHEL-powered companies, and Docker announcing that it's not supporting UCP any more).


The timing of this (shutdown on 21 May) made me wonder if it's related to the GDPR coming into effect starting on 25 May.


It may well be, but everybody knew that GDPR was coming. If this was the reason, why not announce it (much) earlier?


At first, I immediately discounted it as just plain coincidence. But actually, it could well have been.

I imagine the service wasn't really making money, and having to add GDPR compliance on top could have been the last straw (or a contributing factor).


I am thinking that it sets them up to announce something (possibly fully embracing K8s) at DockerCon[0] which is a few weeks after.

0: https://2018.dockercon.com


Given the time it takes to migrate a production stack to another system (one or more months), this won't be enough time for many users to migrate. There is no drop-in replacement. It is pretty abrupt of them to give 60 days' notice.

On a meta thought, I wonder what potentially caused this move. It is/was a pretty decent service.


I was logged into docker the other day and saw this notification but I didn’t—and still don’t—know exactly what services they’re referring to. Do they mean everything under cloud.docker.com, the swarms beta, or something else?


It was long known; they had not shipped any new update since August 2016. We at Turing Analytics migrated to Rancher Labs about 6 months ago.

They should have announced it earlier and should have given more time to paying customers. I am glad I migrated very early.


Is there a hosted rancher service?

I played around with it a long while ago and really liked it, but felt like more moving parts (and particularly on MySQL, which wasn't in our stack) wasn't something I was too keen on. Having someone manage this for us could be worth paying for though.


Rancher, the container orchestration, is pretty much dead too, since it's K8s-based from (upcoming) 2.0 on, i.e. it's becoming a K8s distribution and has to compete with K8s itself, OpenShift and probably upcoming open-sourced edition of Tectonic. RancherOS might live on, even though there is a lot of doubt on why it should do so.


I think you mean "cattle", the container orchestration system in Rancher 1.x

Rancher (and Rancher OS) seem to be doing fine and getting updates constantly.

We're eyeing Rancher 2.x and the Kubernetes integration, but it gives me confidence seeing Rancher 1.x getting updated while they are so focused on 2.x and K8s.


Rancher is far from dead. They are essentially making a nice frontend for k8s. All we care about is ease of deployment and management of containers; if that is achieved by k8s, we don't have a problem. And yes, they are maintaining the 1.x version nicely with timely updates.

Plus it's self-hosted, so there's no fear of it disappearing within 2 months. :)


There is no hosted service. What we did was dedicate one VM to its management server, and that worked out amazingly well. Everything is contained inside it and doesn't disturb other systems.


I played around with Rancher in a VM and with three servers.

It kept losing its IPsec connections, and only a reboot helped.

I liked it, though...


Wasn't the whole point of Docker containers that deploys and migrations are a breeze?


Docker Cloud offered CI/CD integrations and management features similar to K8s that were proprietary to them. While Docker images themselves are portable, this is a whole different beast.


In other words, in the microservice world you don't have a monolith; the whole stack is the monolith?


It seems like they are only discontinuing the hosted Swarm part for now and keeping the rest up. That said, it may just be a matter of time before the rest of the stuff goes down.


As I read through all this, it feels sort of weird even typing it, but I kind of hope Amazon or Microsoft buys Docker rather than Oracle, Google, or IBM.

I think Microsoft still employs thousands of great engineers and has been an early embracer of containerization among the large companies out there. And because Satya was a large part of growing Azure into what it is (IMHO a pretty solid set of services), it could make a lot of sense.


I was wondering if you think the whole of Docker Inc. is closing, or something like that. Docker is only shutting down their cloud offering for Docker Swarm, and I think it is to pool their resources and focus on integrating Kubernetes.


I’m trying and failing to imagine a future where Docker achieves business success as a stand-alone entity.


This is why your company or product should not depend on that new cool SaaS/PaaS.


This is why you should write your app code to be independent of your current provider, always assuming you may have to change (maybe urgently) in the future.


This is why you shouldn’t depend on any service that isn’t profitable, not just SaaS/PaaS.


Damn. My last company ran a bunch of production things in Docker Cloud. Contrary to many opinions, I kinda liked it. It was cheap and fairly simple to use. The API was straightforward and way less expansive and cumbersome than something like AWS ECS.


What was the point of acquiring Tutum, rebranding it to Docker Cloud, and then killing it?

Tutum actually worked, and I really liked it. Now I plan to use Rancher on top of Kubernetes for my Docker hosting.


Long-time Docker user here.

Docker Cloud was getting worse and worse. Tutum (which they acquired, rebranded, and killed) was great, but the Docker team just destroyed it.

That is a shame. Tutum was great since it scaled up and down very well. Nobody thinks about scaling down, but this is important in many ways.

Right now, IMO:

- Docker Swarm: scales up and down OK, but has had too many bugs, even on stable.
- Rancher: fine, very good for medium-sized deployments.
- K8s: the winner at larger scale, but scales down badly.

This is really sad.


Please keep swarm mode going. It's a pleasure to use, easy to set up, works well with Compose, and overall satisfies a lot of use cases. Thanks for your work!


I've been meaning to learn Kubernetes for a while now, having been a Docker Compose user for a few years. Swarm seemed super appealing due to the easy migration and simplicity. Having gone through a few tutorials for Kube, I still feel a little overwhelmed by the volume of configuration, and also by how local development is meant to be done. That was always a very cool part of Compose.


I found Kubernetes: Up and Running really helpful. Kubernetes the Hard Way was also useful, but a painful weekend.


Docker really needs to provide some transparency around these decisions. People are concerned about the future of the ecosystem. They have every right to shut down aspects of their offering, but the community deserves an explanation. This notice didn't even attempt to explain the decision. Is Docker stopping development on Swarm? Getting out of the PaaS business? Downsizing?


How is the community affected in any way by this decision? Docker Cloud is a management product and only open to paying customers.


Wow this is very surprising.

I've been working on a Docker Cloud alternative for a while now. I'm aiming for something that balances the convenience of Heroku with the Docker experience.

It's still in beta but if anyone wants to check it out it's at: https://codemason.io/


What exactly is docker cloud and how does it fit in the docker workflow?


I don't have experience with Docker Cloud, but it's the Docker company's SaaS offering.

https://blog.docker.com/2015/10/docker-acquires-tutum/


It's a managed application service where you bring your own nodes, from the cloud or from on-prem. It gives you a dashboard and end-to-end management of the applications running on your nodes.


It was their hosting service, they would run your containers for you “in the cloud”.


How long until Red Hat buys Docker?


My nightmare is that Oracle outbids Red Hat for Docker.


The. Worst. Nightmare!

We saw that happen with Sun, BEA, PeopleSoft, etc. Sad days.


Hmm I have a niggling suspicion that it'll be Microsoft


GDPR killed Docker Cloud. It's not the last victim we will see.


What? How on earth could this have been the case?


Docker the file format/command-line syntax/etc. will long outlive Docker the company.

Kubernetes "won", so now the competition has shifted from who can innovate to who can execute and operate.


Genuine question - what makes you think that Kubernetes has won?


> Genuine question - what makes you think that Kubernetes has won?

Well, "won" in quotes. I am not a k8s fanboy or anything, I simply observe that all the major cloud providers are offering managed k8s services that have superseded their own proprietary container-type offerings. For better or worse that's where the momentum is. If you wanted to containerize your stack right now, k8s then pick one of the big 3, seems like a safe bet.


I've noticed the same trend, and as a fan of Docker Swarm, along with this news, I'm not happy about it.

Compared to Kubernetes, Swarm is a breeze to setup, deploy and manage. The manifest files are the same Docker Compose files we're used to, just expanded to cover the new stack concepts. It has support for remote storage mounting, advanced networking configuration, various interesting volume and network plugins[0], and is generally a pain-free experience to use (from my admittedly short time with both).
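For illustration (the service name and image here are my own, not from the thread), a Swarm stack manifest really is just a version 3 Compose file with a `deploy` section layered on top:

```yaml
# docker-compose.yml, Compose file format version 3.
# Deploy to a Swarm with: docker stack deploy -c docker-compose.yml mystack
version: "3.3"
services:
  web:
    image: nginx:alpine          # hypothetical example service
    ports:
      - "80:80"
    deploy:                      # stack-level keys; plain `docker-compose up` ignores them
      replicas: 3
      restart_policy:
        condition: on-failure
networks:
  default:
    driver: overlay              # Swarm's multi-host overlay network
```

The same file serves both local development (minus the `deploy` section) and cluster deployment, which is a large part of the appeal.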

Kubernetes is a fine product. It's just a shame Swarm doesn't seem to have the same traction.

Can someone share their Swarm experience in production, possibly compared to k8s?

[0]: https://docs.docker.com/engine/extend/legacy_plugins/


We're going with Swarm exactly because of the reasons you've listed. The EE part is a bit flaky sometimes (I'm looking at you, UCP), but Swarm is brilliant.


I agree - but I'm now very afraid. I'm beginning to wonder if it's not better to just swallow the pain and go k8s.

I would love it (and pay for it) if Swarm declared itself an opinionated distro (ingress/overlay) plus a management layer on top of k8s.

The silence is deafening and not nice.


I don't think you need to be afraid and migrate just because of this. Swarm is not a service, so if they stopped developing it, you'd have plenty of time to figure out how to move away, because your Swarm clusters would not stop working.

Also, there are very large Swarm installations in production at large companies, so I'd be surprised if Docker cancelled the product (which is their flagship).


Do you know if there are any good tools for migrating your manifest/compose yaml files to k8s, especially using more recent features such as configs and secrets?


There is kompose with a k: https://github.com/kubernetes/kompose
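For what it's worth, a minimal workflow with kompose (file names assumed; the generated manifest names depend on your services) looks like:

```shell
# Translate a Compose file into Kubernetes manifests
kompose convert -f docker-compose.yml

# kompose writes Deployment/Service YAML files for each Compose service,
# which you then apply as usual:
kubectl apply -f .
```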


Doesn't seem to support some of the more recent features, such as secrets and configs :(


> Well, "won" in quotes. I am not a k8s fanboy or anything, I simply observe that all the major cloud providers are offering managed k8s services that have superseded their own proprietary container-type offerings. For better or worse that's where the momentum is. If you wanted to containerize your stack right now, k8s then pick one of the big 3, seems like a safe bet.

Kubernetes got released more than a year earlier, and it wasn't until 2016 that people actually started to take Docker Swarm seriously. That's a two-year head start...

Docker Swarm would have needed several really impactful features to offset that, and it didn't really deliver them. The only real upside is how easy it is to install and maintain, but that's hardly important with pretty cheap hosted solutions around (at least cheap in comparison to hiring several SREs to maintain the cluster).

But that's just my opinion as a user.


> The only real upside is how easy it is to install and maintain. But that's hardly important with pretty cheap hosted solutions around (at least cheap in comparison to hiring several SREs to maintain the cluster)

Indeed - something like AKS can get you up and running very quickly, whereas as recently as 6 months ago, unless you had a k8s wizard-guru on staff, there was no point in even trying to go it alone, especially not in production.

It also suggests that demand for on-prem k8s expertise will decline over time; there will be less need for people who can set it up from scratch on bare metal. We shall see!


GKE has been around for years.


I think the leading cloud provider going all-in on it is a pretty good marker: https://aws.amazon.com/eks/


I always chuckle when people put so much trust in "startup" companies. It's the same as believing IBM won't try to screw you over.

There's a reason why people and businesses hate vendor lock ins. There's a reason why we do not want AWS to reign supreme for long.


At least you know AWS isn't going to be shutting down their cloud services anytime soon.



