We are extremely worried about the future of Docker Swarm as well.
We love Swarm - but most of the work we see coming out of the Docker team is building a migration path to Kubernetes. A huge number of Docker Swarm networking bugs are not being worked on.
We will be happy if Docker talks about Swarm becoming a management UX for K8s - but we need visibility. These are production orchestration systems. The migration path is not easy.
And seeing what Docker Co is doing with Cloud, it is not very comforting to trust that they will do the right thing with Swarm.
We followed Swarm from the beginning, but after a few releases, around v0.4, it was clear we should never use it, and that it was mostly the Docker PR machine, not the actual features, that made it sound nice.
Maybe it got better later on, but the first several Swarm announcements seemed really off-putting to me.
We ended up on Mesos/Marathon, not that that has a bright future either, but it was at least capable of restarting containers from the beginning.
When we started looking for a container orchestration system, we naturally used Google Trends (https://trends.google.com/trends/explore?date=today%205-y&q=...). Kubernetes reached its 1.0 release shortly before we were ready to start using the system, and a lot of the features added since then eliminated some of our other problems (e.g. configmaps/secrets combined with the existing service discovery pretty much eliminated our plans to use Consul).
I've been taking a second look at Mesos recently and found that I didn't really grok it the first time I looked at it. In any case, I think your assessment is correct ("Just migrate to Kubernetes" - https://trends.google.com/trends/explore?date=today%205-y&q=...).
Hmm, the idea that k8s "has won" makes me a bit sad - not that I've ever gulped fresh air outside the stranglehold of AWS - as I was impressed with Mesos.
Can you share some first-person opinions on Mesos, and where K8s is a step forward, backward or sideways?
Having moved from Mesos to Kubernetes, Kubernetes just felt more mature. Working solutions for stateful sets, service discovery with DNS, flexible scheduling with affinity and tolerations, saner resource limits, a good CLI tool.
It's not a completely fair comparison since we also were able to offload persistent storage to Google Cloud, which is one of the harder problems IMO.
I think Mesos has improved since then, but it always felt like they were a bit behind.
In general Kubernetes feels like it is designed by people with relevant experience. Especially compared to our earlier experiments with Docker Compose files. People are praising their simplicity, but they left us solving a lot of hard problems that Kubernetes solves for us better than we could have done.
I would encourage you to try Swarm. It's brilliant in its simplicity.
I think Docker product marketing and customer success basically sucks. But Swarm as a product has been really, really nice. And yes I continuously evaluate kubernetes and swarm side by side.
You can get a Swarm cluster running in less than 10 minutes on your local laptop after "apt-get install docker-ce". To run k8s, you will need to first muck about with ingress, overlays and everything else.
I know it's because of "flexibility" - it's like Sinatra vs Rails. They are both great in their spaces.
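For anyone who hasn't tried it, the quick start really is about this short (a sketch; the IP address and stack name are placeholders):

```shell
# On the first node: initialise the swarm (this node becomes a manager)
docker swarm init --advertise-addr 192.168.1.10

# The init command prints a join command with a token; run it on each worker:
#   docker swarm join --token <worker-token> 192.168.1.10:2377

# Deploy a stack from an ordinary Compose v3 file
docker stack deploy -c docker-compose.yml myapp

# Inspect the running services
docker service ls
```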
I see your point. For k8s production, you'd have to use a solution provided by the cloud operator, GoogleKE / AzureKS / AmazonKS / etc. Which leaves the on-prem cluster and/or baremetal hosted cluster uncovered. Not that I'm convinced it's worth running baremetal anything, you're likely to be less efficient than large cloud operators because of economies of scale.
I used kubespray to set up a Kubernetes cluster on our own hardware. I have Ansible and Docker knowledge and ran into a few issues, but it didn't take much time to set up a custom cluster. It's still rough in places, as I had issues accessing the UI, but I think it'll become even easier in the coming months.
Because that is how I will deploy in production, at least the security pieces. Any difference and I am not sure I can be assured of preventing the "works on my machine" kind of issues.
Nomad is very much alive and HashiCorp is committed to delivering a scheduler which concentrates on operational simplicity so teams can concentrate on building applications. It also gives you the capability of running workloads other than Docker, such as isolated fork/exec for binaries, non-containerised Java, etc. We have a great release with 0.8 and many features planned for the rest of the year.
Integrating Nomad with Vault and Consul is super easy and allows you to provide secrets, configuration and service discovery to the application with the right layer of abstraction; the application should not be aware of the scheduler it is running on. Cloud auto-join allows super easy cluster config. Job files are declarative.
Yes, Nomad does not have all the features of Kubernetes, but we take a different approach, believing in workflows and the Unix philosophy of a single tool for a single job. A fairer comparison would be to compare the HashiCorp suite of OSS tools (Nomad, Vault, Consul, Terraform) to K8s; this gives you capabilities to manage your workloads, both legacy and modern.
I don't know about Nomad, though Hashicorp's Vault is going to do just fine I think. It fills in a gap in secrets management that K8S doesn't do out of the box.
Looking at doing Vault in HA leads people to look at Consul, which leads to Nomad. (Consul uses a consensus protocol for service discovery and I think that will be interesting for the next generation).
Last year, K8S had already captured the center of gravity, and it took a while for the rest of the dev community to catch up.
I think this year is a lot of shuffling as the survivors settle into orbit around K8S. There is a lot of interesting innovation up the stack once orchestration is de facto standardized.
K8S still hasn't solved stateful workloads, though it is introducing a lot of primitives to support them: controller hooks, third-party resources, on top of which Operators can function.
I think we will see a lot more innovation as people create Operators. That can include anything from Operators for specific distributed stateful workloads, to things like intrusion detection, ML-driven autoscaling, and so forth.
I get the impression that Nomad was never particularly alive to begin with, which is a shame since it seems better designed. But it doesn't have that "ZOMG Google has blessed us with the secrets of the borg" that DevOps crave.
Nomad was my choice for queue-centric workloads, but it doesn't seem to fit webserver / long-living services as well as Kubernetes. I'm not sure, but I would think you could run Nomad and Kubernetes on the same servers, sharing the Docker runtime.
If you run two schedulers they don't have a correct view of available capacity. You can use Mesos as a meta-scheduler but that introduces more complexity.
The actual features are very nice. I don't know what else makes you say 'migrate' apart from the fact that there's growing support for k8s. The advantages of Swarm (very easy setup in private clouds, docker-compose format descriptors, etc.) don't go away just because k8s is popular.
Stop spreading FUD please, it is not good for anyone.
There were two Swarms: Swarm classic, which was okay, and then the newer Swarm mode, introduced in Docker 1.12. I'm not sure which one you refer to when you say v0.4. But Swarm mode was good, extremely simple, and worked well for the right workloads. Like most solutions it is not a silver bullet for every orchestration need, but it worked very well for microservices and newer architectures. K8s is great too, but it seems like overkill for a handful of services. Also, setup of K8s used to be hard, especially HA, and the learning curve is quite steep. One of the features that makes Swarm mode extremely intuitive, IPVS, is being incorporated into K8s, so I guess there has been some cross-pollination on both sides. But I do not think Swarm mode is going to die anytime soon.
I had to make a decision for an orchestration tool a few weeks ago and I went with K8s. One of the main reasons was that even Docker advertises it on its website and with Docker for Mac. I expect Swarm support to be canceled in a not so distant future and I cannot rely on a tool with an unclear future.
Which is a pity because I really liked Swarm for its simplicity.
Side note: I am also concerned about Docker in general. CE/EE split, services shutting down, bugs seemingly not being fixed - I cannot point out a precise aspect, but I am concerned.
I’m concerned as well. We use Docker and Docker Compose heavily for our development, and on both Docker for Windows and Docker for Mac developers have to restart their daemon several times a day. The binaries aren’t open, so it’s tough to see and fix the issue; and because we aren’t Docker Enterprise Engine customers, there is no path to support. It would be helpful if there were a way to pay for Docker and receive support without having to go the enterprise route. I can see paying $200 a month for the team for support.
I might; but that’d have to mean seeing some traction from that money first. They haven’t proven that they’re able to run the sort of business they’re trying to run.
I'm not sure who's going to beat Docker. Docker is central to most orchestration tools so as long as they make money someplace with their central services, they should be fine.
Kubernetes could move to rkt or even the now standard systemd stuff, end users would hardly know the difference. The container format isn't a very strong lock-in effect and most people are probably better served without the image type format anyway (as the Linux block layer wasn't really constructed with that use case in mind, and fixing the plumbing will take longer time than developing the orchestration tools which is what'll win the users).
Docker the company has few options to monetize on Docker the software once it becomes commoditized. They seemingly chose the Enterprise way, which consists of pretty orchestration tools and integrations with Active Directory. (A perfectly valid option, which worked out well for VMware.) That's a dead end now that Kubernetes has won container orchestration. It will be interesting to see where they go next.
The kubernetes community is pouring a lot of resources into cri-o. I imagine you are going to see the kubernetes clusters that are built 'the hard way' start switching over and removing Docker. It will still be used for building and pushing containers for the time being.
Kubernetes has won the container scheduler wars. At GitLab we're all in on making a PaaS based on k8s plus our CI/CD and the container registry that is part of GitLab.
(one might even say [1] seems to imply it'll never happen, or at least take a very long time. Also, if you read the papers it becomes very clear that "Google Borg" includes a lot of things these days at many levels: custom ASICs, device firmware (as in standard device, Google Borg firmware), BIOS firmware, entirely custom sub-kernel code, custom kernels, custom userspace (i.e. a Google-specific libc that's not optional), ... all of these will turn out to have dependencies on each other that would have to be redone for k8s, which could take a while to migrate over)
(although I have not read any papers on it (I'd love some though), I'd bet amazon is in a similar boat, and of course Microsoft is Microsoft)
EC2 is not a container scheduler - it's an IaaS for VMs. The Amazon container PaaS (ECS/EKS) is a layer on top of EC2. And that is being superseded by Fargate which will make the underlying EC2 invisible. If you need a Fargate-like capability now, Azure AKS does it.
Fargate is expensive as hell for long running services. You should only be using it for something that creates value 100% of the time that it is running.
> At GitLab we're all in on making a PaaS based on k8s
This is very interesting. Could you talk more about this? There is definitely space for an "opinionated k8s distro with batteries included". I have wished for Swarm to become this.
Docker is an amazing tool, but I think the technical design and overall strategy for Swarm weren't very well executed. Moving to k8s is a smart thing for them, because it's objectively better for real production use.
In our tests about a year ago, swarm started showing serious networking and cluster synchronization problems with cluster sizes over 30 nodes (physical servers), on a fast, reliable LAN.
I've heard similar stories from another big Docker customer: Docker support promised them that improving Swarm performance and fixing scaling issues were the focus of "the next version", but the fixes never came. This company is now moving to k8s.
Could it be that the teams are simply focused on adding K8s support and getting the Docker EE out of Beta?
Public statement on their blog after the K8s announcement in EU:
"But it’s equally important for us to note that Swarm orchestration is not going away. Swarm forms an integral cluster management component of the Docker EE platform; in addition, Swarm will operate side-by-side with Kubernetes in a Docker EE cluster, allowing customers to select, based on their needs, the most suitable orchestration tool at application deployment time."
This is already Swarm v2; there was an older Swarm, which worked nicely enough. It was the equivalent of multi-host docker run, which could filter based on constraints and do bin-packing, and it even had support for multi-host networking with etcd/consul/zookeeper.
Then, they cancelled it, no more patches, no mention of it anywhere unless you know where to look.
Then they created Swarm mode and added the concept of "services", which sucked compared to regular run because it lacked so many options the run command had; it took more than six months to implement most of them.
Docker Hub is surprisingly difficult to replace because of how Docker registries work.
Traditional package managers have two distinct concepts: a repository and packages within those repositories.
For example, if ubuntu took down their apt repo server, I could run my own with all the same packages and change a single sources.list entry and all my servers, ansible roles installing packages, etc, would operate the same.
This is possible because the package name+version is an identifier everything else uses and the only thing that cares about the repository is apt itself; all other tooling doesn't need to know about the repository the package is sourced from.
Docker conflates those two things. Each client doesn't just send a package name, it sends a url + package name + version (e.g. foo.registryurl.com/image:version). Because every single client has the detail of "foo.registryurl.com" baked in, it's difficult to change that. I can't change a single "repository-mapping" file that the docker daemon reads to quickly update it.
Instead, I have to update every single client.
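The coupling is easy to see if you parse an image reference by hand. Here is a rough sketch of Docker's naming rules (simplified; the real normalization has more edge cases):

```python
def parse_image_ref(ref: str):
    """Split an image reference into (registry, repository, tag).

    Simplified version of Docker's rules: the first path component is
    treated as a registry host only if it contains '.' or ':' or is
    'localhost'; otherwise the default registry (Docker Hub) is assumed,
    which is the detail baked into every client.
    """
    # Separate the tag: a ':' after the last '/' delimits it
    name, tag = ref, "latest"
    slash = ref.rfind("/")
    colon = ref.rfind(":")
    if colon > slash:
        name, tag = ref[:colon], ref[colon + 1:]

    parts = name.split("/")
    first = parts[0]
    if len(parts) > 1 and ("." in first or ":" in first or first == "localhost"):
        registry, repo = first, "/".join(parts[1:])
    else:
        registry, repo = "docker.io", name  # the hard-coded default
        if "/" not in repo:
            repo = "library/" + repo  # official images live under 'library'
    return registry, repo, tag

print(parse_image_ref("foo.registryurl.com/image:1.2"))
# ('foo.registryurl.com', 'image', '1.2')
print(parse_image_ref("postgres"))
# ('docker.io', 'library/postgres', 'latest')
```

Because the registry host falls out of the name itself, there is no indirection layer equivalent to sources.list that could remap it in one place.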
The idea of decoupling those is not new. In 2014 it was proposed [1], and various implementations that would help make it easier to migrate off the default registry have been proposed and rejected [2]
This doesn't even get into the lack of tooling for chasing down the transitive dependencies building my images has on various registries with each FROM.
To combat this, every single image we use from hub.docker.com is "proxied" into our registry with a one-line Dockerfile:
FROM image:version
Building the "proxy" image and publishing it in our registry is entirely automated (using the CI and registry of a self-hosted GitLab). Then we make everything point to our version in our registry. Should hub.docker.com go belly-up, we have 1. a cache of versions in use (current and past), and 2. full control of the images (possibly making our own FROM scratch) without having to change a single line in downstream consumers. Initially we did this to be safe from hub.docker.com's possibly intermittent availability, which would delay image pulls on deployments.
Do you do it per project or as a separate project that houses all the proxy images? How do you version the proxy images? What namespace do you push them into? Is it easy enough to deal with that it doesn’t waste a lot of time?
I’ve been trying to insulate myself from docker too and the FROM proxy strategy seems to break the least stuff. Have you hit any pain points?
This is a single 'gitlab.example.com/docker/library' project.
We use orphan branches, one per image, although other strategies are possible (like using the commit diff and directory name).
Proxy images are versioned using branch names (e.g. postgres vs postgres-9.6), images are pushed to gitlab.example.com/docker/library/postgres, and using version detection we generate docker image tags (e.g. a 'postgres' branch will create postgres:latest, plus extracting the version from postgres --version also pushes postgres:10 and postgres:10.1 images).
See this .gitlab-ci.yml[0]. Yes, there is one per branch. This can be generalised further (especially with Gitlab's new import system for .gitlab-ci.yml) but works well enough in practice, it's very low maintenance, and updates are a mere commit+push away.
In fact we use this not just for proxying images but for all "generalised", "utility", or "dependency" images that are not the result of a given full-blown app project in its own repo (those have their own CI/CD process in their respective repo)
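The branch-to-tag mapping described above can be sketched roughly like this (hypothetical helpers, not their actual CI script; version detection from `postgres --version` output is simplified away to just the detected version string):

```python
import re

def branch_to_image(branch: str):
    """Map an orphan branch name like 'postgres-9.6' to (image, base_tag).

    A bare branch name ('postgres') maps to the 'latest' tag.
    """
    m = re.fullmatch(r"([a-z][a-z0-9]*)(?:-(.+))?", branch)
    if not m:
        raise ValueError(f"unexpected branch name: {branch}")
    image, suffix = m.group(1), m.group(2)
    return image, suffix or "latest"

def version_tags(detected_version: str):
    """Expand a detected version like '10.1' into all tags to push: 10, 10.1."""
    parts = detected_version.split(".")
    return [".".join(parts[:i + 1]) for i in range(len(parts))]

print(branch_to_image("postgres-9.6"))   # ('postgres', '9.6')
print(branch_to_image("postgres"))       # ('postgres', 'latest')
print(version_tags("10.1"))              # ['10', '10.1']
```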
But here we are talking about Docker Hub closing, which is way easier to solve. This is the registry that is used as default (for all non-local images without a URL identifier, like "alpine:latest").
All you need is an option to set the default registry. Probably it's already there, didn't google.
Does it actually send the URL? I think it uses the URL to make the network request, but doesn’t actually send the URL to the registry. If it did, you could use an alternate registry as a pull through cache and have it go upstream for everything. AFAIK that’s not possible.
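For the Docker Hub case specifically, a pull-through cache is possible: the daemon's `registry-mirrors` option (in `/etc/docker/daemon.json`) makes the daemon try the mirror first and fall back to Hub. It only applies to the default registry, though, not to arbitrary ones, which is exactly the conflation being discussed (mirror URL is a placeholder):

```json
{
  "registry-mirrors": ["https://mirror.example.com"]
}
```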
The registry itself can be self-hosted easily, however everything on top (authentication, web UI, automation, ...) is surprisingly difficult. Plus technical challenges as pointed out by other commenters.
AWS has a registry as well. Both solutions are an easy way to host private and public repos. But Docker Hub offers access to a huge ecosystem that everyone is already plugged into, and it would be devastating if it shut down. You’d potentially have to re-create so many base images from scratch.
Nexus isn’t a terrible option, but the application feels pretty dated from a deployment and maintenance perspective. We’re using it for ruby gems and internal docker registry. I haven’t looked into viable alternatives for the gems side of things—that works well enough—but we’re replacing the docker registry with gitlab.
Their devops for autobuilding, testing, and deploying are really awesome as well. We run an internal GitLab host that handles all deployments for us through their CI/CD interface.
Anyway, yeah, that's insane. Even Google, who constantly shuts stuff down, usually does so with way more heads up. For comparison, Google Reader, a completely free service, shut down with 3.5 months advance notice. Google Wave got almost 6 months notice.
It’s not about features. Nowadays, I’d go with a self-hosted Open Source application from the start. In fact, that’s what I am looking for at this very moment.
I really don't get why this statement is so common. More or less every other hosted reader immediately offered a migration path, so getting out of Reader and up and running somewhere else was really easy.
All Google Cloud GA features will have at least 1 year of deprecation period.
7.2 Deprecation Policy. Google will announce if it intends to discontinue or make backwards incompatible changes to the Services specified at the URL in the next sentence. Google will use commercially reasonable efforts to continue to operate those Services versions and features identified at https://cloud.google.com/terms/deprecation without these changes for at least one year after that announcement, unless (as Google determines in its reasonable good faith judgment):
(i) required by law or third party relationship (including if there is a change in applicable law or relationship), or
(ii) doing so could create a security risk or substantial economic or material technical burden.
I understand that having to abandon ship sucks, but wasn't the whole point of containers that they can be migrated easily? Heck, the whole concept got its name from that idea. So why the fuss?
The APIs of platforms are often totally different. If you went to docker cloud for its simplicity and now have to move to AWS/GCP/Azure/etc. and don't have a dedicated DevOps that knows one of those platforms already, you have no choice other than taking a developer working on features and putting them on learning the new API in a few weeks including testing. ~8 weeks is not enough for that if you are a cash-strapped startup.
Please keep docker swarm going guys, it's a great product.
Docker cloud is no loss (with apologies to those who are using it in production) and will hopefully free up your people to work on other important stuff.
Otherwise if we can continue to use the compose configuration api and the docker deploy/service api with k8s under the hood then I guess that's a reasonable compromise.
Had a startup in this area, with Swarm under the hood, and realised last year (when Rackspace closed Carina) that:
1. Swarm is losing momentum due to the amount of new features, stability, and native cloud integration that k8s brought;
2. containers are being adopted at huge speed, and big cloud providers like AWS and GCE have lots of users and trust, so the effort of offering CaaS is not that big for them, with great chances of success;
3. the hosting market is tough due to competition. DigitalOcean was in this market way before they were called DO;
4. when you want to make money from OSS like Docker does, companies like Red Hat and IBM already have years of head start due to their established sales channels (in 1-2 years from now we will see Tectonic in all RHEL-powered companies, and Docker announcing that it's not supporting UCP any more).
Given the time it takes to migrate the production stack to another system (one or more months), this won't be enough time for many users to migrate. There is no drop-in replacement. It is pretty abrupt of them to give a 60 day notice.
On a meta thought, I wonder what potentially caused this move. It is/was a pretty decent service.
I was logged into docker the other day and saw this notification but I didn’t—and still don’t—know exactly what services they’re referring to. Do they mean everything under cloud.docker.com, the swarms beta, or something else?
I played around with it a long while ago and really liked it, but felt like more moving parts (and particularly on MySQL, which wasn't in our stack) wasn't something I was too keen on. Having someone manage this for us could be worth paying for though.
Rancher, the container orchestration, is pretty much dead too, since it's K8s-based from (upcoming) 2.0 on, i.e. it's becoming a K8s distribution and has to compete with K8s itself, OpenShift and probably upcoming open-sourced edition of Tectonic. RancherOS might live on, even though there is a lot of doubt on why it should do so.
I think you mean "cattle", the container orchestration system in Rancher 1.x
Rancher (and Rancher OS) seem to be doing fine and getting updates constantly.
We're eyeing Rancher 2.x and the Kubernetes integration but it gives me confidence seeing Rancher 1.x getting updated while they are so focused in 2.x and K8s.
Rancher is far from dead. They are essentially making a nice frontend for k8s. All we care about is ease of deployment and management of containers. If that is being achieved by k8s, we don't have a problem. And yes, they are managing the 1.x version nicely with timely updates.
Plus it's a self hosted version so no such fear of it disappearing within 2 months. :)
It does not have hosted service. What we did was to dedicate one VM to its management server and that worked out amazingly well. Everything is inside it and does not disturb other systems.
Docker cloud offered CI/CD integrations and management features similar to K8S that were proprietary to them. While Docker images themselves are portable, this is a whole different beast.
It seems like they are only discontinuing the hosted Swarm part for now and keeping the rest up. That said, it may just be a matter of time before the rest of the stuff goes down.
As I read through this all it feels sort of weird even typing it but I kind of hope Amazon or Microsoft buys Docker rather than Oracle, Google or IBM.
I think Microsoft still employs thousands of great engineers and has been an early embracer of containerization among the large companies out there, and because Satya was a large part of growing Azure into what it is (IMHO a pretty solid set of services), it could make a lot of sense.
I was wondering if you think the whole of Docker Inc. is closing, or something like that. Docker is only shutting down their cloud offering for Docker Swarm, and I think it is to bundle their resources and focus on integrating Kubernetes.
This is why you should write your app code to be independent from your current provider, always thinking that you may have to change (maybe urgently) in the future.
Damn. My last company ran a bunch of production things in Docker Cloud. Contrary to many opinions, I kinda liked it. It was cheap and fairly simple to use. The API was straightforward and way less expansive and cumbersome than something like AWS ECS.
Docker Cloud was getting worse and worse. Tutum (they acquired, rebranded and killed it) was great. But docker team just destroyed it.
That is a shame. Tutum was great since it scaled up and down very well. Nobody thinks about scaling down, but this is important in many ways.
Right now IMO:
- Docker Swarm - scales up and down OK. Had too many bugs, even on stable.
- Rancher - fine, very good for medium deployments.
- k8s - winner for larger scale but scales down badly.
Please keep Swarm mode going. It’s a pleasure to use, easy to set up, works well with Compose, and overall satisfies a lot of use cases. Thanks for your work!
I've been meaning to learn kubernetes for a while now having been a docker compose user for a few years. Swarm seemed super appealing due to easy migration and simplicity. Having gone through a few tutorials for Kube I still feel a little overwhelmed with the volume of configuration, and also how local development is meant to be done. This was always a very cool part of compose.
Docker really needs to provide some transparency around these decisions. People are concerned about the future of the ecosystem. They have every right to shut down aspects of their offering, but the community deserves an explanation. This notice didn’t even attempt to explain the decision. Is Docker stopping development on Swarm? Getting out of the PaaS business? Downsizing?
I've been working on a Docker Cloud alternative for a while now. I'm aiming for something that balances the convenience of Heroku with the Docker experience.
It's still in beta but if anyone wants to check it out it's at: https://codemason.io/
It's a managed application service where you bring your own node, from the cloud or from on-prem. It gives you a dashboard & end to end management of applications running on your nodes.
Genuine question - what makes you think that Kubernetes has won?
Well, "won" in quotes. I am not a k8s fanboy or anything, I simply observe that all the major cloud providers are offering managed k8s services that have superseded their own proprietary container-type offerings. For better or worse that's where the momentum is. If you wanted to containerize your stack right now, k8s on one of the big 3 seems like a safe bet.
I've noticed the same trend, and as a fan of Docker Swarm, along with this news, I'm not happy about it.
Compared to Kubernetes, Swarm is a breeze to setup, deploy and manage. The manifest files are the same Docker Compose files we're used to, just expanded to cover the new stack concepts. It has support for remote storage mounting, advanced networking configuration, various interesting volume and network plugins[0], and is generally a pain-free experience to use (from my admittedly short time with both).
Kubernetes is a fine product. It's just a shame Swarm doesn't seem to have the same traction.
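For anyone who hasn't seen one, a Swarm stack file is just a Compose file with a `deploy` section layered on (a minimal sketch; the image and ports are placeholders):

```yaml
version: "3.3"
services:
  web:
    image: myorg/web:1.0        # placeholder image
    ports:
      - "80:8080"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker
```

Deployed with `docker stack deploy -c stack.yml web`, and the same file (minus `deploy`) still works with plain docker-compose for local development.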
Can someone share their Swarm experience in production, possibly compared to k8s?
We're going with Swarm exactly because of the reasons you've listed. The EE part is a bit flaky sometimes (I'm looking at you, UCP), but Swarm is brilliant.
I don't think that you need to be afraid and migrate just because of that. Swarm is not a service, so if they stopped developing it then you'd have plenty of time figuring out how to move away because your Swarm clusters would not stop working.
Also, there are very large Swarm installations in production at large companies, so I'd be surprised if Docker cancelled the product (which is their flagship).
Do you know if there are any good tools for migrating your manifest/compose yaml files to k8s, especially using more recent features such as configs and secrets?
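One tool in this space is Kompose (a CNCF project), which converts Compose files to Kubernetes manifests. I'm not sure how completely it handles the newer `configs`/`secrets` keys, so check its docs before relying on it:

```shell
# Convert a Compose file into Kubernetes Deployment/Service manifests
kompose convert -f docker-compose.yml
```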
> Well, "won" in quotes. I am not a k8s fanboy or anything, I simply observe that all the major cloud providers are offering managed k8s services that have superseded their own proprietary container-type offerings. For better or worse that's where the momentum is. If you wanted to containerize your stack right now, k8s on one of the big 3 seems like a safe bet.
Kubernetes got released more than a year earlier. And it wasn't until 2016 that people actually started to take docker-swarm seriously. That's a two-year head start...
docker-swarm would've needed several really impactful features to offset that. It didn't really have them.
The only real upside is how easy it is to install and maintain. But that's hardly important with pretty cheap hosted solutions around (at least cheap in comparison to hiring several SREs to maintain the cluster).
> The only real upside is how easy it is to install and maintain. But that's hardly important with pretty cheap hosted solutions around (at least cheap in comparison to hiring several SREs to maintain the cluster)
Indeed - something like AKS can get you up and running very quickly, whereas as recently as 6 months ago unless you had a k8s wizard-guru on staff, there was no point in even trying to go it alone, esp. not into Production.
It also suggests that the skill set of the on-prem k8s expert will decline in demand over time, there will be less demand for people who can set it up from scratch on bare metal. We shall see!