
So here's the problem:

Ports under 1024 are privileged on Linux - binding them requires root (or delegated privileges like CAP_NET_BIND_SERVICE). When SSH is on port 22, you have the assurance that unless the server is root-compromised, whatever is listening on that port is what you think it is.

When you make that port 2222 or whatever, like so many people do, you have cut out a lot of noise... but now that compromised PHP application you had running allows someone to race you for the port every time SSH is restarted for an update, or crashes, or whatever. Let's say it wins - now you've got something else listening on the SSH port. If someone using SSH ignores the fact that the host key has changed, now they're trying to log in to a fake SSH server. Maybe you use password auth and you just gave them the password. Maybe they're using an OpenSSH client that is vulnerable to leaking private keys in certain situations. Maybe the fake SSH server pretends to be a real shell and logs whatever actions you take when you SSH in. And they're going to be able to figure out what port SSH is listening on - a fingerprinting port scan can be done in seconds.
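To make the privileged-port point concrete, here's a small Python sketch of what a bind attempt looks like from an unprivileged process (the port numbers are just examples):

```python
import socket

def try_bind(port):
    """Try to bind a TCP socket to `port`; True on success, False if refused."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except PermissionError:
        # Ports below 1024 need root / CAP_NET_BIND_SERVICE on Linux
        return False
    finally:
        s.close()
```

Run as a normal user, `try_bind(2222)` can succeed while `try_bind(22)` raises PermissionError and returns False - and that refusal is exactly the assurance you lose when sshd sits on a high port.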

You are sacrificing security when it comes to a more focused attacker for the sake of filtering out the low effort mass scans and basic brute forces. The thing is - I'm worried about the former, not the latter.

If you care enough about securing or filtering your SSH to go through any of this trouble, just set up a VPN on a separate machine and restrict SSH access via firewall to that machine. Spin up the smallest VM your cloud provider has, throw up wireguard on it, and you're good to go. It'll be plenty for a VPN that's basically just there for SSH access. Now someone has to have both an exploit for wireguard and an exploit for SSH to get into your machine that has things you care about, you've filtered out all the noise, and you haven't introduced new security risks for a more determined attacker.
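As a sketch of how small that jump VM's setup can be, a minimal wg-quick style config might look like this (addresses and key placeholders are illustrative, not from the post):

```ini
# /etc/wireguard/wg0.conf on the VPN VM (illustrative values)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] section per admin machine
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

The protected machine's firewall then only accepts SSH from the 10.8.0.0/24 tunnel subnet.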



What you mention makes sense if all of the wild potential problems exist on the system. You've already fucked up leaving password authentication enabled on your theoretical machine.

It's pretty trivial to leave SSH on port 22 and just forward a high number port to it while blocking 22 externally. All the root-not-compromised assurances and still maintaining the high pass filter.
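For what it's worth, the iptables side of that is only a couple of rules - a sketch, with the interface name and external port as placeholders:

```shell
# Redirect external TCP 2222 to the sshd still bound on port 22.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j REDIRECT --to-ports 22
# Drop connections that were aimed directly at 22 (the redirected ones
# had an original destination port of 2222, so they still get through).
iptables -A INPUT -i eth0 -p tcp -m conntrack --ctorigdstport 22 -j DROP
```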

While a VPN certainly can be a good solution to securing a machine, you've now got the problem of the VPN server needing its access protected.

Public key only for authentication and strict fail2ban rules combined with port forwarding makes for a very tight system. Not invulnerable but secure enough to not be worth the effort.

As I said originally and still maintain, using non-standard ports is a high pass filter. It's not a security measure. It might be part of your security setup but it's just a filter.


It doesn't necessarily need to be a high port either.

All bots will check port 22 first and then move on to others. You can have a port sniffer watching this and block any attempts on your actual <1024 port if they've also hit port 22.

It works a treat. Shodan, begone. I agree with your high pass sentiment.
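That tripwire idea can be sketched with iptables' `recent` module (950 as the real SSH port is just an example):

```shell
# Anyone probing port 22 lands on a blocklist...
iptables -A INPUT -p tcp --dport 22 -m recent --name scanners --set -j DROP
# ...and is then silently dropped on the real SSH port for an hour.
iptables -A INPUT -p tcp --dport 950 -m recent --name scanners --rcheck --seconds 3600 -j DROP
iptables -A INPUT -p tcp --dport 950 -j ACCEPT
```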


> It's not a security measure. It might be part of your security setup but it's just a filter.

I think most people saying "security by obscurity" 1) know it's a fallacy and 2) know that by using that phrase the reader/audience knows what they mean, without having to explain a phrase like "high pass filter". It's a bit pedantic to rant about the use of terms.


> You've already fucked up leaving password authentication enabled on your theoretical machine.

You could also be exposed to this with key based auth - older openssh clients have a security flaw where they can leak keys, and you could also have a 'fake' shell that still allows them to gain a lot of details about the inner working of your systems.

>It's pretty trivial to leave SSH on port 22 and just forward a high number port to it while blocking 22 externally. All the root-not-compromised assurances and still maintaining the high pass filter.

Yep! Certainly can.

>While a VPN certainly can be a good solution to securing a machine, you've now got the problem of the VPN server needing its access protected.

There's a few things here:

First, Wireguard uses key-based auth, so it's pretty secure by default. Additionally, it doesn't show up in port scans - if you don't send the correct key, you get no response from the wireguard server. You'll get the same 'open|filtered' response from nmap and similar that you would for a UDP port with nothing listening. (I don't think this particularly matters - I don't really care if someone knows I've got WireGuard running; I would run it even if you could tell I was. See the following paragraph.)
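A rough Python sketch of the scanner's-eye view - this mimics how a simple UDP probe classifies responses, not nmap's actual implementation:

```python
import socket

def probe_udp(host, port, timeout=2.0):
    """Send a junk datagram and classify the response like a basic scanner."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.connect((host, port))
    try:
        s.send(b"\x00" * 32)       # garbage: WireGuard silently ignores it
        s.recv(1024)
        return "open"              # some service answered
    except socket.timeout:
        return "open|filtered"     # silence: mute listener (WireGuard) or filtered
    except ConnectionRefusedError:
        return "closed"            # kernel surfaced an ICMP port-unreachable
    finally:
        s.close()
```

WireGuard and an empty, DROP-firewalled port both land in the same `open|filtered` bucket.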

Second, if the VPN is compromised (say, from a key leak), the worst they get is access to whatever the VPN network is. I don't trust clients just because they're on the VPN, and neither should you - they have all the same auth and security requirements I would put in place for something on the public internet. They still need to get past key auth for SSH (or have a zero day for it). If there's a zero day for wireguard itself, well, now they have access to that box and can potentially use it for nefarious purposes, but I don't keep any private data for my company or our customers on it, so I replace it and upgrade to a version of wireguard without the exploit. They still need to somehow get past SSH auth, or use another zero day. I think the chances of simultaneous public zero days for both are pretty much nil, and someone willing to burn two private zero days on you means you're up against an attacker with enough resources that you're probably screwed to begin with.

>Public key only for authentication and strict fail2ban rules combined with port forwarding makes for a very tight system. Not invulnerable but secure enough to not be worth the effort.

Frankly, I think public-key-only auth is most likely enough for 99.999% of everyone to begin with. I don't think fail2ban or port forwarding from a nonstandard port to 22 matters overmuch, even for filtering logs. If you're going for the "realistically good enough" setup - key only, no root login, only personal nonstandard usernames allowed - the log noise doesn't matter, because you can just grep for failed attempts against the usernames you care about and ignore everything else. The only thing you really have to fear is an OpenSSH 0day, which running on a nonstandard port isn't gonna save you from either. It might buy you a little time to get it patched and not get hit in the first wave, but that's about it.
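The "grep for the usernames you care about" idea is trivial to script - a sketch, with alice/bob as hypothetical usernames, matching sshd's standard "Failed ... for USER from IP" log lines:

```python
import re

# Hypothetical usernames you actually use; everything else is scanner noise.
WATCHED = {"alice", "bob"}

# Matches sshd's "Failed password for [invalid user ]USER from IP" lines.
FAILED = re.compile(r"Failed \S+ for (?:invalid user )?(\S+) from (\S+)")

def interesting_failures(lines):
    """Return (user, source_ip) pairs for failed logins against watched names."""
    hits = []
    for line in lines:
        m = FAILED.search(line)
        if m and m.group(1) in WATCHED:
            hits.append((m.group(1), m.group(2)))
    return hits
```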


I guess that if you manage to crash the high-port forwarder service and then listen on that high port yourself, you have taken over SSH on the high port.


Someone setting this up well on Linux would be using iptables forwarding, not a user-space service listening and forwarding. Now this hypothetical attack has to be able to manipulate the kernel's network stack, and we might as well stop pretending ports below 1024 have special significance in that case, either.


If you can crash or modify iptables then you pretty much have free access to the system anyway. No need to expose a fake SSH host - just swap the ssh binary itself for an infected one.


Well, I've simply changed my SSH port to a different port <1024 (e.g. 950). That invalidates the rest of your argument.


I was confused about that as well. There are plenty of privileged ports you can use.


True but they're still much easier to find in a scan. Most scans will prioritize them.


It's still better than putting it in the same, expected spot. Plus, port scanning is often a red flag behavior (for instance, it's against EC2's terms of service to perform it from one of their machines), and it can be detected.

It's like putting your key under your door mat vs. some loose brick near your back door. Sure, someone can still find it (I don't recommend the real-life equivalent), but assuming you have a camera and/or nosey neighbors, there's a good chance the presumed invader is going to look suspicious enough to garner unwanted attention.


Oops I didn't know about the scanning thing. I often do this from my hosted VPS to my other systems, to make sure all ports that I plan to have closed are actually closed.

Especially when it involves IPv6 networks and each internal device has its own IP, this can involve a lot of scanning. I haven't had any complaints but good point, I could get banned for that.


It does reduce the noise in the logs. I also use pam_shield to drop packets from the scanners.


>If someone using SSH ignores the fact that the host key has changed

Here's the real problem.

Also, if you're worried, you could use SELinux, AppArmor, or something like Docker to limit the capabilities of a compromised PHP application.


It is a major issue. Things like autoscaling have ingrained a lot of bad habits in people, though.

You should definitely use SELinux or AppArmor - I do for any production workload.

But for any production workload I don't allow SSH over the public internet. I require a VPN connection to a subnet that only has jumphost(s), the jumphost access logs are shipped to an ELK stack, SSH requires both a key + MFA to login to the jumphost, and SSH between production servers is blocked and alarmed on.


SSH onto the prod servers should also be denied. You say autoscaling, so do you have an image? If yes, then why do you need prod SSH access anyway? If a box is acting up, kill it and let the ASG create a new one.


Sometimes you have trouble reproducing an issue outside of prod, even with things like tcpreplay or blkreplay. You could just kill off a problematic instance, but then you have trouble knowing why there was a problem to begin with. Grey failures might not be obvious in logs or metrics.

The idea that you never ever have to SSH into a production server is a nice ideal, but I've never seen it survive reality unless you just shrug about issues occurring and don't mind not being able to root cause them.


> If someone using SSH ignores the fact that the host key has changed

Then it's game over regardless of anything else. That someone is now owned by anything and anyone who cares to MITM them.


Yes, it is an issue for sure.

But it's also a world where people have grown accustomed to doing this - automatic scaling in the cloud with re-use of IPs means that systems frequently come up with the same IP, so ignoring the warning has become a habit.

For the stuff I manage in the cloud, I use the provider's APIs to keep my known_hosts file up to date - if an IP got reused by a new instance, I clear the entry in my known_hosts file. If it didn't, I would see the key-change alert (verified by modifying the host key on a destination server).
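The known_hosts bookkeeping itself is simple - a Python sketch of clearing a reused IP's entry (plain, non-hashed entries only; a hashed known_hosts would need `ssh-keygen -R` instead):

```python
def remove_host(known_hosts_text, ip):
    """Drop plain known_hosts entries whose host field includes `ip`."""
    kept = []
    for line in known_hosts_text.splitlines():
        fields = line.split(" ", 1)
        # First field may be a comma-separated list like "host,192.0.2.10"
        hosts = fields[0].split(",") if fields and fields[0] else []
        if ip not in hosts:
            kept.append(line)
    return "\n".join(kept)
```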


Why not directly setup wireguard on the same machine as the ssh server?

Wireguard doesn't answer anything during port scanning, or when it receives unauthenticated packets - it simply ignores anything that doesn't authenticate.

Actually, I'd think that wireguard + telnet should be good enough, though ssh has a lot more features.


> Wireguard doesn't answer anything during port scanning, or when it receives unauthenticated packets

While I agree that UDP port scanning is harder than TCP, since you cannot just batch-send SYNs on every port, it really depends on how you set up iptables and how sophisticated the port scan is.

Most scanners will consider that not receiving an ICMP/port unreachable is a sign that some UDP service is listening. This could be prevented by a default DROP instead of REJECT to confuse scanners, but it has other annoying implications. I bet most people out there do use REJECT so a UDP service such as wireguard would be immediately spotted because of its lack of response.
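The scanner-visible difference comes down to the default rule for unmatched UDP - a sketch:

```shell
# Default REJECT: closed UDP ports answer with ICMP port-unreachable,
# so a port that stays silent (e.g. WireGuard) stands out from its neighbors.
iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable

# Default DROP: every probed port is silent, so WireGuard blends in.
# iptables -A INPUT -p udp -j DROP
```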

Also note that if the attacker is willing to invest a bit more time in the scan, they will most likely have a Wireguard probe.


Wireguard doesn't respond at all if you do not send the correct key from the start, so I'm not sure how you would write a probe for it. You receive the exact same response as if you send a UDP packet to a port with nothing listening.

Most real world firewall configurations I have seen use DROP instead of REJECT, so anecdotally I'm not sure about your claim.

I don't think it's particularly worrisome if someone knows I'm running Wireguard, though.

If there's a wireguard zero day, they gain no more access to the rest of my servers than if I had port 22 open to the internet - and I don't think that's a significant risk with key-based auth to begin with. If there's an openssh zero day, then yeah, that could be trouble, but now someone needs to have zero days for both wireguard and openssh to get into my prod servers.

If they've got two private zero days for that and they're willing to burn them on me, I'm an incredibly high value target and pretty much everything we're discussing here isn't going to be enough to save me, and I've got much bigger problems.


> Wireguard doesn't respond at all if you do not send the correct key from the start, so I'm not sure how you would write a probe for it.

Is that really so? I'm not all that familiar with Wireguard but that seems like a debugging nightmare if the client has no way to get any error pointer from the server. At least my experience from setting up some IPsec infrastructures is that client logs are essential for troubleshooting.

> Most real world firewall configurations I have seen use DROP instead of REJECT, so anecdotally I'm not sure about your claim.

I do advocate for DROP, but most configurations I see are default REJECT. DROP has a bunch of disadvantages that most people don't want to deal with. It messes with TCP because of the lack of ICMP responses, and overall it makes troubleshooting harder because you end up with programs hanging and timing out instead of failing instantly.

Actually the use of REJECT is so widespread that it makes my life easier. When multiple levels of firewalls are involved, I can be quite sure that if the program hangs, it's a rule on my side, while a RST tells me that it's somewhere else.

> I don't think it's particularly worrisome if someone knows I'm running Wireguard, though.

Agree, though the subject is just port scanning here, not what happens beyond that.


>Is that really so? I'm not all that familiar with Wireguard but that seems like a debugging nightmare if the client has no way to get any error pointer from the server. At least my experience from setting up some IPsec infrastructures is that client logs are essential for troubleshooting.

Yep! Which, you're right, can make wireguard troubleshooting a pain... but it also mostly 'just works' with significantly less configuration overhead and chance of messing things up than your general IPSec IKE setup. IKEv2 definitely makes things nicer, but it's still not as generally painless as wireguard.


DROPping a UDP packet costs much less than REJECTing it. I cannot imagine why one would use REJECT for those.

I don't know why you'd mention them in the firewall at all. The kernel knows which ports it is listening on, and drops the rest without prompting.


> DROPping a UDP packet costs much less than REJECTing it. I cannot imagine why one would use REJECT for those.

Because 99.99% of people out there use iptables, with a single default rule for unmatched packets, whether it be UDP or TCP.

> I don't know why you'd mention them in the firewall at all.

Because you surely don't want your hosts to expose something unexpected to the internet.

> The kernel knows which ports it is listening on, and drops the rest without prompting.

Nope, and that's precisely the subject of the discussion here. If nothing is listening, the sender receives a nice ICMP "port unreachable".


OK, that's news.

Is that the same (externally seen) effect as a REJECT?


A wireguard zero day could in theory result in you getting owned in that situation. It also is just easier to manage if you have multiple servers.

I don't think it's particularly likely, but I don't think OpenSSH server zero days are particularly likely either.


> When you make that port 2222 or whatever, like so many people do, you have cut out a lot of noise... but now that compromised PHP application you had running allows someone to race you for the port every time SSH is restarted for an update, or crashes, or whatever.

This attack is mitigated quite strongly if you have selinux enforcing and aren't running networked services in unconfined domains.

Even if you have networked services that aren't confined in the default targeted policy, you can probably learn to write policy for them in a day or two, although poor documentation can make for a steep learning curve.


If you're going about that level of security (and you should be!) - why are you bothering with SSH being open to the public internet to begin with?

This is what's weird to me about this whole argument - people can come up with lots of ways to secure this, yet aren't willing to do one of the things that will provide the most security while also offering a high pass filter in blocking SSH access to the public internet. No log noise, no chance of a zero day hitting it, one key being compromised is no longer enough to result in an access breach.


In my case it's because (not counting the machines I get paid to admin, which are indeed behind a VPN) I only admin one isolated VPS, so without a dedicated bastion I feel that the benefit of a VPN is reduced. I just secured it according to the DISA STIG plus some more intrusion detection and stronger selinux confinement.

Adding a dedicated bastion would double my monthly costs, but SELinux costs me nothing if the targeted policy covers my applications, or like half an hour of my time per service if I have to write my own policy modules.

Although, I should point out I'm playing devil's advocate here because my ssh is still on port 22.


I agree, but with any team of size > 1 you then have to manage access. I've seen small teams share a common credential, and getting SSO to work OOTB with that VPN infrastructure in SCM is a pain. Additionally, if you end up putting more behind that VPN (e.g., DBs, monitoring, etc.), it's arguably net worse than the port-shift IMO.


> just set up a VPN on a separate machine and restrict SSH access via firewall to that machine.

So create a tunnel to create a tunnel? Why not port knocking or just white listing static IPs?


Listen on both ports and firewall out 22?



