
Re: Blocking SSH attackers



On 10/31/05, Stephen R Laniel <steve@laniels.org> wrote:
> As with a lot of other people, I've noticed lots of attacks
> on SSH recently. Just yesterday, my company got 1,611 failed
> ssh logins within an hour.
>
> Two questions, then -- one specific and one general:
>
> 1) What do y'all use to block attackers like this? It seems
>    to me that anyone who tries to login with a nonexistent
>    login name should be blocked immediately, for at least an
>    hour. Anyone who tries to login as an account like root,
>    and fails more than once, should be similarly blocked. I
>    can imagine encoding certain 'block policies', and
>    writing something based around hosts.deny that enforces
>    it. Is there an accepted "best practice" that works like
>    this?
>
> 2) I've recently moved from administering small networks of
>    Linux machines, to administering a much larger load of
>    them. I'm feeling kind of overwhelmed by the increased
>    scale of my responsibilities, and the increased
>    consequences if I mess something up. My sense is that
>    when the network scales, one starts worrying about things
>    like secure LDAP, preventing more determined attackers,
>    putting /etc under source control, etc. And I wonder
>    whether anyone's documented best practices for larger
>    admin tasks such as these. Any pointers?
>
> Thanks very much,
> Steve
>
> --
> Stephen R. Laniel
> steve@laniels.org
> +(617) 308-5571
> http://laniels.org/
> PGP key: http://laniels.org/slaniel.key
>

Shane has the right idea. When you get into a larger network, it only
makes sense to use tcpwrappers or a firewall on each machine, allowing
only the known hosts that need to connect to each server. Have
separate LANs or VLANs for internal, public, and office traffic. If
server A should never need to contact server Z, block it. Any attempt
to reach the private LANs should go through a bridge box. As much as I
hate VPNs, they have their uses too.
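
For the original question about a hosts.deny block policy, something
like this quick-and-dirty Python sketch would do the job, assuming
sshd logs to /var/log/auth.log and is built with tcpwrappers support.
The thresholds, paths, and the lack of any unban-after-an-hour logic
are all mine; a cron job or a purpose-built tool like denyhosts would
be the less hackish way to run it.

    #!/usr/bin/env python
    # Sketch of a block policy: ban IPs that try nonexistent users or
    # fail root logins more than once, by appending to /etc/hosts.deny.
    # Paths and limits are made up -- tune for your site, run as root.
    import re

    AUTH_LOG   = "/var/log/auth.log"
    HOSTS_DENY = "/etc/hosts.deny"
    BAD_USER_LIMIT  = 1   # one try with a nonexistent user is enough
    ROOT_FAIL_LIMIT = 2   # more than one failed root login

    invalid  = {}   # ip -> failed logins with nonexistent users
    rootfail = {}   # ip -> failed root logins

    invalid_re = re.compile(r"Failed password for invalid user \S+ from (\S+)")
    root_re    = re.compile(r"Failed password for root from (\S+)")

    for line in open(AUTH_LOG):
        m = invalid_re.search(line)
        if m:
            ip = m.group(1)
            invalid[ip] = invalid.get(ip, 0) + 1
            continue
        m = root_re.search(line)
        if m:
            ip = m.group(1)
            rootfail[ip] = rootfail.get(ip, 0) + 1

    already = open(HOSTS_DENY).read()
    deny = open(HOSTS_DENY, "a")
    for ip in set(invalid) | set(rootfail):
        if invalid.get(ip, 0) >= BAD_USER_LIMIT or rootfail.get(ip, 0) >= ROOT_FAIL_LIMIT:
            if ip not in already:        # crude duplicate check
                deny.write("sshd: %s\n" % ip)
    deny.close()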

Use a central syslog server and run the IDS stuff off the syslog
server. It sounds like you're now responsible for more than a dozen
servers. Do you want to have to track syslog events on 20+ servers one
at a time? Imagine the massive amounts of data you would have to go
through on a daily, per-machine basis. It also helps, if a box is
compromised, to be able to track the who, what, where, when and how
(legal stuff).
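
Getting the logs to the loghost is usually just a matter of adding a
line like "*.*   @loghost" to each client's /etc/syslog.conf. If
you're curious what the receiving side involves, here's a bare-bones
sketch of a UDP syslog collector in Python; in real life you'd let
syslogd or syslog-ng do this, and the log path here is invented.

    #!/usr/bin/env python
    # Minimal central syslog collector on UDP 514 (needs root for the
    # low port). Everything lands in one file you can grep, instead of
    # chasing logs across 20+ boxes.
    import socket, time

    LOGFILE = "/var/log/central/all-hosts.log"   # made-up path

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 514))

    out = open(LOGFILE, "a")
    while True:
        data, (host, port) = sock.recvfrom(4096)
        # tag each message with arrival time and sending host
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        msg = data.decode("utf-8", "replace").strip()
        out.write("%s %s %s\n" % (stamp, host, msg))
        out.flush()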

A little bit of thought about what's important and how you deliver
that info is pretty critical. For instance, if your only two DNS
servers all of a sudden quadruple their bandwidth and the CPU hits
99%, that's a big problem and a bunch of people should get paged. If
the mail server all of a sudden gets a bunch of spam, that's not
something you should get woken up over.
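
However you monitor (Nagios, mon, home-grown cron scripts), it's worth
writing that policy down as data rather than keeping it in your head.
A toy sketch, with every check name, threshold, and contact invented:

    # Sketch of "who gets woken up for what" -- names and thresholds
    # are placeholders; the point is that the policy is written down.
    ALERT_POLICY = {
        # check name                   severity  notify
        "dns_cpu_over_95_percent":    ("page",   ["oncall", "netops"]),
        "dns_bandwidth_4x_baseline":  ("page",   ["oncall", "netops"]),
        "mail_spam_volume_spike":     ("email",  ["postmaster"]),
        "disk_over_80_percent":       ("email",  ["oncall"]),
    }

    def route(check):
        # a real monitor would hang its notification commands off this;
        # unknown checks default to a non-urgent email to the on-call
        return ALERT_POLICY.get(check, ("email", ["oncall"]))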

Get to know your network trends on the private, public, and office
LANs. Bandwidth graphs are great; netflow is even better. Monitor
everything. If the accounts payable person is streaming TV on their
workstation and bogging down the LAN/WAN, how are you going to find
out? The same goes for DoS attacks: how fast can you and your ISP
respond to a DoS attack or some other infrastructure-crippling
problem?
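
MRTG or RRDtool graphs fed from SNMP counters are the usual way to get
those trends. Just to show how little the raw sampling takes, here's a
crude sketch that watches one interface via /proc/net/dev; the
interface name and interval are arbitrary, and a real setup would feed
this into a grapher instead of printing it.

    #!/usr/bin/env python
    # Crude per-interface throughput sampler from /proc/net/dev.
    import time

    def read_bytes(iface):
        for line in open("/proc/net/dev"):
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])   # rx bytes, tx bytes
        raise ValueError("no such interface: %s" % iface)

    IFACE, INTERVAL = "eth0", 60
    rx0, tx0 = read_bytes(IFACE)
    while True:
        time.sleep(INTERVAL)
        rx1, tx1 = read_bytes(IFACE)
        print("%s rx %.1f kB/s tx %.1f kB/s" % (
            IFACE,
            (rx1 - rx0) / 1024.0 / INTERVAL,
            (tx1 - tx0) / 1024.0 / INTERVAL))
        rx0, tx0 = rx1, tx1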

What's your plan B? If the web cluster takes a dump during the
company's major marketing tour, that's a really bad thing. Disaster
recovery is something that needs to be thought out _NOW_. How fast can
you rebuild the box? Should you rebuild the box? Do your backups
really work the way you think they do? What impact would the LDAP
services going down have? How can you achieve maximum uptime on the
LDAP services? This is an area that can really screw you over.
Internal IT policies can help mitigate the problem, but you have to
stick to them and change them when it's needed. Solid deployment of
services is key, with built-in redundancy. Did I mention how important
redundancy is? Ask yourself: is that one huge SAN really a good idea?
Can you even support or back up 4TB of data? People forget that
formatting that 4TB system could take DAYS.
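
On the "do my backups really work" question, the only convincing
answer is a test restore. Here's a sketch of a spot check, assuming
you have the backup mounted or restored somewhere like /backup/latest
(a path I made up) so you can compare checksums against the live
files; any real backup tool (tar, rsync snapshots, amanda, ...) has
its own way to pull files back out.

    #!/usr/bin/env python
    # Spot-check a backup: pick random files under /etc, compare their
    # checksums against the copy in the restored/mounted backup tree.
    # Run as root so everything is readable; paths are placeholders.
    import hashlib, os, random

    LIVE_ROOT   = "/etc"
    BACKUP_ROOT = "/backup/latest/etc"   # hypothetical restore location
    SAMPLE = 20

    def md5sum(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    candidates = []
    for root, dirs, files in os.walk(LIVE_ROOT):
        for name in files:
            path = os.path.join(root, name)
            if os.path.isfile(path):
                candidates.append(path)

    for path in random.sample(candidates, min(SAMPLE, len(candidates))):
        backup = BACKUP_ROOT + path[len(LIVE_ROOT):]
        if not os.path.exists(backup):
            print("MISSING from backup: %s" % path)
        elif md5sum(path) != md5sum(backup):
            print("DIFFERS from backup: %s" % path)

Files that legitimately changed since the last backup will show up as
DIFFERS, which is fine -- the point is to notice when nothing restores.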

How do you communicate problems to customers, office workers, and
executives? Forgetting this part will surely get you canned. If you
tell the VP of your department that all the mail services have been
down, hours after it happened, that's bad. Streamlining communication
in the company to handle these types of problems should be addressed.
Some type of outage matrix should be created.
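
The matrix doesn't have to be fancy; even something like the sketch
below (services, contacts, and response times all invented) beats
keeping it in one admin's head, as long as it's written down and
agreed on.

    # Bare-bones outage matrix sketch -- who hears about what, how fast.
    OUTAGE_MATRIX = {
        # service        severity     notify within  who
        "mail":        [("down",      "15 min",  ["helpdesk", "dept VP"]),
                        ("degraded",  "1 hour",  ["helpdesk"])],
        "web cluster": [("down",      "15 min",  ["helpdesk", "marketing", "dept VP"])],
        "ldap/auth":   [("down",      "15 min",  ["helpdesk", "all staff"])],
    }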

It is too easy to get caught up in individual servers and
workstations, but you need to look at this from a larger view: network
uptime, security, and usability. There are trade-offs in any choices
you make. How can you overcome them?

Anyway, my 2 cents.

-Erik-


