scaling_firewalls

This is part of The Pile, a partial archive of some open source mailing lists and newsgroups.



Date: Wed, 03 Jan 2001 14:56:57 -0800
From: Dan Beimborn <dan@celticmusic.com>
To: svlug@lists.svlug.org
Subject: [svlug] Firewall Scaling

>How exactly does one scale firewall hardware against pipe bandwidth? I've
>got a client that is currently running FW-1 on a managed Solaris box, and
>they want to replace this box with a Linux or OpenBSD box (and end
>the monthly cash drain from this managed "solution"; the monthly fees add
>up to about $40,000 per year for this thing). The pipe into their
>cage has 5 Mbit/sec guaranteed bandwidth, with support for bursts of up to
>100 Mbit/sec. They are really averaging 1.5-2 Mbit/sec of
>traffic, but I would like the box to scale as their traffic grows.
>
>Anyone have hardware guidelines to follow for the building of this box?

You can address all three likely bottlenecks (RAM, NICs, CPU) at once with
plenty of RAM, good NICs, and a decent CPU. Another approach is to build a
cluster. With Checkpoint this gets expensive quickly in licensing, but you
can do it with IPChains fairly cheaply. My company has a commercial product
called RainWall that does load balancing/failover across multiple firewall
nodes, generally so that even a busy site can run a complex rule set by
sharing the load. The product page is
http://www.rainfinity.com/us/eng/products/rainwall/index.html

(Though it's not emphasized there, it works with IPChains on Linux; it does
 not *require* Checkpoint.)

Without our product, you can still do load balancing on Linux with a
number of approaches. The model we follow (at a very basic level) is to
have the machines monitor each other's load/availability and then send a
gratuitous ARP to take over the incoming IP of the other machine if it is
overloaded or unavailable. A rough sketch of that idea is below.
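
Here is a minimal Python sketch of that monitor-and-takeover idea. The peer
address, service IP, and interface names are made-up placeholders, and it
leans on the iproute2 "ip" tool and iputils "arping" for the actual
takeover; it illustrates the basic model only, not our product or a
complete solution.

import subprocess
import time

# All of these values are hypothetical placeholders for this sketch.
PEER_IP = "192.168.1.2"       # the other firewall node
SERVICE_IP = "192.168.1.10"   # the incoming IP the peer normally owns
IFACE = "eth0"                # interface facing the upstream router
CHECK_INTERVAL = 5            # seconds between health checks


def peer_alive():
    """One ICMP echo with a short timeout; a real cluster product would
    use richer load/availability checks than a single ping."""
    rc = subprocess.call(
        ["ping", "-c", "1", "-W", "2", PEER_IP],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return rc == 0


def take_over():
    # Bring the peer's service IP up on our own interface ...
    subprocess.call(["ip", "addr", "add", SERVICE_IP + "/24", "dev", IFACE])
    # ... and send gratuitous ARPs so the upstream router and neighbours
    # update their ARP caches to point at this box instead.
    subprocess.call(["arping", "-U", "-c", "3", "-I", IFACE, SERVICE_IP])


if __name__ == "__main__":
    while True:
        if not peer_alive():
            take_over()
            break
        time.sleep(CHECK_INTERVAL)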

Anyway, on your original question: a minimal configuration of roughly a
P3/400 with 512 MB of RAM will run a pretty busy Linux/IPChains site
(or Netfilter if you're feeling brave about prerelease kernels!). Most
of the slowdown comes when you have a lot of traffic to run
through a lot of rules. My experience is that RAM is the first
bottleneck, with CPU becoming one at much higher loads. Watch the output
of "free" (or "watch -n 1 free"!) to see whether you ever swap to disk,
and add RAM if so. A quick way to script that check is sketched below.
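
For example, a tiny script along these lines (just a sketch that reads the
kernel's /proc/meminfo, the same place "free" gets its numbers) can warn
you whenever any swap is in use:

def meminfo():
    """Parse /proc/meminfo into a dict of kB values."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields = rest.split()
            # Only keep lines whose value is a plain number of kB.
            if fields and fields[0].isdigit():
                info[key.strip()] = int(fields[0])
    return info


if __name__ == "__main__":
    m = meminfo()
    swap_used = m["SwapTotal"] - m["SwapFree"]
    if swap_used > 0:
        print("Swapping: %d kB of swap in use -- time to add RAM" % swap_used)
    else:
        print("Not swapping (%d kB RAM free)" % m["MemFree"])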

If you design with a cluster mentality, you can always scale by
adding nodes. So far we have tested up to 20 nodes, but that is
really mega-portal territory.

===

the rest of The Pile (a partial mailing list archive)

doom@kzsu.stanford.edu