Linux on IBM Power5s

From: Arthur Corliss <acorliss@nevaeh-linux.org>
Date: Wed Jul 06 2005 - 09:20:25 AKDT

Greetings:

Just thought I'd throw a few ideas out there, now that I've had some decent
playing time with Linux on IBM's Power5 (in a p5 570). In a nutshell, it's
awesome, and if IBM can simply get the commercial software vendors to
support Linux PPC, they'd blow the doors off the blade market.

With micropartitioning you can create partitions with as little as 1/10th
of a processor, giving you a total of 40 partitions you can host in a
single 4-way CEC. Throw in a virtual I/O server partition (basically just
a specialised AIX install) and it becomes practical to do so, since you
can now put all of your system disks, etc., on an external SAN *without*
having to have a dedicated FC HBA on each Linux partition. The hosts
think they're talking to locally attached SCSI DASD via a virtual SCSI
card (supported in the 2.6 kernel).
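
Just to give a flavour of it, the VIOS-side mapping is only a couple of
commands. Take this as a rough sketch rather than gospel: the LV and
device names here are made up, and vhost0 is whatever virtual SCSI server
adapter your HMC wired to the client partition:

    $ mklv -lv dnsroot_lv rootvg 4G
    $ mkvdev -vdev dnsroot_lv -vadapter vhost0 -dev dns_vtd

The client partition's 2.6 kernel (the ibmvscsi driver) then just sees an
ordinary locally attached SCSI disk.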

2.6 also adds support for virtual NICs, so you can aggregate your local
traffic over one physical port (or several, bonded with LACP) hosted by a
VIO partition. The hypervisor provides full layer-2 switching with 802.1Q
VLAN support, at about 3Gbps total throughput.
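
The client side of that is all stock 2.6 (the ibmveth driver); the LACP
aggregation itself lives on the VIO server's physical ports. A hedged
example, assuming eth0 is your virtual NIC, VLAN 100 is defined on the
hypervisor switch, and with a made-up address:

    $ modprobe 8021q
    $ vconfig add eth0 100
    $ ifconfig eth0.100 10.0.100.5 netmask 255.255.255.0 up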

Then look at the processors: starting with the Power4 you had two full
CPU cores on a single die. The Power5 adds a pseudo-hyperthreading
capability (SMT) to each core, so SMT + SMP on one chip gives you
effectively four CPUs' worth of capacity (depending on your actual
workload).
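
You can see this from inside a partition: give a Linux LPAR a single
dedicated Power5 chip with SMT on and the stock /proc interface should
report four logical processors:

    $ grep -c '^processor' /proc/cpuinfo
    4

And if SMT turns out to hurt your particular workload, the ppc64 kernel
takes an smt-enabled=off boot option (at least in the trees I've seen).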

How about DLPAR: want to add processors, memory, or I/O cards to a
partition? You don't even have to interrupt the running host OS.
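
From the HMC command line it's a one-liner. A sketch with made-up names
(a managed system called p570 and a partition called dns01); yours will
differ:

    $ chhwres -r proc -m p570 -o a -p dns01 --procunits 0.1
    $ chhwres -r mem -m p570 -o a -p dns01 -q 256

The first adds another tenth of a processor to the partition's
entitlement, the second adds 256MB of RAM, and the running kernel picks
both up without a reboot.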

Barring IBM hiring all of DEC's old marketing people, I can't see how
these kinds of capabilities won't wipe the floor with blade servers. Why
waste an entire blade sitting idle running DNS, or lose the attached disk
space? Give it a tenth of a processor, and carve an LV from a LUN managed
by the VIO server's LVM that's no bigger than you really need. Getting
hit a little hard? Throw in another tenth of a processor on the fly, then
take it back when the system load drops.

Imagine a fully loaded 570 with 4 4-way CECs: in 18U of rack space (4U
per CEC, plus 1U each for the HMC and console) you can have up to 160
partitions (16 processors at ten micropartitions apiece) managed as one
seamless pool of processor, memory, and I/O capacity that you can
allocate as you see fit. Well, you do need to add at least another 7U for
the SAN, but hey, that's still less than half a rack, and it's more
cost-effective than a blade chassis that gives you the same level of
redundancy without any of the flexibility.

Of course, this did force me to move to the 2.6 kernel for my ppc port of
Nevaeh Linux, so it's not all roses. If we're lucky Linus will fork a 2.7
branch so 2.6 can really stabilise.

BTW, you can run Red Hat and SuSE on the 570, so some of your favourite
distros are available for it. I did try SuSE on it for about a week, but
I had to stop when I broke out in hives. ;-) And from what I've read, Red
Hat really screws you on the licensing: you have to license even a .1
processor partition as an 8-way machine, and any partition with more than
8 processors requires a 16-way license. So in my current configuration
I'd have to license my 4-way CEC for 112 processors. :-P To SuSE's
credit, I only have to license the number of physical processors.

        --Arthur Corliss
          Bolverk's Lair -- http://arthur.corlissfamily.org/
          Digital Mages -- http://www.digitalmages.com/
          "Live Free or Die, the Only Way to Live" -- NH State Motto