RampART/computers

Once upon a time there was 'the MacLab', a handful of old iMacs made to run a Linux distro called Yellow Dog, almost as a joke. What had been conceived as a media lab became London's second hacklab - a room full of skipped PCs, humming and buzzing and consuming vast quantities of electricity.

Originally the hacklab machines all ran an operating system called FreeBSD, installed and maintained by BSD evangelist Chris. However, as time went by, other people rebelled and wanted other operating systems, so a few Debian, Ubuntu or PC-BSD installs appeared in the mix.

The original intent of the hacklab never really materialised, and when Chris started spending less time maintaining the computers and users seemed more interested in viewing porn or pissing around on MySpace, the hacklab was shut down...

Jump forward some period of time - post Cavell Street and Bowl Court... the rampART still has loads of computers, none of them in use, and no internet...

A New Regime

Inspired by the work of Bristol Wireless and the stuff we did together at the Camp for Climate Action, along with an upsurge in energy at the rampART, we pulled out all the computers, audited what we had, and set up an Ubuntu LTSP server plus almost a dozen diskless thin clients.

LTSP 5 on Ubuntu 8.04

My god this is so easy now! When I first attempted to set up a thin client system it was REALLY hard work. Now it is a piece of piss. Since Ubuntu 7.04, LTSP 5 has been a package that you can simply install to set up a fully functional LTSP server in no time at all.
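For the record, the whole thing boils down to a couple of commands (a minimal sketch, assuming a dedicated server and the default i386 client image; the package and command names are the ones from the Ubuntu LTSP documentation):

 sudo apt-get install ltsp-server-standalone openssh-server
 sudo ltsp-build-client

The first pulls in the LTSP server along with the DHCP and TFTP services the clients need; the second builds the client environment that the diskless machines boot into.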

The Server

The server we chose was a 1.7GHz Pentium 4, which we loaded up with as much memory as we had available - 1GB at the time of writing (the more memory in the server, the better the clients will perform).

In theory (according to the Edubuntu wiki) the LTSP server should have 256MB of RAM plus 128MB for each attached client. We have just 1GB, so that should cope happily with six simultaneous clients (256 + 6 x 128 = 1024MB).
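Turned round the other way, that rule of thumb gives the number of clients a given server can support. A one-liner to check (the 1024, 256 and 128 are just the figures quoted above):

 # clients = (total RAM - 256MB base) / 128MB per client
 echo $(( (1024 - 256) / 128 ))    # prints 6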

The hard drive in the server was actually rather small - ideally there should be two to ensure duplication and backup (a full RAID system would be even better).
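For anyone doing the same: the simple mirrored-pair setup can be done with Linux software RAID. A rough sketch, assuming two blank partitions at /dev/sda1 and /dev/sdb1 (adjust for your actual drives):

 sudo apt-get install mdadm
 sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
 sudo mkfs.ext3 /dev/md0

Anything written to the resulting /dev/md0 is then automatically duplicated across both drives.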

Now that we have it running and know what we are doing, we might switch to using another machine for the server - a 2.4GHz-equivalent Athlon, also with 1GB of RAM.

The Clients

Completely diskless workstations - low power consumption, low noise and zero maintenance: that was the goal.

We quickly configured eight machines to boot via PXE using Ethernet cards with boot ROMs. And just because we could, we set up two more machines to boot via PXE despite not having an Ethernet card with a boot ROM, by using a boot image on a floppy disk (Google rom-o-matic and Etherboot to find out how).
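A note for anyone reproducing this: the clients locate the server and their boot file via DHCP, which the ltsp-server package configures through /etc/ltsp/dhcpd.conf. The stock file looks roughly like this (the addresses are examples - adjust them for your own network):

 subnet 192.168.0.0 netmask 255.255.255.0 {
     range 192.168.0.20 192.168.0.250;
     option broadcast-address 192.168.0.255;
     option routers 192.168.0.1;
     next-server 192.168.0.1;
     filename "/ltsp/i386/pxelinux.0";
 }

next-server tells the card which machine to fetch the boot file from, and filename is the path to the PXE loader under the server's TFTP root.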

These machines range from 400MHz Pentium II up to 1GHz Pentium III, all with 128MB of RAM, simply because that's what we had available. These specs are unnecessarily high; we could have been using complete shit like a 200MHz Pentium I with perhaps 64MB.

Bootable Network Cards

While lots of machines had BIOS support for network booting, most of our network cards did not, as they had no boot EPROM. We looked through what we had and found eight with boot ROMs (sadly we'd just recently had a big clear-out and appear to have thrown away hundreds of cards).

Floppy Boot Images

Those computers without BIOS support for network booting were made to boot by placing a custom boot image on a floppy which is left in the drive. The process was fairly painless and the boot time is hardly different from those booting from EPROM.

See http://www.rom-o-matic.net/gpxe/gpxe-0.9.3/contrib/rom-o-matic/
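Roughly: pick your network card's chipset from the rom-o-matic page, download the generated floppy image, and write it straight to a floppy. A sketch, assuming the image was saved as gpxe.dsk and the floppy drive is /dev/fd0:

 sudo dd if=gpxe.dsk of=/dev/fd0

The machine then boots from the floppy, which does nothing but bring up the network card and carry on with a normal PXE boot.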

In Practice

The system seems to work great and should be a piece of piss to maintain compared to a similar number of standalone machines. The system can be set up with individual logins for different people, guest accounts, and even a kiosk mode which just boots to a web browser and prevents the user from doing anything else. All user data is stored on the server, which could have two hard drives mirroring each other to provide an automatic backup. If a workstation is faulty it can be pulled out and replaced with no loss of data. No hard drives etc. means less noise and heat, so a nicer working environment.
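Most of that per-client behaviour is controlled from a single lts.conf file on the server (on Ubuntu it lives under the TFTP root, e.g. /var/lib/tftpboot/ltsp/i386/lts.conf). A sketch of the sort of thing that's possible - the MAC address and guest account below are made up for illustration:

 [default]
 SOUND = True
 LOCALDEV = True

 # one particular workstation, picked out by its MAC address,
 # logged straight in to a shared guest account
 [00:11:22:33:44:55]
 LDM_AUTOLOGIN = True
 LDM_USERNAME = guest
 LDM_PASSWORD = guest

Sections apply per client (by MAC address), so one box can be a locked-down guest terminal while the rest offer normal logins.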

Power Savings

Power-wise it is pretty good too (although nowhere near as good as the laptop-based systems of Bristol Wireless). The server clocks up between 60 and 100 watts (not including monitor) while the workstations use something like 26 to 35 watts depending on the spec (again, not including monitors). To get the lower figure, unneeded items like hard drives and CD-ROM drives need to be unplugged.

A typical 17-inch CRT monitor seems to draw about 50W depending on the brightness, while an LCD screen of a similar size draws under 20W. Sadly we don't have LCD monitors, so obviously the power consumption is higher than it could be - approx 75W per workstation (excluding server etc.) compared to perhaps 45W if we had LCD screens.

Interestingly, all the computers draw 4 watts even when 'off', as modern computers don't really have power on/off switches but instead sit in 'standby mode' awaiting the instruction to power up.

LARC Next?

If this was implemented at LARC it would eradicate the virus and malware problem, but put all our proverbial eggs in one basket, as all the computers would stop working if the server went down. However, the scheme would introduce much improved security generally for the users of the LARC computers, and the entire file system could also be encrypted. There are issues to consider - like how to deal with switching on the server when somebody wants to use a computer, and the desirability of replacing all the old CRT monitors at LARC with LCD ones to create much-needed space and reduce power consumption. These monitors can be had for £20 each second hand on eBay now, so it might be worth buying five and then banning CRT monitors from LARC.
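On the switch-on question, one possibility (not tested here - just a suggestion) is Wake-on-LAN: if the server's BIOS and network card support it, any machine on the network can wake it by sending a 'magic packet'. A sketch using the wakeonlan package, with a made-up MAC address standing in for the server's:

 sudo apt-get install wakeonlan
 wakeonlan 00:11:22:33:44:55

The snag is that you still need one machine already running to send the packet, so it doesn't entirely solve the problem on its own.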

Anyway, I'd love some input from people about the computers at LARC. I know there has been a long-running issue with the state of the computers there, and many people have offered to fix things without progress actually being made. I'd be up for coming in with a fast server from the rampART and setting up four or five diskless workstations, training people how to administer the server and user accounts, and teaching people what I have learnt about thin client systems and LTSP, if that is desired.