By
Adam "StoneTable" Israel (mailto:stone@arstechnica.com),
Jorge "whiprush" Castro (mailto:jorge@whiprush.org)
We're back, and hitting the ground running, so strap in and get your Linux.Ars
on. Samba 3.0 was released recently, and we thought it would be unfair to the
project to tack it onto the end as a bullet. We'll get to Samba in an upcoming edition.
Thanks to ElectricElf, philmes, cshieb, and yugami for
stepping up with this issue while we were under the weather.
If
Game.Ars (http://arstechnica.com/etc/games/index.html) has its own logo, then Linux.Ars should get one too! There are no rules,
but good design sense should tell you to avoid flashing items, respect the
color scheme somewhat, and keep the size down. We'll take our favorite entries
and show them to you next week when the next edition goes up. If one really
jumps out and grabs us, we may declare a winner then, or we might put it to a
poll. Like true root grubbing, power-mad freaks, we just don't know what we'll
do. The winner will get a shirt of his/her choice, a luscious Ars mug, and a
shotglass to boot. Oh, and eternal fame, of course. Bonus points for using the
GIMP! Send your entries to both Adam and Jorge (addresses above).
KNOPPIX (http://www.knopper.net/knoppix/) (visit the "KNOPPIX" link in the English or German text paragraph to get
the full site) has released a new version, 3.3, which includes a large number of
package updates including XFree86, KDE, and OpenOffice.
For those who are unfamiliar with KNOPPIX, it's one of the original
"live" CDs. That is to say, simply put the CD in your drive, reboot, and you
have a fully functional Linux environment running right off the CD. No
modifications are done to one's hard drives, hardware is autodetected, and the
CD includes approximately 1,200 applications and utilities. KNOPPIX is certainly
one of the most mature "live" CDs available, and is also an easy way to install
Debian (http://www.debian.org/).
Such a tool has a myriad of uses. The author uses it primarily as a demo
CD, to demonstrate to people how pretty the Linux/KDE desktop can be, and to
show how much hardware detection in Linux has advanced. Others use it for
network diagnostics and auditing, since KNOPPIX includes a plethora of mature
and well-proven networking utilities. Last but definitely not least is system
recovery. A user's operating system install (Windows, for example) can
be completely trashed, and KNOPPIX will be able to help the user recover all the
data; it includes an array of partitioning and filesystem tools. One user the
author knows even gave up their operating system of choice entirely in favour of
KNOPPIX alone, after having recovered their (almost!) lost data with it. For
others, in a pinch it can provide a fully-functional desktop while waiting for
somebody to repair their standard system.
Peruse the KNOPPIX
mirror list (http://www.knopper.net/knoppix-mirrors/index-en.html) to find a download site near you, or use the convenient
BitTorrent link (http://torrent.unix-ag.uni-kl.de:6969/). If you've never used Linux before, try KNOPPIX, trust us. In related
Live CD news,
Morphix (http://www.morphix.org/) has also released a point release, 0.4-1, of their modular distribution.
Better hurry on these two; there's also a new
Slackware (http://www.slackware.org/) and
Mandrake (http://www.mandrake.org) around the corner as well. Whew!
Don't worry, KDE users, we haven't forgotten you; in fact, the KDE world
is cranking up for the 3.2 release. First of all, stable users should make sure
to pick up
KDE 3.1.4 (http://www.kde.org/announcements/announce-3.1.4.php), which brings bugfixes and security patches to the stable line. Those wanting
to play with the latest and greatest builds can check out the
KDE 3.2 Alpha 2 (http://download.kde.org/download.php?url=unstable/3.1.92/src/) release, as well as the excellent
konstruct (http://developer.kde.org/build/konstruct/) script, which can build both stable AND unstable versions of KDE. Debian
users can pick up CVS snapshot packages
here (http://opendoorsoftware.com/cgi/http.pl?p=kdecvs;frame=1), Gentoo folk can follow this thread to get in on the action. If you know
of other packages available for other distros, please feel free to let us know.
While you're building, catch up to the latest CVS commits using
KDE-CVS-Digest, a weekly summary of what the KDE developers are committing to
the project; a MUST read for all KDE fans. The planned features for 3.2 are
here (http://developer.kde.org/development-versions/kde-3.2-features.html), and you can keep track of the schedule
here (http://developer.kde.org/development-versions/kde-3.2-release-plan.html); as always, keep track of all things KDE on
The Dot (http://dot.kde.org). Lastly, congratulations to KDE on winning Best Desktop Environment and
Best Web Browser at the
Open Choice Awards 2003 (http://www.ofb.biz/modules.php?name=News&file=article&sid=265). Kool! Want to see Ars take on KDE again? Let us know if you'd like
a review-length piece on this, and we'll see what we can do.
ATI released long-awaited proprietary
drivers (http://www.ati.com/support/drivers/linux/radeon-linux.html) for Linux users. They're currently quite rough; they only support Red
Hat, and there have been reports of setup/configuration difficulties. It is,
however, a start which we're happy to see. As stock XFree86 drivers don't
currently support 3D acceleration for any cards more advanced than the Radeon
9100, these drivers may be your best option if you require it. ATI provides a
FAQ (http://www.ati.com/support/faq/linux.html) as well. Hopefully ATI will provide specifications to XFree86 developers
so that the drivers can become a standard part of every distribution, requiring
no fiddling on the part of the end-user.
If you do try these drivers out, please report your success or your
failure in the
Linux Kung Fu Forum (http://episteme.arstechnica.com/eve/ubb.x?a=tpc&s=50009562&f=96509133&m=5830970485).
IBM's developerWorks has
an article (http://www-106.ibm.com/developerworks/linux/library/l-boot.html) about an interesting use of the make application: parallel booting of
the operating system.
One gripe about Linux is that the boot process is fairly slow. Once the
kernel is up and running, it starts services. Typically the init
system does this sequentially: starting one service, waiting for it to finish,
then moving on to the next. IBM discusses using make (with the -j flag we love so much when
compiling) to start up services in groups based on their dependencies. While they
provide files so the kids can play along at home, we hope distribution developers are
reading developerWorks too, as this behavior by default would make service
management much easier.
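The core idea can be sketched with a toy Makefile: each service becomes a make target, its dependencies become prerequisites, and make -j starts independent targets concurrently. The service names and echo stand-ins below are purely illustrative, not IBM's actual init files:

```shell
# Sketch of dependency-driven parallel startup (illustrative service names).
# Each target "starts" a service; prerequisites encode what must come first.
# Recipes sit after a semicolon so no tab-indented lines are needed.
cat > /tmp/initorder.mk <<'EOF'
all: sshd cron httpd
network: ; @echo "started network"
syslog: ; @echo "started syslog"
sshd: network syslog ; @echo "started sshd"
cron: syslog ; @echo "started cron"
httpd: network ; @echo "started httpd"
EOF
# -j4 lets up to four independent "services" start concurrently;
# network and syslog can run in parallel, and sshd waits for both.
make -j4 -f /tmp/initorder.mk
```

Replace the echo commands with real init-script invocations and you have the gist of the developerWorks approach.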
Plenty of Gentoo users are already involved with these techniques; head
on over to the
Gentoo Forums (http://forums.gentoo.org/viewtopic.php?t=71620) to catch up on some of them, and as always, Gentoo's awesome
documentation (http://www.gentoo.org/doc/en/rc-scripts.xml) has tips that may apply to your distro, too.
Over the last year, Red Hat has expanded its commercial software range
with a number of new "Enterprise" products. These are heavily tested, have long
(18-24 month) release cycles, and carry long (>3 year) support periods. While this is
ideal for people deploying Linux commercially and brings the sort of product
stability one expects from the likes of Sun and Microsoft, many started to
wonder about the "enthusiast" Red Hat Linux distro.
One of the great things about Open Source is the rapid rate of
development. Unfortunately, this can make producing a commercial boxed
distribution rather difficult. Getting everything thoroughly tested, releasing
betas and release candidates, producing artwork, pressing CDs, and writing
manuals takes a long time - and often results in a product filled with software
two or three versions behind the "state of the art".
In July
we covered (http://arstechnica.com/etc/linux/2003/linux.ars-08202003.html) the announcement of the Red Hat Linux (RHL) project. This was heralded
as a drastic change of direction; the enthusiast product would be opened up,
external developers would be given more control and releases would come as often
as every four months. The product would also not be available in shrinkwrapped
form.
The new announcement details that RHL is to be merged with the
Fedora (http://www.fedora.us/index-main.html) project, a community-run repository for aftermarket RPMs. The project
will focus on developing a "complete, general purpose operating system" built
"exclusively from free software". Third party contributors will be encouraged,
application repositories (in the style of Debian) will be developed and frequent
releases will be made.
So what does this all mean to Joe Linux User? As of this week, we can
consider "Red Hat Linux" a retired product. After years of faithful service,
it's being split into two entities, the Fedora Project, and Red Hat Enterprise
Linux (RHEL). Fedora is a community project, while Red Hat Enterprise
Linux is the commercial product, currently available for $350-$2999. Here's the
breakdown of how Fedora and Red Hat Enterprise Linux relate:
By now you're probably thinking "great, what about all those RH boxes I
already have, what will happen when I need security updates?" or "This is
ridiculous, I can't afford RHEL." Remember that the free product now effectively
belongs to the community, and where there's demand, supply tends to follow; we expect a healthy
support infrastructure to spring up around Fedora, giving existing RH users
plenty of options. There's nothing stopping groups of people from supporting Fedora
versions indefinitely; in fact, one can already find support
and security updates for Red Hat distributions as old as 6.1. This also means
that Red Hat can now protect its trademarked name, while still providing the
community with a Free distribution. As for Fedora itself, catch up on the
new terminology (http://fedora.redhat.com/participate/terminology.html) and check out the
roadmap (http://fedora.redhat.com/participate/roadmap/) for more details.
We think this is an exciting development. A Linux distro with a large
repository of packages, community support and involvement, support for up2date,
apt, and yum, frequent releases and proper commercial backing can only be a good
thing. Pick up
Fedora Core test2 (http://fedora.redhat.com/docs/release-notes/) and let us know what you think.
The GRUB Boot Loader
Disclaimer: As with anything involving your bootloader,
this could possibly render your system unbootable. Please only do this if you're
willing to face the consequences.
Every Linux user has probably used LILO once or twice in their Linux
lifetime. They might have also forgotten to run /sbin/lilo once or twice after a
kernel update, and ended up with a non-bootable system. In this situation, you'd
best have your LILO rescue floppy ready, or you won't be booting your system (LI
LI LI LI...). This is why many of us have switched to the
GRUB Boot Loader (http://www.gnu.org/software/grub/).
GRUB is a fairly neat boot loader. It doesn't strictly need a configuration file
at all: you can use its "boot shell" to give the boot loader commands and edit
boot options on the fly. Normally, though, GRUB reads a menu.lst file, similar in role to /etc/lilo.conf,
that tells it how to boot your system. Everything you can do in menu.lst you can
do in the boot shell, using the same syntax. Unlike LILO, you do not have to reinstall
GRUB every time you change the configuration file, as it reads
"/boot/grub/menu.lst" from your hard drive at boot time by default. We're
going to cover the basics of GRUB in this section. As with anything, when in
doubt, consult the LKF or head on over to #linux on irc.arstechnica.com.
First, get GRUB. If your distribution is apt-enabled, an apt-get install
grub should fetch the proper package. Consult your distribution's documentation
if you are unclear.
Next, run grub-install /dev/hdX, where hdX is the target drive
(or partition). Pay attention to the output that grub-install gives you; it will
tell you which drive is mapped where:
turkish:/home/chris# grub-install /dev/hda
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not. If any of the lines is incorrect,
fix it and re-run the script 'grub-install'.
(fd0) /dev/fd0
(hd0) /dev/hda
Now that GRUB is on your MBR, it's time to create the configuration file.
Open /boot/grub/menu.lst in your favorite text editor. In GRUB, drives and
partitions are numbered from zero, so generally /dev/hda is (hd0) and
/dev/hda1 is (hd0,0). This is where the output of grub-install comes in handy;
it tells you the identifier GRUB assigns each drive. Here's an example
configuration file:
timeout 30 # time, in seconds, before GRUB loads the default image
default 0 # the image GRUB loads after the timeout; GRUB numbers images from 0, so this picks the first entry below
title Linux 2.4.xx # the name shown for this image on the GRUB boot screen
root (hd0,1) # the GRUB root partition. NOT NECESSARILY THE "/" PARTITION; it's the partition on which the kernel executable itself lies (i.e. /boot)
kernel /vmlinuz-2.4.xx root=/dev/hda5 # the path to the kernel you want to load, followed by any kernel parameters you need. Not necessarily the absolute path in Linux, but the path on the partition you chose as root.
Double-check that you set all the paths for YOUR OWN SYSTEM; don't just
copy mine. This is critical.
For Windows users: you can add these lines to menu.lst to add a "Windows"
boot option:
title Windows
rootnoverify (hd0,0) # points Windows at the first partition on the first drive (/dev/hda1), without trying to mount it
makeactive
chainloader +1
And that's it! Time to reboot and see if you did it right. If worst comes
to worst, you can use the grub prompt to fix what you might have missed, or
throw in that rescue disk. For more information and advanced settings, consult
the
GRUB Manual (http://www.gnu.org/manual/grub/html_mono/grub.html), ideally before you even attempt this. Although it might be confusing to the lilo-savvy
at first, GRUB will quickly earn its place as one of your most trusted tools. If
you've got any other GRUB tips, make sure to share them in the discussion.
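If a menu entry turns out to be wrong, you can also boot entirely by hand from the grub prompt, using the same commands menu.lst uses. Here is a sketch reusing the example paths from the configuration above (adjust them to your own system):

```
grub> root (hd0,1)
grub> kernel /vmlinuz-2.4.xx root=/dev/hda5
grub> boot
```

These are the same root and kernel lines from the example menu.lst; the boot command then starts the kernel you selected.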
One of the Samba camp's lesser-known contributions is
distcc (http://distcc.samba.org/). Ever wish you could tie all your computers together into one massive
compiling machine? This week's Cool App of the Week will help you save time
compiling programs; it's extremely useful in a work environment, and a near
necessity for larger programs. distcc is a distributed compile manager
with an absolute minimum of requirements (each machine need only be running distcc) and
a minimum of setup.
Performance-wise, distcc directly competes with costly distributed build
systems like
TeamBuilder (http://www.trolltech.com/products/teambuilder/), approaching close to linear scalability (three identical systems make a
build ~2.6 times faster). Setup is pretty simple: you'll need gcc (the GNU
Compiler Collection) on each box you want to use. Don't worry, gcc comes with
every distribution. Then install distcc on all the machines. Binaries and
documentation weigh in at a "hefty" 41kB. If your distribution doesn't supply a
package, download it from
distcc.samba.org (http://distcc.samba.org/download.html).
distcc comes with a daemon named distccd; if your distro doesn't have a
package for distcc, you'll have to set up the daemon by hand. For security
reasons, if it is started by root, pass the --user option to have distccd
run under a less privileged user.
For simple setups you can execute distccd from the command line, or add
distccd to one of your init scripts ('distccd --user compilefarm', for example).
There is zero configuration for distccd unless you need to get picky about bound
addresses/ports or which user it runs as. Once distccd is running, your machine is
ready to act as a distcc "node".
Once you have all of them set up, you can use one machine as the
'control machine', which is the box you are going to start the compile from. On
this box you need to set the DISTCC_HOSTS environment variable. This contains
a list of all the hostnames you just started distccd on.
For the example setup, you can set DISTCC_HOSTS with either hostnames or
IPs, whichever is easiest:
localhost$ export DISTCC_HOSTS='localhost dkny killerloop fossil'
localhost$ export DISTCC_HOSTS='localhost 192.168.0.1 192.168.0.2 192.168.0.3'
Now your compile farm is complete. Time to test it. Since some programs
look for "gcc" or "cc", you might need to export CC='distcc' to ensure that
distcc is being used. The variables used to specify the compiler in a Makefile
are fairly standard: CC for C compilers and CXX for C++ compilers. If your
project is a mix of both, you will need to override both flags (or hand-edit the
Makefile). distcc recommends you set make's -j flag to TWICE the number of hosts in
the environment variable:
make -j8 CC=distcc CXX=distcc
would be a good place to start on a network with 4 PCs.
Happy compiling!