"That which is overdesigned, too highly specific, anticipates outcome; the anticipation of outcome guarantees, if not failure, the absence of grace."
-- William Gibson, All Tomorrow's Parties
September 25, 2009

I've spun off my work-related ramblings over here. You can tell it's hardcore, because it's green text. Like jwz.

August 29, 2009

Our build files live on a Solaris 10 NFS server. The build client lives in a zone on a separate host. The build files are exported via v3 and tcp to the client.

Periodically the client would hang and require a zone reboot. Needless to say, this was astoundingly annoying if you didn't realize it had hung until you had started your build or push processes. An init 6 always fixed it... for a while.

Looking at snoop on the NFS server, it looked like a bunch of tcp:664 packets came in and went... nowhere. They hit the interface and vanished. Gee, I thought. That's odd.

Finally I got sick of this, and Googled around and found some references to port 623, a Linux bug that sounded pretty similar, and other Solaris users experiencing the same problem.

The first post is really the most useful. Different port, but same behavior.

Since creating the rmcp dummy service in inetd and restarting the zone, the problem has not resurfaced.
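For the record, the workaround amounts to something like this (service names and the /bin/true trick follow the posts above; 623 is RMCP, 664 is secure RMCP, the port our snoop showed):

echo "rmcp         623/tcp" >> /etc/services
echo "secure-rmcp  664/tcp" >> /etc/services
echo "rmcp stream tcp nowait root /bin/true /bin/true" >> /etc/inet/inetd.conf
echo "secure-rmcp stream tcp nowait root /bin/true /bin/true" >> /etc/inet/inetd.conf
inetconv    # convert the legacy entries to SMF services and enable them

The idea being that once inetd owns those ports, the NFS client can never pick them as source ports, so the management chip has nothing left to eat.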

It's pretty interesting that this particular bug manifests because a chip on the motherboard eats traffic silently. "Interesting", anyway.

August 14, 2009

Co-worker asked for this. After a few minutes poking at the Makefile, I just Googled and hit this page, which gave me what I needed.

Yay for lazyweb.

July 1, 2009

Someone on Sun managers asked for advice on moving from Linux to Solaris and tips on living with Solaris in general. I guess I kind of have a lot to say about it, actually...

One thing I forgot to mention is using SMF. You may have two software repositories (Sun's and pkgsrc), but you only want one place to manage the actual services. Write SMF manifests! It's easy, and you can use Puppet to manage it all.
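A minimal sketch of the workflow (the manifest path and FMRI here are made up; site/ is the conventional spot for local manifests):

svccfg validate /var/svc/manifest/site/myapp.xml
svccfg import /var/svc/manifest/site/myapp.xml
svcadm enable site/myapp
svcs -l site/myapp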

From: Bryan Allen <bda@mirrorshades.net>
To: Jussi Sallinen
Cc:
Bcc:
Subject: Re: Looking for tips: Migrating Linux>Solaris10
Reply-To: bda@mirrorshades.net
In-Reply-To: <20090624113312.GA32749@unikko>
AIM: packetdump

+------------------------------------------------------------------------------
| On 2009-06-24 14:33:12, Jussi Sallinen wrote:
|
| Im new to Solaris and about to start migrating Linux (Gentoo) based E450 server
| to V240 Solaris 10.
|
| Currently running:
|
| -Apache2
| -Postfix
| -Dovecot
| -MySQL
|
| About 70 users using WWW and email services.
|
| So, to the point:
| In case you have tips and tricks, or good to know stuff please spam me with
| info regarding migration.

A quick note: I work for a company where I migrated all our services from Linux
on whiteboxes to Solaris 10 on Sun hardware. It was a major effort, but
garnered us many benefits:

* Consolidation. Thanks to the faster hardware and Zones, we are down from 50+
Linux boxes to a dozen Sun systems. And for honestly not that much money.
* Much greater introspection (not just mdb or DTrace; the *stat tools are
just that much better)
* Before ZFS, we were mostly sitting on reiserfs (before my time) and XFS
(which I migrated as much as I could to before getting it on ZFS). ZFS has
been a huge, huge win in terms of both reliability and availability.

This turned out to be quite an article, but here are some "quick" thoughts on
using Solaris particularly, and systems administration in general:

* Read the System Administrator Guides on docs.sun.com if you are new to
Solaris
* No, seriously. Go read them. They are incredibly useful and easy to parse.
* Follow OpenSolaris development, either via the mailing lists or #opensolaris
on freenode. This gives you a heads-up on stuff that might be getting into
the next Solaris 10 Update, so you can plan accordingly.

* Use a ZFS root instead of UFS (text installer only, but you really want to
use JET -- see below)
* Use rpool for the operating system and zone roots only
* Set up a tank pool on separate disks (see the sketch below)
* Delegate tank/filesystems to zones doing the application work

This minimizes the impact of random data I/O on the root disks, and vice
versa (just a good practice in general, but some people try to use a
single giant pool).

It also avoids the situation where a pool that has filled up is left spinning
platters hunting for free blocks to write to, impacting the operating system or
application data.
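Concretely, the layout above boils down to something like this (device and zone names are made up; the installer creates rpool for you):

zpool create tank mirror c0t2d0 c0t3d0    # data pool on its own spindles
zfs create tank/www1                      # per-zone data, delegated to the zone later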

* Use Martin Paul's pca for patching

The Sun patching tools all suck. pca is good stuff. You get security and
reliability patches for free from Sun; just sign up for a sun.com account.

You don't usually get new features from the free patches (you do from the paid
ones), but regardless, all patches are included in the next system Update.
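Day to day it looks something like this (flags from memory, so double-check against pca's own documentation; if I remember right, missingrs means missing Recommended/Security patches only):

pca -l missingrs    # list missing Recommended/Security patches
pca -d missingrs    # download them (with your sun.com account configured)
pca -i missingrs    # install them -- ideally into a fresh LiveUpgrade BE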

* Learn to love LiveUpgrade

With ZFS roots, LiveUpgrade became a lot faster to use. You don't have a real
excuse anymore for not building an alternative boot environment when you are
patching the system.

Some patches suck and will screw you. Being able to reboot back into your
previous boot environment is of enormous use.
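The whole cycle is something like this (BE name, patch directory, and patch ID are made up):

lucreate -n patched                                    # clone the running BE; cheap on ZFS root
luupgrade -t -n patched -s /var/tmp/patches 141444-09  # apply patches to the alternate BE
luactivate patched
init 6                                                 # use init/shutdown, not reboot, after luactivate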

* Use NetBSD's pkgsrc

Solaris 10 lacks a lot of niceties you and your users are going to miss:
screen, vim, etc. You can use Blastwave, but it has its own problems. pkgsrc
packages will compile basically everything without a problem; they are good
quality, easy to administer, and easy to upgrade.

If you're running several machines rather than a single box, set up a dedicated
build zone/host and use PKG_PATH to install the binary packages on the other
systems. Since you are using a single machine, see below about loopback
mounting the pkgsrc directory into zones: compile once, use everywhere.
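On the clients that looks roughly like this (build host and path are made up):

PKG_PATH=http://pkgbuild.example.net/packages/All
export PKG_PATH
pkg_add screen vim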

The services you listed are available from pkgsrc and work fine. The one thing
you might want to consider instead is using Sun's Webstack and the MySQL
package, as they are optimized for Solaris and 64-bit hardware.

In addition to the above, we use pkgsrc on our (dwindling number of) remaining
Linux hosts. It means we have a *single version* of software that may be
running on both platforms. It separates the idea of "system updates" from
"application updates" rather nicely, with little overhead.

* Use Solaris Zones

Keep the global zone as free of user cruft as possible. If you segment your
services and users properly, zones make it incredibly easy to see what activity
is going on where (prstat -Z).

It also makes it easy to manage resources (CPU, RAM) for a given set of
services (you can do this with projects also, but to me it's easier to do at
the zone level).

Install all your pkgsrc packages in the global zone and loopback mount it in
each zone. This saves on space and time when upgrading pkgsrc packages. It also
means you have one set of pkgsrc packages to maintain, not N. It's the same
concept as...

* Use Sparse Zones

They are faster to build, patch, and manage than full root zones. If you have
recalcitrant software that wants to write to something mounted read-only from
the global zone, use loopback mounts to give it a ZFS filesystem mounted
read-write where it expects (e.g., if something really wants to write
to /usr/local/yourface).

I also install common software in the global zone (e.g., Sun's compiler,
Webstack or MySQL) and then loopback mount the /opt directory into each zone
that needs it (every zone gets SSPRO).
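The zonecfg for that is a few lines (zone name made up; the same pattern works for /usr/pkg):

zonecfg -z www1 <<EOF
add fs
set dir=/opt
set special=/opt
set type=lofs
add options ro
end
commit
EOF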

* Delegate a ZFS dataset to each zone

This allows the zone administrator to create ZFS filesystems inside the zone
without asking the global admin. Something like rpool/zones/www1/tank. It's
easier to manage programmatically too, if you are using something like Puppet
(see below) to control your zones: you only have to edit a single class (the
zone's) when migrating the zone between systems.
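Which is just this (zone name made up; assuming rpool/zones/www1 already exists as the zone's container dataset):

zfs create rpool/zones/www1/tank
zonecfg -z www1 <<EOF
add dataset
set name=rpool/zones/www1/tank
end
commit
EOF

The zone needs a reboot to see the new dataset.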

* Use ZFS Features

No, really. Make sure your ZFS pools are in a redundant configuration! ZFS
can't automatically repair corrupted data if it doesn't have another copy to
repair from.

But: ZFS does more for you than just checksumming your data and ensuring it's
valid. You also have compression, trivial snapshots, and the ability to send
those snapshots to other Solaris systems.

Writing a script that snapshots, zfs sends | ssh host zfs recvs is trivial. I
have one in less than 50 lines of shell. It gives you streaming, incremental
backups with basically no system impact (depending on your workload,
obviously).
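The guts of it are just this (pool, filesystem, host, and snapshot names are all made up):

zfs snapshot tank/mail@2009-07-01
zfs send -i tank/mail@2009-06-30 tank/mail@2009-07-01 | \
    ssh backuphost zfs recv -F tank/backup/mail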

Note that if disk bandwidth is your major bottleneck, enabling compression can
give you a major performance boost. We had a workload constantly rewriting
30,000 SQLite databases, each between 5MB and 2GB (SQLite reads the file into
memory, creates temp files, and writes the entire file back to disk). It was
incredibly slow until I enabled compression, which gave us a 4x write boost.
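Turning it on is one command, and only affects newly written blocks (filesystem name made up):

zfs set compression=on tank/sqlite
zfs get compressratio tank/sqlite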

You can also delegate ZFS filesystems to your users. This lets them take a
snapshot of their homedir before they do something scary, or whatever.
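For example (user and filesystem made up):

zfs allow alice snapshot,rollback,mount tank/home/alice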

* Use the Jumpstart Enterprise Tool

Even though you only have one Solaris system, if you're new to Solaris, chances
are you're going to screw up your first couple installs. I spent months trying
to get mine just the way I wanted. And guess what: installing Solaris is
time-consuming and boring.

Using JET (a set of wrappers around Jumpstart, which can also be annoying to
configure), you have a trivial way of reinstalling your system just the way you
want. I run JET in a virtual machine, but most large installs would have a
dedicated install VLAN their install server is plugged into.

Solaris installs have a concept of "clusters", which define which packages are
installed. I use RNET, the smallest one. It basically has nothing. I tell JET to
install my extra packages, and the systems end up configured exactly how I want.
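Assuming RNET here is the reduced-networking metacluster, the corresponding line in the Jumpstart profile is just:

cluster SUNWCrnet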

You use the finish scripts to do basic configuration after the install, and
to configure the *rest* of the system and applications, you...

* Use a centralized configuration management tool

I use Puppet. It makes it trivial to configure the system programmatically,
manage users and groups, and install zones. It's a life- and time-saver. In
addition to making your system configuration reproducible, it *documents* it.

Puppet manages both our Solaris and Linux boxes, keeping each in a known,
documented configuration. It's invaluable.

I also store all my user skel files in source control (see next), and distribute
them with Puppet. Users may be slightly annoyed that they have to update the
repository whenever they want to change ~/.bash_profile, but it will be the
same on *every* host/zone they have access to, without them doing any work,
which will make them very happy.

* Store your configs in a source control manager

Both your change management and your system configuration should all be
versioned. Usefully, you can use your change management to manage your system
configs!

We have an internal directory called /sw, where we deploy all our software.
Older services have configs hard-coded to other locations, so we use Puppet to
ensure symlinks exist as appropriate. We deploy to /sw with a script that
checks the tree out of git and rsyncs it to all machines. It's pretty trivial,
and very useful if you have more than, say, two hosts.
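A sketch of that sort of deploy script (repository URL, host list, and staging path are all made up):

#!/bin/sh
# check the tree out of git and push it to every host's /sw
set -e
STAGE=/var/tmp/sw.$$
git clone -q git://git.example.net/sw.git $STAGE
for host in www1 www2 mail1; do
    rsync -az --delete $STAGE/ $host:/sw/
done
rm -rf $STAGE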

/sw is also a loopback mount into every zone, and read-only. It enforces the
idea that all config changes must go into the repository, *not* be changed
locally... because developers can't write to /sw just to fix something quickly.

* Solaris Sucks At: Logging, by default

The default logging setup is awful. Install syslog-ng from pkgsrc, and write
your logs to both a remote syslog server and the local disk (enable compression
on your logs' ZFS filesystem!).

* Solaris Sucks At: Firewalling

ipf is a pain in the butt. Unless you absolutely have to do host-based
firewalling, set up an OpenBSD system and use pf.

...

I'm sure I could think of quite a lot more (DTrace, Brendan Gregg's DTrace
Toolkit, RBAC, mdb), but it's dinnertime. :)

Hopefully the above will prove somewhat useful!
--
bda
cyberpunk is dead. long live cyberpunk.


May 28, 2009

Watching Outlander (2008), and spamming irk while doing so. It stars James Caviezel. You know. Bondage Jesus. He plays an alien who crashlands in Viking-era Norway (Earth being an abandoned alien seed colony).

At one point, Space Jesus has a bunch of Vikings building a trap for the Space Dragon.

< bda> "Is it deep enough for ya?" "No. Four more feet. And when you're done, I need two rows of postholes running up both sides." "Postholes. What do you need postholes for?" "...posts." "<dirty face>"
< bda> Jesus needs a postholer.
< bda> C'mon, no takers?
< bda> "What does JESUS need with a POSTHOLER?!"
< rjbs> I'm not going there.
< rjbs> Nobody fucks with the Jesus.
< ejp> I was going to say something, but I got hung up.
* bda groans.
< ejp> happy to help.

It's a pretty decent Beowulf story, with surprise John Hurt and Ron Perlman. Hard to argue with that.

May 20, 2009

bda
Starting hypnosis next week.
That should be alarming.

kitten
For what?

bda
To learn how to hack my brain.

kitten
Uh.. huh.
They tried that with me once.
Didn't work.

bda
That's what they want you to think, but you still bark anytime anyone asks you the time.

April 16, 2009

A nice high-level writeup by OmniTI's Mark Harrison on Zones, ZFS, and Zetaback.

[via Theo S.]

April 8, 2009

I've been meaning to blog this for a while. Very useful in Jumpstart finish scripts.

# point the console at the second serial port, 115200 8N1
eeprom console=ttyb
eeprom ttyb-mode="115200,8,n,1,-"
# make sure the asy driver knows about the second ISA UART (0x2f8, IRQ 3)
echo "name=\"asy\" parent=\"isa\" reg=1,0x2f8 interrupts=3;" >> /kernel/drv/asy.conf
# tell ttymon on the console login service to use the 115200 ttydefs label
svccfg -s system/console-login setprop ttymon/label = 115200
svcadm refresh system/console-login
svcadm restart system/console-login
# drop the GRUB splash screen and pass the console setting to the kernel
perl -pi -e 's/^splashimage/#splashimage/' /rpool/boot/grub/menu.lst
perl -pi -e 's/\$ZFS-BOOTFS$/\$ZFS-BOOTFS,console=ttyb/' /rpool/boot/grub/menu.lst
bootadm update-archive

reboot

April 1, 2009

So I have a device failing in one of my zpools:

                     extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 fd0
    0.0    2.0    0.0    8.0  0.0  0.0    0.0    0.1   0   0   1   0   0   1 c0t0d0
    0.0    2.0    0.0    8.0  0.0  0.0    0.0    0.1   0   0   1   0   0   1 c0t1d0
    0.0    0.0    0.0    0.0  0.0 10.0    0.0    0.0   0 100   1   3   4   8 c0t2d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c1t2d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c1t3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c1t4d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c1t5d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c3t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   6   2   0   8 c4t0d0
                     extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t1d0
    0.0    0.0    0.0    0.0  0.0 10.0    0.0    0.0   0 100   1   3   4   8 c0t2d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c1t2d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c1t3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c1t4d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c1t5d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c3t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   6   2   0   8 c4t0d0

etc...

It's part of a mirror:

  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     6     2
            c0t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0

errors: No known data errors

So I reckon I'll just offline it and go replace it.

[20090401-17:20:12]::[root@shoal]:[~]$ zpool offline tank c0t2d0
cannot offline c0t2d0: no valid replicas
[20090401-17:31:15]::[root@shoal]:[~]$

err... what?

So I detach it from the mirror instead, which does work.
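Roughly (and I'd expect the reverse, once the disk is swapped, to be a zpool attach):

zpool detach tank c0t2d0
# later, after replacing the disk, mirror it back onto its former partner:
zpool attach tank c0t3d0 c0t2d0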

I ask jmcp if he has any insight into why this might be, and after a few minutes he asks if disconnecting the device works.

[20090401-18:01:57]::[root@shoal]:[~]$ cfgadm -c disconnect c0::dsk/c0t2d0
cfgadm: Hardware specific failure: operation not supported for SCSI device

So that's the culprit, I think. A disconnect is implicit when doing a zpool offline?

Not a good error to throw back to the user, either.

March 18, 2009

So Linux has a history of hosed db interfaces. Apache worked around this about ten years ago by including their own SDBM in their distribution.

pkgsrc separates its Apache packages into DSOs. So mod_perl, mod_fastcgi, mod_ssl, etc., are built as separate packages. However, when you compile Apache 1 with no SSL, it disables SDBM, so mod_ssl (which requires some sort of DBM) fails.

The PR is here.

My workaround was to do this:

ap-ssl$ bmake patch

ap-ssl$ vi /usr/pkg/pkgsrc/www/ap-ssl/work/mod_ssl-2.8.31-1.3.41/pkg.sslmod/libssl.module

Search for the first instance of APXS.

Add the following two lines above it:

APXS_MODE="yes"

my_rule_SSL_SDBM="yes"

And ap-ssl will compile happily.
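Then finish up as usual (bmake picks up from the already-patched work directory):

ap-ssl$ bmake install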

March 1, 2009

Over the last two weeks we (read: rjbs) migrated our Subversion repositories to git on GitHub. I was not very pleased with this for the first week or so. By default, I am grumpy when things that (to me) are working just fine are changed, especially at an (even minor) inconvenience to me. That is just the grumpy, beardy sysadmin in me.

After a bit more of a talking-to by rjbs, things are again smooth sailing. I can do the small amount of VCS work I need to do, and more importantly: I am assured things I don't care about will make the developers' lives much, much less painful, which is something I am certainly all for.

git is much faster than Subversion ever was, and I can see some features as being useful to me eventually. Overall, though, what I use VCS for is pretty uninteresting, so I don't have much else to say about it.

I had a couple basic mental blocks that rjbs was able to explain away in a 20 minute talk he gave during our bi-weekly iteration meeting. It was quite productive. There are pictures.

Work has otherwise consisted of a lot of consolidation. I have finally reduced the number of horrible systems to two. Yes. Two. Both of which are slated for destruction in the next iteration. Not only that, I have found some poor sucker (hi, Cronin!) to take them all off our hands. Of course, they'll be upgrading from PIIIs, so...

I also cleaned up our racks. A lot. They are almost clean enough to post pictures of, though I'll wait until I've used up more of the six rolls of velcro Matt ordered before doing that.

Pretty soon we'll have nothing but Sun, a bit of IBM, and a very small number of SuperMicros. My plans are to move our mail storage from the existing SCSI arrays to a Sun J4200 (hopefully arriving this coming week). 6TB raw disk, and it eats 3.5" SATA disks, which are ridiculously cheap these days. I really, really wanted an Amber Road (aka OpenStorage) 7110, but at 2TB with the cost of 2.5" SAS, it was impossible to justify. If they sold a SATA version at the low end... there has been some noise about conversion kits for Thumpers, but that's also way outside our price range.

I doubt conversion support will become more common, but if I could turn one of our X4100s and the J4200 into an OpenStorage setup, I would be incredibly happy. If you haven't tried out the OpenStorage Simulator, I suggest you do so. Analytics is absolutely amazing.

People on zfs-discuss@ and #opensolaris have been talking about possible GSoC projects. I suggested a zpool/filesystem "interactive" attribute, or "ask before destroy." However you want to think of it. Someone else expanded on that, suggesting that -t be allowed to ensure that only specified resource types can be destroyed. I have yet to bone myself with a `zfs destroy` or `zpool destroy` but the day will come, and I will cry.

I see a pkgsrc upgrade in my near future. I've been working on linking all our Perl modules against it, and I want to get the rest of our internal code linking against it as well. It will make OS upgrades so, so much easier. Right now, most code is either linked to OS libraries or to an internal tree (most of which also links to OS libraries).

We've almost gotten rid of all our Debian 3.1 installs, which is... well. You know. Debian 5.0 just came out, and we've barely gotten moved to 4.0 yet. Getting the upgrade path there sorted out will thankfully just be tedious, and require nothing clever.

I really hope that the Cobbler guys get Debian partitioning down soon, and integrate some Solaris support. I tried redeploying FAI over Christmas and man, did it so not work out of the box. I used to use FAI, and was quite happy with it. I had to hack it up, but... it worked pretty well. Until it stopped.

If Cobbler had Solaris support, I would seriously consider moving our remaining Linux installs to CentOS. We use Puppet already, so in many ways Cobbler is a no-brainer. We are not really tied to any particular Linux distribution, and having all our infrastructure under a single management tool's ken would be really nice. To put it mildly.

30% curious about OpenSolaris's Automated Installer project, but it's so far off the radar as to be a ghost.

I picked up John Allspaw's The Art of Capacity Planning, and it's next on my book queue. Flipping through it makes me think it's going to be as useful as Theo S.'s Scalable Internet Architectures.

What with all my microblogging... well, anyway.

H is in Boston visiting her sister for the weekend, so I've been left to my own devices. Which seem to consist of many naps, baths, and lots of reading. I finished The Iron Dragon's Daughter, by Michael Swanwick. Nothing I'd recommend to anyone. Maybe to people who enjoy Laurell K. Hamilton and are looking for a poor gateway to better fiction. Started the sequel, The Dragons of Babel, which is much, much better.

Beyond that, I've done little. The XBOX 360 is still broken (bloody Microsoft) and I haven't had the mental power to get over to Gamestop to get it replaced yet. I should do that. H is no doubt missing rocking out, and I'm sort of interested in Dead Space after hearing rjbs and a few other guys on IRC talk about it for a couple weeks now.

Tonight I apparently missed out on Social Activities, thanks to napping and headphone usage. Phone being on mute probably didn't help much.

I made some edamame to snack on earlier (yum!) but should probably find something more akin to an actual meal. meh.

January 31, 2009

Today Harry and I were walking to Five Guys to get some lunch and an old woman and her grandson had just gotten nailed by the PPA. She was doing this Old World Damn the Man Dance on the crumpled up parking ticket.

Philadelphia was very colorful today.

January 28, 2009

Andy Zebrowitz
ahaha.

Bryan Allen
?

Andy Zebrowitz
Hang on. ^_^

Bryan Allen
uh.

Andy Zebrowitz
Shut up.
I already know what you're going to say.

Bryan Allen
What?

Andy Zebrowitz
The colon/semicolon key is broken, okay?

Bryan Allen
mm.

December 27, 2008
December 13, 2008

< bda> Dude.
< bda> So I'm in bed reading and hear buzzy flapping coming from my desk.
< bda> Like a moth on a lightbulb.
< bda> So I look at my lamp and there's some shit going down behind the bulb. I can see some movement through the holes in the back, by the switch.
< bda> So I wait for shit to resolve itself, thinking "A moth got caught back there? Weird."
< bda> But I don't see any wings or antenna, and it seems to have an awful lot of legs for a moth.
< bda> Finally it stops moving and I take a closer look, through the holes.
< bda> A fucking spider went into my lamp after a fly and fried itself!
< bda> Now I'm worried a fucking bird is going to fly into the damn thing!
< robf> I guess not every insect can be on the varsity team.

December 12, 2008
December 11, 2008

Lightning flashes through the hole in the dome. Creeper vines cover most of the ancient crystal, diffusing the crisp white tear into a mottled static. The heavy upper atmosphere gives the lightning a good five or six second lifespan. Its tendrils flash and dig through dense cloud.

Dim light filters through the creepers, the second moon something half-seen through clouds, hovering over the hole itself. That particular damage to the City had been caused by some form of bombardment; its origin, reasoning, lost to time and media corruption. Chunks of the dome litter the City, along with hundreds of years of other debris.

Carter ducks and rolls as the ground around him is lit, moving as quickly away from the exposed position as possible. Lots of folk might tend to freeze, let anyone trailing them get a scope on them, even in the sudden glare and burn. He'd done it. Didn't see why anyone else couldn't.

No shots rang out, but he honestly hadn't expected any to. His quarry was known to be more hands-on. More personal.

The breather mask covering his face obscures his vision; sound already tends to slow, distort. His thermal and microwave rigs have both already been destroyed or discarded during the long hunt. Against the android's senses, Carter is at a definite disadvantage.

Given the choice, he might give up. Crawl back down into the tunnels under the City, tell his employer he lost the android at the edge of the dome. It got out into the World, and while Carter is known to play it fast and loose, he followed some rules. No one went outside.

Given the choice.

Something that felt older and angrier, something cold and reptilian, something deeper and stronger than even the most basic of survival instincts, had taken Carter in its grip. Had sharpened him. The android had thrown Davis into a vantree; Carter had been half a mile away, but Davis' screams had carried, warbling and horrible in the heavy air.

Until the tree's slow mind noticed its gift, and Davis was silent.

They'd split up at Carter's suggestion, figuring the skinjob would be focused on escape. Instead it had doubled back, rallied against its hunters. Carter's fault Davis was slowly melting away in the trunk of some damn plague-twisted tree.

So revenge kept him up here, trying to track the runner. Oxy tank low, resources minimal. He tried not to think about the damage to his leg. The suit's drugs were taking care of most of the pain, and the armor had stiffened and frozen into a kind of brace. He could get along. Wouldn't give himself much in the way of odds if he did manage to catch up to the bastard again, though.

After Davis went down, Carter had lost his cool. He'd tried to tank the skinjob; it must have thought Davis had been the only tracker. Carter had a piece of the dome thrown into him for his trouble. He'd almost gotten out of the way, but not quite. He'd been limping since, and through the drugs, he was starting to feel bones grating together, somewhere deep in his leg. Somewhere close to an artery, the way his luck was going.

Problematic.

Good word to describe this job from the start.

November 12, 2008

Finally got around to doing a Jumpstart for 10/08 today. After one little hitch (u6 renames the cNdN devices in my X2100s to the more proper cNtNdN), it all worked as expected.

fdisk c1t0d0 solaris delete
fdisk c1t1d0 solaris delete

fdisk c1t0d0 solaris all
fdisk c1t1d0 solaris all

install_type initial_install
pool rpool auto auto auto mirror c1t0d0s0 c1t1d0s0
bootenv installbe bename sol10u6

Yay, ZFS root!

November 1, 2008

Solaris 10 10/08 (Update 6) was released yesterday. Release notes here.

I grabbed SPARC media and headed down to the colo yesterday to reinstall our T5120 (previously running b93). Fed the media in, consoled in via the SP, booted the system, and then left.

From much more comfortable environs, I got the system installed (service processors really are the best thing ever) without issue, and then, thanks to hilarity with my laptop, lost the randomized password I'd set for root. So whatever, I boot single-user and ... get asked for root's password. This is very similar to most Linux single-user boots these days, and more recently OpenSolaris.

I really, really didn't expect Solaris to follow suit. At least not for .. a while.

Very annoying. At dlg's suggestion, I tried booting -m milestone=none, but still had no joy. Ended up just booting cdrom -s and munging /etc/shadow that way.

Very annoying.
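For reference, the cdrom -s route was roughly this (the root BE dataset name here is made up):

ok boot cdrom -s
# ...
zpool import -R /a rpool
zfs mount rpool/ROOT/s10u6      # the root BE; mounts under /a
vi /a/etc/shadow                # empty out root's password field
zpool export rpool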

Anyway, having ZFS root in Solaris GA is pretty great. There are a number of really awesome features putback from Nevada this release, along with zfsboot. Check out the release notes. Good stuff.

UPDATE

Ceri Davies corrects me:

Just a note, because it sounds as if you think otherwise, that this behaviour has been present since at least update 3; ie. at least two years. You can turn it off by creating /etc/default/sulogin with the line PASSREQ=NO.

I don't recall seeing this behavior with u4 or u5, so evidently I am a crazy person. Thanks to Ceri for the info.

See sulogin(1M) for further details.

<kitten> sup.
<bda> Not much.
<bda> Workin'.
<kitten> On the weekend?
<bda> Everybody is.

October 31, 2008

< bda> Some girl in the elevator just slapped my ass and told me to get some.
< bda> A bunch of drunk Phillies fans.
< rjbs> That's what they wanted you to believe.
< bda> They were all wearing Phillies shirts, and were all very drunk, and asked me if I cared about the Phillies at all.
< bda> I believe they were drunk Phillies fans.
< bda> They were doing something illegal to a potted plant, too.
< rjbs> Right. They wanted you to believe they were drunk Phillies fans, so they put on Phillies shirts, talked about the Phillies, and affected a drunken demeanor.
< rjbs> Poor Bryan. So credulous.
< bda> And then she planted a tracking device on my butt?
< rjbs> no, that was just courtesy

Melissa suggested I should have asked her if she was volunteering. Why can't I think of these things when random drunk people are slapping my nethers?

October 28, 2008
September 10, 2008
September 7, 2008

Ben Folds with the Chamber Orchestra of Philadelphia last night, at the Mann Center.

Amazing.

Just amazing.

"Steven's Last Night in Town" pretty much requires an orchestra, really.

He played a couple new tracks last night, too, for the album dropping on the 30th. Definitely looking forward to it.

Even in the middle of a little hurricane, the place was packed. Everyone decked out in rain gear and ready to have an excellent time. And they did.

We ran into Nick and Mariah while looking for seats, which was hilarious though -- really -- not all that unexpected. (We hit up Doobies after with them, and ran into Gallo (Hi, Eric!), who is back in town and sounding very productive. Very excited for him.)

And, as H commented after the "three part harmony" section of the night, the Mann center has great acoustics.

Seriously. Last night was awesome. There's a reason H has seen Ben Folds twelve times now. I certainly hope I get to; I can't imagine it ever getting old.

September 5, 2008

* solios imagines that's Lud in the distance, squees

August 31, 2008

New Dr. Horrible content in the works, and the soundtrack is apparently soon to be released.

Huzzah!

August 27, 2008

Recently I moved our x86-64 pkgsrc build zone to another system. When I did so, I had forgotten I had built the original zone as full, to get around an annoying install(1M) bug. Basically, when you tried to build a package, it would attempt to recursively mkdir /usr/pkg. On sparse zones, /usr is shared read-only from the global zone.

So the install would fail, because it couldn't create /usr for obvious reasons. At the time, I thought I had tried various install programs, but given that the problem was being re-addressed and I didn't feel like reprovisioning a zone, I figured I would tackle it again.

After some minor discussion on #pkgsrc and grepping through mk/ I "discovered" the following variable:

TOOLS_PLATFORM.install?= /usr/pkg/bin/ginstall

Added to mk.conf and all is good. Mainly because ginstall actually uses mkdir -p, so...

The contents of pkgsrc/mk/platform/ are very useful if you aren't on NetBSD.