"That which is overdesigned, too highly specific, anticipates outcome; the anticipation of outcome guarantees, if not failure, the absence of grace."
-- William Gibson, All Tomorrow's Parties
Fighting with OS X backups.

I've been fighting with the OS X Server since we got it. Getting it backed up has proven to be damn near unpossible within the context of our current backup system.

This is an email I just threw together after I spent the weekend troubleshooting the machine and the various pieces of software we've mashed together.

So we're in a bit of a pickle here.

Our goal is to get data from sobek (OS X Server) to the O2000 (IRIX) with resource forks intact to be taped. We need full backups, incrementals, etc.

Unfortunately we can't go directly from sobek (or any OS X box) to the O2000, as Xinet's ktalk suite does not currently speak AFPv3 and there are apparently no plans for it (I was added to a "feature request" list). Xinet's suggestion was to export the directories to be backed up via NFS (on the OS X Server) to the target machine; reminding them that this would lose all resource information garnered no response.

My attempts to back up to a netatalk (1.6.4) server have thus far met with failure: OS X will kernel panic and reboot itself more or less randomly. At gig speeds, it takes somewhere between three and five hours for the panic to happen. At 100Mbit, it took seven. I haven't looked too closely into this (read: I have not sniffed network traffic and I have not taken kernel dumps), but I suspect (and some quick mailing list reading suggests) that an AFP server returning malformed replies will cause OS X (or at least its AFP subsystem) to crash. In this case, it's not just disconnecting the session, but taking the entire machine down. So at some point during the transfer, netatalk says something OS X's AFP implementation does not like, and OS X proceeds to die.

So what are our possible solutions?

1) hfstar will create tar files with the resource data intact. We can then just copy these to the backup staging box (which is mounted on the IRIX machine) and they can be taped. For incrementals, we can pipe rsync's change list into hfstar; the changed files get tar'd up by themselves. We can also keep track of which file is in which tarball in a rudimentary database. To do a restore, we would have to search this database for the file, determine which tarball it's in, then go to NetBackup on the IRIX box, pull that tarball off tape, restore it to the OS X box, expand it with hfstar, and dump it into the restore volume.
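
Roughly, the incremental pass would look something like this. Paths are made up, and I'm assuming hfstar kept GNU tar's -T (file list) flag and that our rsync's -v output filters like this; test before trusting any of it:

    #!/bin/sh
    # Rough sketch of option 1's incremental pass. Paths are examples;
    # assumes hfstar kept GNU tar's -T flag and that rsync's -v output
    # filters this way on our version.
    SRC=/Volumes/Production
    STAGE=/mnt/staging
    STAMP=`date +%Y%m%d`

    # Sync a plain-data mirror (change detection only; forks don't
    # matter here) and capture what rsync actually transferred,
    # stripping the header line, the summary, and directory entries.
    rsync -av "$SRC/" "$STAGE/mirror/" \
        | sed -e '1d' -e '/^$/,$d' \
        | grep -v '/$' > /tmp/changed.$STAMP

    # Tar up just the changed files, resource forks intact.
    (cd "$SRC" && hfstar -cf "$STAGE/incr_$STAMP.hfs.tar" -T /tmp/changed.$STAMP)

    # Rudimentary database: remember which tarball each file landed in.
    sed "s|^|incr_$STAMP.hfs.tar |" /tmp/changed.$STAMP >> "$STAGE/tar-index.txt"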

Most of this can be automated, especially if I write wrappers for the NetBackup CLI tools. (From the application used to interface with our database of tar file information: "Oh, you want to do a full restore from day Z? I'll need the last full dataset tarball and all the incrementals up to now! Click the appropriate button on this web page and I'll run off and do that.")
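
The "which tarballs do I need" logic is simple enough, assuming datestamped names like the ones above:

    #!/bin/sh
    # Which tarballs does a full restore need? The newest full plus
    # every incremental made after it. The full_/incr_ datestamped
    # naming scheme is an assumption.
    cd /mnt/staging
    LAST_FULL=`ls full_*.hfs.tar | tail -1`
    FULL_DATE=`echo "$LAST_FULL" | sed 's/full_\([0-9]*\).*/\1/'`

    echo "$LAST_FULL"
    for t in incr_*.hfs.tar; do
        [ -f "$t" ] || continue
        d=`echo "$t" | sed 's/incr_\([0-9]*\).*/\1/'`
        [ "$d" -gt "$FULL_DATE" ] && echo "$t"
    done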

2) OS X can write resource information to non-AFP/HFS volumes. It creates ._FILENAME files in the same directory as the file. These ._ files contain the same resource information that netatalk writes to its .AppleDouble directory and ktalk puts in its .HSResource files. The problem is that this is not a session-level function; it's entirely up to the application (be it cp, rsync_hfs, psync, or even Apple's own ditto command) to make the appropriate function call for the file. There is an API available to grab the resdata, encode it, and write it out to a file. The only app I've found so far (with limited research) that does any of this is Finder. I've tested it with a Samba server and it works fine... as long as you use Finder.
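
A quick way to convince yourself the ._ file actually carries the goods: compare the source's resource fork against what Finder wrote to the Samba mount. Panther exposes the fork at FILE/rsrc, and the ._ file is AppleDouble, so it's the fork plus a small header. (logo.eps and the paths are just stand-ins here.)

    # On the OS X box; logo.eps is a stand-in for any forked file.
    ls -l "/Volumes/Production/logo.eps/rsrc"
    ls -l "/Volumes/staging/mirror/._logo.eps"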

We could write a wrapper for rsync that, when it goes to copy a new or changed file, makes the appropriate function calls to create the ._ file in the target directory. I haven't actually looked at the API, but it can no doubt be accessed via AppleScript or Objective-C. The latter will be much faster, but will require more work for me (as I'm such an awful programmer).
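
Until that wrapper exists, shelling out to ditto should get much the same effect, since ditto can write ._ files to a non-AFP/HFS volume. A sketch, with made-up paths:

    #!/bin/sh
    # Crude stand-in for the real wrapper: ask rsync what changed,
    # then copy each file with ditto -rsrc, which writes the ._
    # AppleDouble file on non-HFS targets. Paths and the rsync output
    # filtering are examples; test on our rsync version first.
    SRC=/Volumes/Production
    DST=/mnt/staging/mirror

    rsync -anv "$SRC/" "$DST/" \
        | sed -e '1d' -e '/^$/,$d' \
        | while read f; do
              [ -f "$SRC/$f" ] || continue
              dir=`dirname "$DST/$f"`
              mkdir -p "$dir"
              ditto -rsrc "$SRC/$f" "$DST/$f"
          done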

We'll then have a complete mirror of the dataset on the backup staging box, and NetBackup can deal with it as it ordinarily does... the only thing to remember is that when you go to restore a single file, you'll need to restore the corresponding ._FILE to get the resource data back.

The nice thing about this method is that OS X natively understands it. If you copy something from a Samba server, Finder knows to grab the ._FILE and "put it back together" on the local filesystem (read: Shove all the resource information back into the appropriate place on the filesystem). Once the file is accessible, Finder Just Knows what to do.

Note that netatalk doesn't serve the information contained in ._FILE up via AFP, but it DOES hide the file. This means that once we have written to a non-AFP share, we cannot use netatalk to mount the volume back on the OS X box. If we do, we won't actually be able to access the resource fork contained in the ._FILE. So we will need to share the staging box's mirror volumes out via NFS.
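
If the staging box is running Linux (an assumption; adjust for whatever it actually runs), the export side is a couple of lines:

    # /etc/exports on the staging box; sobek is the OS X Server, and
    # the paths are illustrative. Read-write so the mirror and
    # restores can both land here.
    /export/mirror    sobek(rw,no_root_squash)
    /export/restore   sobek(rw,no_root_squash)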

If we go with this second option, we can continue to rely on NetBackup's own database for finding files.

Workflow with the second option will be something like:

- Mirror data from OS X to staging box via NFS or Samba. Custom app writes ._FILEs.
- The staging box's mirrors are mounted on the IRIX box via NFS. It tapes them like it would anything else.
- We restore directories (we can do single files as long as we remember to grab the corresponding ._FILE) to the staging box's "restore" volume, which is mounted via NFS.
- The staging box's "restore" volume is mounted on the OS X Server box via NFS or Samba, and we can then use Finder or our custom CLI tool to copy the file and the ._FILE from the volume back to the production folder. (We can also just mount the restore volume from the staging box (via Samba or NFS) and the target volume on the OS X Server (via AFP) and copy the file through a client machine. This will probably be easier, generally.)
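
In practice the copy-back step probably reduces to ditto again, assuming it recombines FILE and ._FILE from a non-HFS source the way Finder does; somebody should verify that before trusting it with production data:

    #!/bin/sh
    # Putting a restored file back together on the OS X Server. Paths
    # and logo.eps are examples; verify ditto actually recombines the
    # ._ file before relying on this.
    REST=/Volumes/restore          # staging box "restore" volume, via NFS
    PROD=/Volumes/Production/Jobs  # where the file belongs

    ditto -rsrc "$REST/Jobs/logo.eps" "$PROD/logo.eps"

    # Sanity check: the resource fork should be back (Panther syntax).
    ls -l "$PROD/logo.eps/rsrc"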

So which solution is better?

With the former, I can hack together something to generate the tarballs in a few minutes, and dumping the file information into a database is also a trivial matter. Writing a front-end to search the database is easy enough, and with a little more work I could automate the restore (read: "Go restore tarball_x.hfs.tar off of tape, put it in volume Y, and tell me when the file is in place. Okay, cool, I'll put the file you wanted in the restore volume now").
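
The NetBackup half of that is a couple of CLI calls. The flags here are from memory, so check bprestore(1) before believing any of it:

    #!/bin/sh
    # Sketch of the automated restore: ask NetBackup (on the IRIX box)
    # to pull a tarball back off tape, then unpack it on the OS X side.
    # bprestore usage is from memory; verify against the man page.
    TARBALL=/mnt/staging/incr_20041011.hfs.tar

    # Submit the restore; bprestore logs progress to the -L file, so a
    # real wrapper would poll that log until the restore completes.
    bprestore -L /tmp/restore.log "$TARBALL"

    # Then, on the OS X box (staging volume mounted via NFS):
    #   cd /Volumes/restore && hfstar -xf /Volumes/staging/incr_20041011.hfs.tar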

The first solution will also be something *relatively* close to what Archivist will eventually be. If I write the wrappers for the NetBackup CLI tools, it means you just go through one application to do all your restores.

The latter will require no front-end work. It's simply a matter of learning enough ObjC to do the calls to copy the file and create the ._FILE appropriately on the target filesystem.

We need to decide on a single solution, though. I don't want to get stuck with having the files in three different formats, and I'm sure Robert doesn't either.

I know a few OS X admins who read my nonsense, so hopefully they can shed some more light onto the problem.

I'm hoping I've missed something really obvious, though.

October 11, 2004 2:11 PM
Comments

Get a real OS.

Posted by: kitten at October 11, 2004 3:06 PM

Andy: How long did you give me shit for running Linux as a workstation? What OS do you prefer now? You're always a couple years behind my curve. Blow me.

Useful: http://developer.apple.com/documentation/MacOSX/Conceptual/SystemOverview/Finder/chapter_9_section_6.html

Posted by: bda at October 11, 2004 5:03 PM

Also useful: /Developer/Tools/CpMac

Posted by: bda at October 11, 2004 5:03 PM

CpMac sucks nuts.

ditto is useful in that it will actually write ._ files to a non-AFP/HFS volume.

Unfortunately it seems to be skipping any files that have hex or other weirdness in the filename.

It's always something around here.

Posted by: bda at October 11, 2004 8:33 PM

Do Samba servers have an API?

Posted by: raju at April 20, 2005 9:00 AM