rsync will happily copy files between servers and keep the ownership and permissions the same. However, if you aren’t the owner of all of the files, then syncing ownership requires the rsync on the receiving end (which we’ll call server B) to be running as root. Likewise, on the sending server (server A), if we don’t own the files we might not be able to read them, and again need to be running as root.

root on server A is easy; we can just use sudo:

sudo rsync -av /var/www server-b.example:/var

root on server B requires a little more finesse: we need to override the remote rsync command to include sudo, which is done with the --rsync-path option:

sudo rsync -av --rsync-path="/usr/bin/sudo /usr/bin/rsync" /var/www server-b.example:/var

(This presumes you can run sudo without a password on server B, things get a little hairy if you can’t.)

Great. Done. But what if you can’t log into server B with a password?

I’m embarking on a long, long overdue project to organize my dotfiles: you know, all those files in your home directory that start with “.” and configure your software. (If you don’t, then this isn’t the post or the blog for you.)

There are lots of schemes for storing and distributing dotfiles, and I’ll get to that. But first, I need to clean up the mess of living in the UNIX shell for 30 years. Really.

My goals are twofold: 1) Any time I spin up a new server, I want to be able to deploy my preferred configuration out to it. 2) I want a complete copy of my local machine’s config settings to make it (relatively) simple to set up on a new machine.

These goals conflict.

In my occasional series on waiting for things, I set up a Bash function to wait for AWS CloudFront invalidations. I mentioned it would be possible to invalidate and wait in the same function, but was feeling lazy and left it to the reader. Well, I’ve gotten tired of the two-step process, so I’m being industrious so I can be lazy.
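As a minimal sketch of the combined step, something like this works with the AWS CLI: create the invalidation, capture its ID, then hand that to the CLI’s built-in waiter. The function name, variable names, and argument handling here are mine for illustration, not from the original post:

```shell
# Create a CloudFront invalidation and block until it completes.
# Assumes the AWS CLI is installed and configured with credentials.
# cf_invalidate_wait and its arguments are illustrative names.
cf_invalidate_wait() {
  local dist_id="$1"
  local path="${2:-/*}"          # default: invalidate everything
  local inv_id
  inv_id=$(aws cloudfront create-invalidation \
    --distribution-id "$dist_id" \
    --paths "$path" \
    --query 'Invalidation.Id' --output text) || return 1
  echo "Waiting on invalidation $inv_id ..."
  aws cloudfront wait invalidation-completed \
    --distribution-id "$dist_id" --id "$inv_id"
}
```

Called as `cf_invalidate_wait E2EXAMPLE123 "/index.html"`, it returns once CloudFront reports the invalidation finished.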

Here’s about as esoteric a post as I ever write: my love of pushd and its little-used directory stack. If you don’t live on the command line, move along, there is nothing to see here.
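For the uninitiated, a quick tour of the stack (these are bash builtins; the directories are arbitrary):

```shell
# pushd saves the current directory on a stack and cd's to the new one;
# popd pops the stack and returns you there; dirs -v prints the stack.
cd /tmp
pushd /usr > /dev/null   # stack is now: /usr /tmp
pushd /etc > /dev/null   # stack is now: /etc /usr /tmp
dirs -v                  # show the stack, one numbered entry per line
popd > /dev/null         # back to /usr
popd > /dev/null         # back to /tmp
```

The `> /dev/null` just suppresses pushd/popd’s habit of printing the whole stack after every call.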

If you have a lot of SSH keys loaded you may run into the dreaded:

Received disconnect from 2: Too many authentication failures for spike

This happens because the SSH client tries each key in order, until it finds one that works. The SSH server allows only so many authentication attempts before kicking the client to the curb (default 6, controlled by the MaxAuthTries setting). Fortunately, there’s a fix.
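One common fix (a sketch of the usual approach, not necessarily the exact one the post lands on) is to tell the client to offer only the key you name for that host, using IdentitiesOnly in ~/.ssh/config; the hostname and key filename here are placeholders:

```
# ~/.ssh/config: offer only this key to this host,
# instead of cycling through every key in the agent
Host server-b.example
    IdentityFile ~/.ssh/server-b_ed25519
    IdentitiesOnly yes
```

The same can be done ad hoc with `ssh -o IdentitiesOnly=yes -i ~/.ssh/server-b_ed25519 server-b.example`.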

A quickie today on leveraging “the cloud” for warm-ish spare servers.

I run a mix of physical and cloud-based servers. The cloud is convenient; however, in general, I prefer physical servers for lower cost (over time, anyway) and greater control. Of course, that means depending on hardware, upstream connectivity, data center power, etc.

I sometimes hedge my bets by keeping a backup copy of the server in AWS.
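As a sketch, the spare can be kept warm with a nightly rsync from cron; the hostname and path below are placeholders, and this assumes key-based SSH access to the AWS instance:

```
# crontab fragment (illustrative): push data to the AWS spare at 3am
0 3 * * * rsync -a --delete /var/www/ spare.example.aws:/var/www/
```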

It’s Boulder Startup Week. With so many tech startups in town, there’s a lot of focus on code and coders. How do you become a developer? What developer career paths are there? Where do I find work? Etc. Etc.

Coincidentally, my mom, who is pruning, sent me a copy of a letter I wrote more than 30 years ago now, when I was applying to colleges. In it I describe how I first learned to program. I had forgotten this story, but I think it’s relevant and worth sharing.